Python Deploy

Deploy your Python project from your workstation or GPU terminal to the HPC cluster

If you have a Python project on your workstation or GPU terminal and you want to run it on the HPC cluster, here is a possible route. We use the UV tool mentioned here. You should have it installed on your workstation as well as on the HPC.

Starting from your workstation

The project we are going to transfer is called demo.

mkdir demo
cd demo
uv init
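
After initialization the project typically looks something like this (a sketch, not authoritative; the exact files created vary between uv releases):

```
demo/
├── .python-version
├── main.py
├── pyproject.toml
└── README.md
```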

uv init creates the important pyproject.toml file; together with uv.lock (generated later by commands such as uv add and uv lock), it records all the Python packages you use, pinned to exact versions. We will install PyTorch with CUDA that is compatible with the HPC.

uv venv --python 3.10 --seed

As an example we want a specific Python version, 3.10. You can leave the option out to get the most recent version; we include it to show that you can use any version.

It creates a virtual environment in .venv. That environment is specific to your workstation: you cannot transfer it to the HPC, because the HPC uses different dynamic libraries than your workstation!

Install PyTorch that is compatible with the HPC:

source .venv/bin/activate
uv pip install torch numpy --index-url https://download.pytorch.org/whl/cu126

We use this as a test Python script:

cat >python_example.py <<EOF
import sys
print('Python version:',sys.version)
import torch
print('Cuda Available:',torch.cuda.is_available())
a=torch.rand(4,4).cuda()
print('Tensor   a:', a)
print('Tensor a*a:', a*a)
EOF

You can run it like this:

uv run python_example.py

and you should get output similar to this:

Python version: 3.10.12 (main, Jan 26 2026, 14:55:28) [GCC 11.4.0]
Cuda Available: True
Tensor   a: tensor([[0.8439, 0.8614, 0.4781, 0.3829],
        [0.4672, 0.5785, 0.1146, 0.1895],
        [0.1689, 0.6048, 0.7143, 0.4082],
        [0.0947, 0.8511, 0.2742, 0.4674]], device='cuda:0')
Tensor a*a: tensor([[0.7121, 0.7421, 0.2286, 0.1466],
        [0.2182, 0.3346, 0.0131, 0.0359],
        [0.0285, 0.3658, 0.5102, 0.1666],
        [0.0090, 0.7243, 0.0752, 0.2185]], device='cuda:0')

Now we tell UV to record all the packages we have used in the pyproject.toml and uv.lock files. First add them:

uv add torch numpy markupsafe==3.0.2 --default-index https://download.pytorch.org/whl/cu126

The markupsafe pin was necessary: the resolver picked version 3.0.3, which does not work on Python 3.10. If you use a newer Python version, the pin is not needed. Now we make sure all the packages are recorded in the pyproject.toml and uv.lock files:

uv lock
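
After these steps, the dependency table in pyproject.toml should look roughly like this. Treat it as a sketch: the version constraints below are illustrative, and whether uv records the PyTorch index under a [[tool.uv.index]] entry depends on your uv version.

```toml
[project]
name = "demo"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "markupsafe==3.0.2",
    "numpy",
    "torch",
]

[[tool.uv.index]]
url = "https://download.pytorch.org/whl/cu126"
default = true
```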

Transfer files

You can copy the whole folder to the HPC login node using rsync. This way you can repeat the transfer whenever you have changed something on your workstation: only files that are missing or changed are transferred again. We assume you have already read the part about quota; because the $HOME directory is very limited, we copy the project into the data folder.

rsync -rvtl --delete --exclude .venv/ demo vsc<your_vsc_number>@login.hpc.ugent.be:data

(Replace <your_vsc_number> with the account number you were given.)

HPC interactive shell

Head over to https://login.hpc.ugent.be/ and start an interactive shell (tmux). Choose a cluster that has a GPU, e.g. joltik, accelgor or litleo. You can check the current load of the clusters to see which one is more suitable.

Sync

Now you can sync all the Python packages from your workstation. The Python files are already in data from the rsync step.

cd data/demo
uv sync --locked --python 3.10

Remember to specify the Python version if you pinned one. That's it: your .venv is recreated, this time specific to the HPC cluster. If a wheel has to compile CUDA code from source, you can module load CUDA/12.6.0 first.

You should now be able to run the example with the exact same Python package versions, but on the HPC, and see similar results:

uv run python_example.py

Congrats, you can now make a job file and queue your CUDA GPU job in the HPC system!
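
As a starting point, here is a minimal sketch of such a job file. The resource requests and the demo_job.sh name are illustrative assumptions; HPC-UGent job scripts traditionally use PBS-style directives submitted with qsub, so check the site documentation for the exact syntax your cluster expects.

```shell
# Write a minimal GPU job script (resource requests are illustrative;
# adjust cores, walltime and the GPU count to your needs).
cat > demo_job.sh <<'EOF'
#!/bin/bash
#PBS -N demo_gpu
#PBS -l nodes=1:ppn=8:gpus=1
#PBS -l walltime=01:00:00

# Jobs start in $HOME; move to the directory the job was submitted from.
cd $PBS_O_WORKDIR
uv run python_example.py
EOF
```

Submit it from the data/demo folder after selecting a GPU cluster, for example with module swap cluster/joltik followed by qsub demo_job.sh.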