We have already mentioned the limited disk space you get with the HPC setup. This is explained in https://telin.ugent.be/telin-docs/linux/hpc/vsc-account/#quota and https://telin.ugent.be/telin-docs/linux/hpc/gpu-jobs/#pytorch-example. However, it is also possible to mount disks from the TELIN servers and even from your workstation!
First log in to the HPC with ssh as vsc_id@login.hpc.ugent.be. We will install two tools and move them to a directory that is in your PATH. If you have not done this before, do this:
# make sure we have ~/.local/bin
mkdir -p ~/.local/bin 2>/dev/null
Execute this line and also add it to your ~/.bashrc:
export PATH=~/.local/bin:$PATH
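Since ~/.bashrc is sourced on every login, a guarded variant (a sketch, not required by these docs) avoids prepending the same directory over and over:

```shell
# add ~/.local/bin to PATH only if it is not already there
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;              # already present, do nothing
  *) PATH="$HOME/.local/bin:$PATH" ;;     # prepend once
esac
export PATH
```

Running this snippet twice leaves PATH unchanged the second time.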
wget https://github.com/rclone/rclone/releases/download/v1.73.0/rclone-v1.73.0-linux-amd64.zip
unzip rclone-v1.73.0-linux-amd64.zip
mv rclone-v1.73.0-linux-amd64/rclone ~/.local/bin
rm -rf rclone-v1.73.0-linux-amd64 rclone-v1.73.0-linux-amd64.zip
# rclone looks for fusermount3, so point it to the available fusermount
ln -s /usr/bin/fusermount ~/.local/bin/fusermount3
We don’t need the bore executable on the HPC right now, but it is a companion tool for later, if you want to expose a port from a node to the internet, e.g. a Jupyter notebook, …
wget https://github.com/ekzhang/bore/releases/download/v0.6.0/bore-v0.6.0-i686-unknown-linux-musl.tar.gz
tar zxvf bore-v0.6.0-i686-unknown-linux-musl.tar.gz
mv bore ~/.local/bin
rm bore-v0.6.0-i686-unknown-linux-musl.tar.gz
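The two downloads above follow the same pattern: fetch an archive, move the binary to ~/.local/bin, clean up. A small helper function (hypothetical, not part of these docs) captures it; here it is demonstrated with a locally created dummy tarball instead of a real release:

```shell
# install_tgz: extract one binary from a .tar.gz straight into ~/.local/bin
install_tgz() {
  tarball="$1"; binary="$2"
  mkdir -p "$HOME/.local/bin"
  tar zxf "$tarball" -C "$HOME/.local/bin" "$binary"
  chmod +x "$HOME/.local/bin/$binary"
}

# demo with a dummy tarball (stands in for e.g. bore-v0.6.0-...-musl.tar.gz)
tmp=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$tmp/mytool"
tar zcf "$tmp/mytool.tar.gz" -C "$tmp" mytool
install_tgz "$tmp/mytool.tar.gz" mytool
"$HOME/.local/bin/mytool"    # prints "ok"
rm -rf "$tmp"
```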
mkdir -p ~/.config/rclone
cat >> ~/.config/rclone/rclone.conf <<EOF
[ipids]
type = sftp
host = ipids.ugent.be
user = myusername
port = 8822
key_file = ~/.ssh/id_rsa
EOF
Replace myusername with your TELIN username (here and in the following). If you are not from the IPI group, replace ipids throughout this document with your workgroup server, e.g. gaimfs, and set the host to telin.ugent.be!
If you don’t have a ~/.ssh/id_rsa file on the HPC, you can create one like this:
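For example, a GAIM member would use an entry like the following (assuming the same SSH port 8822 also applies to telin.ugent.be; check with your workgroup if not):

```
[gaimfs]
type = sftp
host = telin.ugent.be
user = myusername
port = 8822
key_file = ~/.ssh/id_rsa
```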
ssh-keygen -t rsa -b 4096 -N ''
ssh-copy-id -p 8822 myusername@ipids.ugent.be
This should log you in:
ssh -p 8822 myusername@ipids.ugent.be
exit
As an example, we will mount ipids:/scratch:
MNT=/run/user/`id -u`/mnt
mkdir $MNT 2>/dev/null
rclone mount ipids:/scratch $MNT --daemon --log-file `tty`
You can now browse and copy the files on ipids via the $MNT location. Do not forget to unmount with this command when you are finished:
fusermount -u $MNT
You can use this setup in your job files as well; however, change the mount location to the /tmp directory (it is private to you on the nodes):
MNT=/tmp/mnt
mkdir $MNT 2>/dev/null
rclone mount ipids:/scratch $MNT --daemon --log-file /tmp/rclone.log   # no tty in a batch job
# ...run some code...
fusermount -u $MNT
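If the code in between crashes, the unmount never runs. A shell trap performs the cleanup on any exit; the pattern below demonstrates it with a plain directory as a stand-in for the mount (in a real job, the trap body would be fusermount -u "$MNT" instead of rmdir):

```shell
MNT=/tmp/mnt-demo
(
  mkdir -p "$MNT"
  # cleanup runs when the subshell exits, even after a failure
  trap 'rmdir "$MNT" 2>/dev/null' EXIT     # real job: fusermount -u "$MNT"
  # ...run some code...
  echo "working in $MNT"
)
# after the subshell exits, $MNT has been cleaned up again
```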
As explained in this section https://telin.ugent.be/telin-docs/general/tunnel/, we can use this setup to make your workstation disk available on the HPC nodes. First install bore on your workstation and make your disk reachable:
# on your workstation
# replace ipids.ugent.be with telin.ugent.be if you are not from the IPI group
BORE_SECRET=xxxxxx bore local 22 --to ipids.ugent.be
Write down the port number from the log, e.g. 9916 in this case:
2026-02-18T11:51:34.008169Z INFO bore_cli::client: connected to server remote_port=9916
2026-02-18T11:51:34.008184Z INFO bore_cli::client: listening at ipids.ugent.be:9916
...
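If you script this step, the remote port can be extracted from bore’s log automatically. A sketch, using the log line shown above as sample input (in practice you would read bore’s actual log output):

```shell
# sample bore log line, as captured above
log='2026-02-18T11:51:34.008169Z INFO bore_cli::client: connected to server remote_port=9916'

# pull out the number after "remote_port="
port=$(printf '%s\n' "$log" | sed -n 's/.*remote_port=\([0-9]*\).*/\1/p')
echo "$port"    # prints "9916"
```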
Head over to the HPC and first check that you can log in:
ssh-copy-id -p 9916 myusername@ipids.ugent.be
ssh -p 9916 myusername@ipids.ugent.be
You should be logged into your workstation from the HPC!
Add this to your rclone.conf:
cat >> ~/.config/rclone/rclone.conf <<EOF
[workstation]
type = sftp
host = ipids.ugent.be
user = myusername
port = 9916
key_file = ~/.ssh/id_rsa
EOF
You can now mount any directory from your workstation on the HPC!
rclone mount workstation:/scratch $MNT --daemon --log-file `tty`
By default, rclone only gives read/write permissions on the mounted drive. There are extra options to enable symlinks and execute permissions, and there is also a cache feature. This is the longer version:
rclone mount workstation:/scratch $MNT --daemon --log-file `tty` --links --file-perms 0777 --vfs-cache-mode full --vfs-cache-max-size 8G --rc
rclone rc vfs/stats
With this you can watch how many written files are still waiting in the cache to be flushed (uploaded).