techstaff:aicluster [2020/11/11 11:58] → [2020/12/10 12:57] kauffman [Batch]
  * Not enabled yet.
====== Login ======
A set of front-end (FE) nodes gives you access to the Slurm cluster. You will connect through these nodes and must be on them to submit jobs to the cluster.
    ssh
  * Requires a CS account.
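A connection might look like the following sketch. The hostname is a placeholder, not the real FE node name; ''%%ssh -G%%'' just prints the client options ssh would apply, without actually connecting, so you can sanity-check your username and config first:

```shell
# Placeholder hostname: substitute the actual FE node name for the cluster.
FE_HOST=fe.example.edu
# Print the user and hostname ssh would use, without connecting:
ssh -G "$FE_HOST" | grep -E '^(user|hostname) '
```

An actual login is then ''%%ssh yourusername@<FE node>%%'' with your CS account credentials.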
==== File Transfer ====
Use the FE nodes to transfer your files onto the cluster storage infrastructure. Each of these nodes has 2x 10G network connections.
=== Quota ===
  * By default, users are given a quota of 20G.
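As a quick sketch, you can check your usage from an FE node with standard tools; ''%%df%%'' and ''%%du%%'' are generally available, though the exact quota-reporting command depends on the storage setup:

```shell
# Show usage of the filesystem backing your home directory.
# (A dedicated quota command, if the cluster provides one, gives per-user numbers.)
df -h ~
# Total size of your home directory contents:
du -sh ~ 2>/dev/null
```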
====== Demo ======
</code>
==== Notes on CUDA_VISIBLE_DEVICES ====
''%%CUDA_VISIBLE_DEVICES%%'' holds the relative GPU device number(s) available to you.
  * This variable should NOT be modified. Ever.
  * "Relative" means that if you requested one GPU, it will show up as device 0, even if all the other GPUs on the server are being used by others.
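A small sketch of what this looks like inside a job that requested a single GPU. The value is simulated here, since on the cluster Slurm exports it for you:

```shell
# Simulate what Slurm exports for a 1-GPU allocation: the GPU you were
# granted is always presented as relative device 0.
export CUDA_VISIBLE_DEVICES=0
# Count the visible devices (comma-separated list, one entry per GPU):
NGPUS=$(echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | wc -l)
echo "GPUs visible: $NGPUS (first is device $CUDA_VISIBLE_DEVICES)"
```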
===== Fairshare/QOS =====
By default, all usage is tracked and charged to a user's default account. A fairshare value is computed and used to prioritize a job on submission.
Details are still being worked out for anyone who donates to the cluster. This will be some sort of tiered system in which you can use a higher priority when you need it.
You will need to charge an account on job submission with ''%%--account=<name>%%'', and most likely select the priority level you wish to use and are allowed to use: ''%%--qos=<level>%%''
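A batch script carrying those options might look like the following sketch. The account and QOS names here are hypothetical; substitute ones you are actually authorized to charge:

```shell
#!/bin/bash
# Hypothetical account and QOS names: replace with ones you may use.
#SBATCH --account=myproject
#SBATCH --qos=high
hostname
```

The same flags can instead be passed on the command line: ''%%sbatch --account=myproject --qos=high myjob.sbatch%%''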
</code>
===== Jupyter Notebook Tips =====
==== Batch ====
The process for a batch job is very similar.
<code>
NODEIP=$(hostname -i)
NODEPORT=$(( $RANDOM + 1024 ))
echo "ssh command: ssh -N -L 8888:$NODEIP:$NODEPORT `whoami`"
. ~/myenv/bin/activate
jupyter-notebook --ip=$NODEIP --port=$NODEPORT --no-browser
</code>
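The port arithmetic in the script can be checked on its own: ''%%$RANDOM%%'' is between 0 and 32767 in bash, so the computed port always lands in the unprivileged range 1024 to 33791.

```shell
# Sketch of the script's port selection: always above the privileged range.
NODEPORT=$(( RANDOM + 1024 ))
echo "Chosen port: $NODEPORT"
```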
Check the output of your job to find the ssh command to use when accessing your notebook.
Make a new ssh connection to tunnel your traffic. The format will be something like:
''%%ssh -N -L 8888:###.###.###.###:####%%''
This command will appear to hang, since the -N option tells ssh not to run any commands, including a shell, on the remote machine.
Open your local browser and visit ''%%http://localhost:8888%%''
==== Interactive ====
  - ''%%srun --pty bash%%'': run an interactive job
  - ''%%unset XDG_RUNTIME_DIR%%'': jupyter tries to use the value of this environment variable to store some files; by default it is set to an empty string, which causes errors when trying to run jupyter notebook.
  - ''%%export NODEIP=$(hostname -i)%%'': get the IP address of the node you are using
  - ''%%export NODEPORT=$(( $RANDOM + 1024 ))%%'': get a random port above 1024
  - ''%%echo $NODEIP:$NODEPORT%%'': echo the values to use later
  - ''%%jupyter-notebook --ip=$NODEIP --port=$NODEPORT --no-browser%%'': start the jupyter notebook
  - Make a new ssh connection with a tunnel to access your notebook:
  - ''%%ssh -N -L 8888:$NODEIP:$NODEPORT%%'': using the values, not the variables
  - This makes an ssh tunnel on your local machine that forwards traffic sent to ''%%localhost:8888%%'' to ''%%$NODEIP:$NODEPORT%%'' via the ssh tunnel. This command will appear to hang, since the -N option tells ssh not to run any commands, including a shell, on the remote machine.
  - Open your local browser and visit ''%%http://localhost:8888%%''
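Steps 2 through 5 above can be collected into a few lines to paste once ''%%srun%%'' gives you a shell (a sketch; launching jupyter is then step 6):

```shell
# Run these inside the interactive shell obtained from `srun --pty bash`.
unset XDG_RUNTIME_DIR                  # avoid jupyter's runtime-dir error
export NODEIP=$(hostname -i)           # the node's IP address
export NODEPORT=$(( RANDOM + 1024 ))   # random unprivileged port
echo "$NODEIP:$NODEPORT"               # note these for the ssh tunnel
```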
/var/lib/dokuwiki/data/pages/techstaff/aicluster.txt · Last modified: 2021/01/06 16:11 by kauffman