====== AI Cluster - Slurm ======
Please send in a ticket requesting to be added if this is your first time using the AI cluster.


Feedback is requested:

  * [[https://discord.gg/ZVjX8Gv|#slurm Discord channel]]


Most of the information from the older cluster still applies, so I suggest you read that documentation as well: https://howto.cs.uchicago.edu/slurm

====== Infrastructure ======
Summary of nodes installed on the cluster.

  * [[http://monitor.ai.cs.uchicago.edu|Ganglia Monitoring]]
  * [[https://monitor2.ai.cs.uchicago.edu|Grafana Graphs]]
    * Use ''%%guest%%'' as the username and password to login.

===== Compute/GPU Nodes =====
  * 6x nodes
    * 2x Xeon Gold 6130 CPU @ 2.10GHz (64 threads)
    * 192G RAM
    * 4x Nvidia GeForce RTX2080Ti

  * 2x nodes
    * 2x Xeon Gold 6130 CPU @ 2.10GHz (64 threads)
    * 384G RAM
    * 4x Nvidia Quadro RTX 8000

  * 3x nodes
    * 2x AMD EPYC 7302 16-Core Processor
    * 512G RAM
    * 4x Nvidia A40

  * all:
    * zfs mirror mounted at /local
      * compression set to lz4: usually this gives a performance gain, since less data is read from and written to disk, at the cost of a small overhead in CPU usage.
      * As of right now there is no mechanism to clean up /local. At some point I'll probably put a find command in cron that deletes files older than 90 days or so (roughly like the sketch below).
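
For reference, a rough sketch of the kind of cron cleanup mentioned above (nothing like this is in place yet; the 90-day window is only an example):
<code>
# Hypothetical cleanup job (not currently installed): delete files under /local
# not modified in the last 90 days, then remove any directories left empty.
find /local -xdev -type f -mtime +90 -delete
find /local -xdev -mindepth 1 -type d -empty -delete
</code>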

===== Storage =====

  * ai-storage1:
    * 41T total storage
    * uplink to cluster network: 2x 25G
    * /home/<username>
      * 20G quota per user.
    * /net/projects:
      * Lives on the home directory server.
      * The idea is to create a dataset with a quota for people to use.
      * Access to these directories is controlled by the normal LDAP groups you are used to and that are available everywhere else, e.g. jonaslab, sandlab.

  * ai-storage2:
    * 41T total storage
    * uplink to cluster network: 2x 25G
    * /net/scratch: Create yourself a directory /net/scratch/$USER (see the snippet after this list). Use it for whatever you want.
    * Eventually data will be auto-deleted after a set amount of time, probably 90 days or whatever we determine makes sense.

  * ai-storage3:
    * zfs mirror with previous snapshots of 'ai-storage1'.
    * NOT a backup.
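
As noted in the /net/scratch entry above, you create your scratch directory yourself; a minimal example (the ''%%chmod%%'' is optional and just keeps the directory private to you):
<code>
# One-time setup: create a personal scratch directory and restrict it to your user.
mkdir -p /net/scratch/$USER
chmod 700 /net/scratch/$USER
</code>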

====== Login ======

Anyone with a CS account who has previously sent in a ticket requesting to be added is allowed to log in.

A set of front-end nodes gives you access to the Slurm cluster. You will connect through these nodes and need to be on one of them to submit jobs to the cluster.

    ssh cnetid@fe.ai.cs.uchicago.edu
==== File Transfer ====
You will use the FE nodes to transfer your files onto the cluster storage infrastructure, for example with ''%%scp%%'' or ''%%rsync%%'' as in the sketch below. The network connections on those nodes are 2x 10G each.
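
A minimal sketch of copying data in from your local machine (the file names and destination paths are only examples):
<code>
# Copy a single file into your home directory on the cluster.
scp ./data.tar.gz cnetid@fe.ai.cs.uchicago.edu:~/

# Sync a whole directory into your scratch space; resumable and shows progress.
rsync -avP ./dataset/ cnetid@fe.ai.cs.uchicago.edu:/net/scratch/cnetid/dataset/
</code>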

=== Quota ===
  * By default, users are given a quota of 20G on their home directory.
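
To get a rough idea of how much of that you are using, a plain disk-usage check works (the exact quota-reporting command depends on the storage setup, so treat this only as a sketch):
<code>
# Show the total size of everything under your home directory.
du -sh $HOME
</code>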

====== Demo ======

kauffman3 is my CS test account.

<code>
$ ssh kauffman3@fe.ai.cs.uchicago.edu
</code>
I've created a couple of scripts that run some of the Slurm commands but with more useful output, ''%%cs-sinfo%%'' and ''%%cs-squeue%%'' being the only two right now.
<code>
kauffman3@fe01:~$ cs-sinfo
NODELIST    NODES  PARTITION  STATE  CPUS  S:C:T   MEMORY  TMP_DISK  WEIGHT  AVAIL_FEATURES                  REASON  GRES
a[001-006]  6      geforce*   idle   64    2:16:2  190000  0         1       'turing,geforce,rtx2080ti,11g'  none    gpu:rtx2080ti:4
a[007-008]  2      quadro     idle   64    2:16:2  383000  0         1       'turing,quadro,rtx8000,48g'     none    gpu:rtx8000:4
</code>
<code>
kauffman3@fe01:~$ cs-squeue
JOBID   PARTITION   USER           NAME                     NODELIST   TRES_PER_NODE   STATE   TIME
</code>
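
If you prefer the stock Slurm tools, roughly equivalent listings can be produced with format strings (the field selection below is just an example):
<code>
# Node listing similar to cs-sinfo: nodelist, partition, state, CPUs, memory, GRES.
sinfo -o "%N %P %t %c %m %G"

# Job listing similar to cs-squeue: job id, partition, user, name, nodelist, GRES, state, time.
squeue -o "%i %P %u %j %N %b %T %M"
</code>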

List the device numbers of the devices I've requested from Slurm. These numbers map to ''%%/dev/nvidia?%%''.
<code>
kauffman3@fe01:~$ cat ./show_cuda_devices.sh
#!/bin/bash
hostname
echo $CUDA_VISIBLE_DEVICES
</code>

Give me all four GPUs on systems 1-6:
<code>
kauffman3@fe01:~$ srun -p geforce --gres=gpu:4 -w a[001-006] ./show_cuda_devices.sh
a001
0,1,2,3
a002
0,1,2,3
a006
0,1,2,3
a005
0,1,2,3
a004
0,1,2,3
a003
0,1,2,3
</code>
Give me all GPUs on systems 7-8 (these are the Quadro RTX 8000s):
<code>
kauffman3@fe01:~$ srun -p quadro --gres=gpu:4 -w a[007-008] ./show_cuda_devices.sh
a008
0,1,2,3
a007
0,1,2,3
</code>

====== Asked Questions ======

> Do we have a max job runtime?

Yes, 4 hours. This is set per partition. You are expected to write your code to accommodate this limit (see the sbatch sketch after the partition definitions below).

<code>
PartitionName=geforce Nodes=a[001-006] Default=YES DefMemPerCPU=2900 MaxTime=04:00:00 State=UP Shared=YES
PartitionName=quadro  Nodes=a[007-008] Default=NO  DefMemPerCPU=5900 MaxTime=04:00:00 State=UP Shared=YES
</code>
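
For example, a minimal batch script that stays within that limit might look like this (the job name, GPU count, and ''%%./train.sh%%'' are placeholders):
<code>
#!/bin/bash
#SBATCH --job-name=example      # placeholder job name
#SBATCH --partition=geforce     # or "quadro" for the RTX 8000 nodes
#SBATCH --gres=gpu:1            # number of GPUs to request
#SBATCH --time=04:00:00         # must not exceed the partition MaxTime
#SBATCH --output=%j.out         # stdout/stderr written to <jobid>.out

# Your workload goes here; checkpoint regularly so it can resume in a
# follow-up job if it does not finish within the 4-hour limit.
srun ./train.sh
</code>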

===== Jupyter Notebook Tips =====
==== Batch ====
The process for a batch job is very similar to the interactive one described below.

jupyter-notebook.sbatch
<code>
#!/bin/bash
unset XDG_RUNTIME_DIR                # avoid errors from an empty XDG_RUNTIME_DIR
NODEIP=$(hostname -i)                # IP address of the compute node running the job
NODEPORT=$(( $RANDOM + 1024 ))       # random unprivileged port for the notebook
echo "ssh command: ssh -N -L 8888:$NODEIP:$NODEPORT `whoami`@fe01.ai.cs.uchicago.edu"
. ~/myenv/bin/activate               # activate your Python virtual environment (adjust the path)
jupyter-notebook --ip=$NODEIP --port=$NODEPORT --no-browser
</code>
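
If you do not already have a virtual environment with Jupyter installed, something along these lines sets one up and submits the job (''%%~/myenv%%'' and the resource requests are only examples):
<code>
# One-time setup: create a virtual environment and install Jupyter into it.
python3 -m venv ~/myenv
. ~/myenv/bin/activate
pip install notebook

# Submit the batch script, requesting one GPU on the geforce partition.
sbatch -p geforce --gres=gpu:1 jupyter-notebook.sbatch
</code>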

Check the output of your job to find the ssh command to use when accessing your notebook.

Make a new ssh connection to tunnel your traffic. The format will be something like:

''%%ssh -N -L 8888:###.###.###.###:#### user@fe01.ai.cs.uchicago.edu%%''

This command will appear to hang, since we are using the ''%%-N%%'' option, which tells ssh not to run any commands (not even a shell) on the remote machine.

Open your local browser and visit: ''%%http://localhost:8888%%''
==== Interactive ====
  - ''%%srun --pty bash%%'' run an interactive job (to request GPUs as well, see the sketch after this list)
  - ''%%unset XDG_RUNTIME_DIR%%'' Jupyter tries to use the value of this environment variable to store some files; by default it is set to ''<nowiki>''</nowiki>'' and that causes errors when trying to run jupyter notebook.
  - ''%%export NODEIP=$(hostname -i)%%'' get the IP address of the node you are using
  - ''%%export NODEPORT=$(( $RANDOM + 1024 ))%%'' get a random port above 1024
  - ''%%echo $NODEIP:$NODEPORT%%'' echo the values to use later
  - ''%%jupyter-notebook --ip=$NODEIP --port=$NODEPORT --no-browser%%'' start the Jupyter notebook
  - Make a new ssh connection with a tunnel to access your notebook
  - ''%%ssh -N -L 8888:$NODEIP:$NODEPORT user@fe01.ai.cs.uchicago.edu%%'' using the actual values from the echo above, not the variable names
  - This creates an ssh tunnel on your local machine that forwards traffic sent to ''%%localhost:8888%%'' to ''%%$NODEIP:$NODEPORT%%''. The command will appear to hang, since the ''%%-N%%'' option tells ssh not to run any commands (not even a shell) on the remote machine.
  - Open your local browser and visit: ''%%http://localhost:8888%%''
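
A sketch of step 1 that also requests a GPU (the partition, GPU count, and time limit are only examples):
<code>
# Start an interactive shell on a geforce node with one GPU for up to 4 hours.
srun -p geforce --gres=gpu:1 --time=04:00:00 --pty bash
</code>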

Or just copy the following code snippet and paste it directly into your shell on the interactive node:
<code>
unset XDG_RUNTIME_DIR
NODEIP=$(hostname -i)
NODEPORT=$(( $RANDOM + 1024 ))
echo "ssh command: ssh -N -L 8888:$NODEIP:$NODEPORT `whoami`@fe01.ai.cs.uchicago.edu"
jupyter-notebook --ip=$NODEIP --port=$NODEPORT --no-browser
</code>

===== Contribution Policy =====
This section can be ignored by most people. [[techstaff:aicluster-admin|If you contributed to the cluster, or are in a group that has, you can read more here]].