slurm [2021/01/06 16:03] kauffman → slurm [2022/10/07 15:13] (current) borja
Slurm is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in Slurm parlance) that you designate. Below is an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.
  
===== Communication =====
==== Mailing List ====
If you are going to be a user of any of the CS Slurm clusters, please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[https://mailman.cs.uchicago.edu/cgi-bin/mailman/listinfo/slurm|Mailing List]]

==== Discord ====
There is a dedicated text channel ''%%#slurm%%'' on the UChicago CS Discord server. Please note that this Discord server is //only// for UChicago-affiliated users. You can find a link to our Discord server on the [[start|front page]] of this wiki.
  
===== Clusters =====

We have a couple of different clusters. If you don't know where to start, please use the ''%%Peanut%%'' cluster. The ''%%AI Cluster%%'' is for GPU jobs and more advanced users.

  * [[slurm:peanut|Peanut Cluster]]
  * [[slurm:ai|AI Cluster]]
  
==== Peanut Cluster ====

==== AI Cluster ====

To use this cluster there are specific nodes you need to log into. Please visit the dedicated AI cluster page for more information.
  ssh user@linux.cs.uchicago.edu
  
If you want to use the AI Cluster you will need to have previously requested access by sending in a ticket. Afterwards, you may log in to:
  
  ssh user@fe.ai.cs.uchicago.edu
Please read up on the specifics of the cluster you are interested in.
  
===== Documentation =====
Jobs submitted to the cluster are run from the command line. Almost anything that you can run via the command line on any of our machines in our labs can be run on our job submission server agents.
  
The job submission servers run the same software as you will find on our lab computers, but without the X environment.
  
You can submit jobs from the departmental computers that you have access to. You will not be able to access the job server agent directly.
We also have backfill turned on. This allows smaller jobs to sneak in while a larger, higher-priority job is waiting for nodes to free up. If your job can run in the amount of time it takes for the other job to get all the nodes it needs, Slurm will schedule you to run during that period. **This means knowing how long your code will run is very important, and it must be declared if you wish to leverage this feature. Otherwise the scheduler will assume you will use the maximum allowed time for the partition.**((https://rc.fas.harvard.edu/resources/running-jobs))
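For example, a run-time limit can be declared in your batch script with the ''%%--time%%'' directive (the 30-minute value below is only an illustration; pick a limit that safely covers your job):

<code>
#SBATCH --time=30:00   # 30 minutes; accepts MM:SS, HH:MM:SS, or D-HH:MM:SS
</code>

Declaring a limit shorter than the partition maximum makes your job a candidate for backfill.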
  
====== Array Jobs ======

Instead of submitting multiple jobs to repeat the same process for different data (e.g. getting results for different datasets for a paper) you can use a ''%%job array%%''.

<code>
#SBATCH --array=start-finish[:step][%maximum_concurrent]
</code>

Examples:
<code>
#SBATCH --array 0-15         0, 1, ..., 15
#SBATCH --array 1-3          1, 2, 3
#SBATCH --array 1,3,4,6      1, 3, 4, 6
#SBATCH --array 1-8:2        1, 3, 5, 7
#SBATCH --array 1-10:3%2     1, 4, 7, 10, but only two of these will ever run concurrently
</code>

You can differentiate the various tasks using the variable ''%%SLURM_ARRAY_TASK_ID%%''.

<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/out/%j.%N.stderr
#SBATCH --chdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:00
#SBATCH --array 0-3

# Bash arrays are zero-indexed, so task IDs 0-3 map directly onto the
# four datasets below.
input=("small_dataset" "medium_dataset" "large_dataset" "huge_dataset")
./process "${input[$SLURM_ARRAY_TASK_ID]}"
</code>

Additionally, array tasks can be combined with job dependencies and other fancy features. For more information, consult the [[https://slurm.schedmd.com/job_array.html|manual]].
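As a sketch of how dependencies combine with arrays (the job ID ''%%12345%%'' and the script names are hypothetical placeholders):

<code>
# Start post_process.sh only after every task of array job 12345
# has finished successfully:
sbatch --dependency=afterok:12345 post_process.sh

# Task N of this new array starts once task N of array job 12345
# has completed successfully:
sbatch --array=0-3 --dependency=aftercorr:12345 step2.sh
</code>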
  
====== Common Issues ======
Please make sure you specify $CUDA_HOME, and if you want to take advantage of CUDNN libraries you will need to append /usr/local/cuda-x.x/lib64 to the $LD_LIBRARY_PATH environment variable.
  
  cuda_version=11.1
  export CUDA_HOME=/usr/local/cuda-${cuda_version}
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64
The variable name is actually misleading: it does NOT mean the number of devices, but rather the physical device numbers assigned by the kernel (e.g. /dev/nvidia2).
  
For example: if you requested multiple GPUs from Slurm (--gres=gpu:2), the CUDA_VISIBLE_DEVICES variable should contain two numbers (0-3 in this case) separated by a comma (e.g. 0,1).

The numbering is relative and specific to your job. For example: two users, each with a job requiring two GPUs, could be assigned non-sequential physical GPUs. However, CUDA_VISIBLE_DEVICES will look like this for both users: 0,1
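A quick way to see this from inside a job script is to split the variable on commas (the value below is simulated for illustration; inside a real job Slurm exports it for you):

<code>
#!/bin/bash
# Simulated value; in a real Slurm job this is set automatically.
CUDA_VISIBLE_DEVICES="0,1"

# Split on commas to get one array entry per visible GPU.
IFS=',' read -ra gpus <<< "$CUDA_VISIBLE_DEVICES"
echo "This job sees ${#gpus[@]} GPU(s): ${gpus[*]}"
# prints: This job sees 2 GPU(s): 0 1
</code>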
  
  