====== Slurm ======

Slurm is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in Slurm parlance) that you designate. Below is an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.

===== Communication =====

==== Mailing List ====
If you are going to be a user of any of the CS Slurm clusters, please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[ https:// ]]

==== Discord ====
There is a dedicated text channel for the Slurm clusters on the departmental Discord server.

===== Clusters =====

We have a couple of different clusters. If you don't know where to start, use the Peanut cluster.

  * [[slurm:peanut|Peanut Cluster]]
  * [[slurm:ai|AI Cluster]]

==== Peanut Cluster ====

==== AI Cluster ====
To use this cluster there are specific nodes you need to log into. Please visit the dedicated AI cluster page for more information.
- | |||
- | |||
- | |||
Line 31: | Line 43: | ||
ssh user@linux.cs.uchicago.edu | ssh user@linux.cs.uchicago.edu | ||
- | If you want to use the AI Cluster you will need to login into: | + | If you want to use the AI Cluster you will need to have previously requested access by sending in a ticket. Afterwards, you may login into: |
ssh user@fe.ai.cs.uchicago.edu | ssh user@fe.ai.cs.uchicago.edu | ||

Please read up on the specifics of the cluster you are interested in.

===== Documentation =====

  * [[https://slurm.schedmd.com|Official Slurm documentation]]

====== Job Submission ======
Jobs submitted to the cluster are run from the command line. Almost anything that you can run via the command line on any of our machines in our labs can be run on our job submission servers.

The job submission servers run the same software as you will find on our lab computers, but without the X environment.

You can submit jobs from the departmental computers that you have access to. You will not be able to access the job servers directly.
+ | |||
+ | ===== Command Summary ===== | ||
+ | [[http:// | ||
+ | | ^ Slurm ^ Example ^ | ||
+ | ^ Submit a batch serial job | sbatch | sbatch runscript.sh | | ||
+ | ^ Run a script interactively | srun | srun --pty -p interact -t 10 --mem 1000 \\ /bin/bash \\ / | ||
+ | ^ Kill a job | scancel | scancel 4585 | | ||
+ | ^ View status of queues | squeue | squeue -u cnetid | | ||
+ | ^ Check current job by id | sacct | sacct -j 999999 | | ||
+ | |||
+ | |||
+ | ===== Usage ===== | ||
+ | Below are some common examples. You should consult the [[http:// | ||
+ | |||
+ | === Default Quotas === | ||
+ | By default we set a job to be run on one CPU and allocate 100MB of RAM. If you require more than that you should specify what you need. Using the following options will do: '' | ||
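
A minimal sketch of a batch script that requests more than the defaults (the program name and the values are illustrative):

<code>
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4    # four CPUs instead of the default one
#SBATCH --mem-per-cpu=4000   # 4000MB of RAM per CPU instead of the default 100MB

./my_program                 # hypothetical multithreaded program
</code>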
+ | |||
+ | === Exclusive access to a node === | ||
+ | You will need to add the '' | ||
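
For example, a sketch of an interactive session that gets a whole node (the partition and time limit are illustrative):

<code>
srun -p general --exclusive -t 0-01:00 --pty /bin/bash
</code>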
+ | |||
+ | ==== sbatch ==== | ||
+ | The sbatch command is used for submitting jobs to the cluster. sbatch accepts a number of options either from the command line, or (more typically) from a batch script. An example of a Slurm batch script is shown below: | ||
+ | |||
+ | === Sample script === | ||
+ | Make sure you create a directory in which to deposit the '' | ||
+ | mkdir -p $HOME/ | ||
+ | |||
+ | < | ||
+ | #!/bin/bash | ||
+ | # | ||
+ | #SBATCH --mail-user=cnetid@cs.uchicago.edu | ||
+ | #SBATCH --mail-type=ALL | ||
+ | #SBATCH --output=/ | ||
+ | #SBATCH --error=/ | ||
+ | #SBATCH --chdir=/ | ||
+ | #SBATCH --partition=debug | ||
+ | #SBATCH --job-name=check_hostname_of_node | ||
+ | #SBATCH --nodes=1 | ||
+ | #SBATCH --ntasks=1 | ||
+ | #SBATCH --mem-per-cpu=500 | ||
+ | #SBATCH --time=15: | ||
+ | |||
+ | hostname | ||
+ | </ | ||
+ | |||
+ | If any of the above options are unclear as to what they do please check the man page for '' | ||
+ | man sbatch | ||
+ | |||
+ | Make sure to replace all instances of the word '' | ||
+ | |||
+ | === Submitting job script === | ||
+ | Using the above example you will want to place your tested code into a file. ' | ||
+ | < | ||
+ | sbatch hostname.job | ||
+ | </ | ||
+ | |||
+ | You can then check the status via squeue or see the output in the output directory ' | ||

==== srun ====
Used to submit a job to the cluster that doesn't necessarily need a batch script.
<code>
user@host:~$ srun -n2 hostname
slurm2
slurm2
</code>

''srun'' runs in the foreground: it blocks until the job has been scheduled and has finished, printing the job's output to your terminal.
+ | |||
+ | ==== squeue ==== | ||
+ | This command will show jobs in the queue. | ||
+ | |||
+ | < | ||
+ | user@host: | ||
+ | JOBID PARTITION | ||
+ | | ||
+ | </ | ||
+ | |||
+ | ==== scancel ==== | ||
+ | Cancel one of your own jobs. Please read the '' | ||
+ | |||
+ | < | ||
+ | scancel 29 | ||
+ | </ | ||
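
''scancel'' can also select jobs by user or state, which is handy for cleaning up many jobs at once:

<code>
scancel -u cnetid                  # cancel all of your own jobs
scancel -u cnetid --state=PENDING  # cancel only your jobs still waiting in the queue
</code>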
+ | |||
+ | ==== sinfo ==== | ||
+ | View information about Slurm nodes and partitions. | ||
+ | |||
+ | The following code block shows the what happens when you run the '' | ||
+ | < | ||
+ | user@host: | ||
+ | PARTITION AVAIL TIMELIMIT | ||
+ | debug* | ||
+ | fast up 1-00: | ||
+ | general | ||
+ | pascal | ||
+ | quadro | ||
+ | titan up 3-00: | ||
+ | </ | ||
+ | |||
+ | |||
+ | ====== Monitoring Jobs ====== | ||
+ | |||
+ | '' | ||
+ | |||
+ | Running '' | ||
+ | squeue -u cnetid | ||
+ | |||
+ | or for a particular job id. | ||
+ | squeue -j 7894 | ||
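
''squeue'' only covers pending and running jobs. Once a job has finished, you can query the accounting records with ''sacct''; the format fields below are a reasonable starting set:

<code>
sacct -j 7894 --format=JobID,JobName,Partition,State,Elapsed,MaxRSS
</code>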
+ | |||
+ | |||
+ | ====== Interactive Jobs ====== | ||
+ | |||
+ | Though batch submission is the best way to take full advantage of the compute power in the job submission cluster, foreground, interactive jobs can also be run. | ||
+ | |||
+ | An interactive job differs from a batch job in two important aspects: | ||
+ | - The partition to be used is the interact partition | ||
+ | - Jobs should be initiated with the srun command instead of sbatch. | ||
+ | |||
+ | This command: | ||
+ | srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash | ||
+ | will start a command line shell ('' | ||
+ | |||
+ | |||
+ | ====== Job Scheduling ====== | ||
+ | We use a [[http:// | ||
+ | |||
+ | We also have backfill turned on. This allows for jobs which are smaller to sneak in while a larger higher priority job is waiting for nodes to free up. If your job can run in the amount of time it takes for the other job to get all the nodes it needs, Slurm will schedule you to run during that period. **This means knowing how long your code will run for is very important and must be declared if you wish to leverage this feature. Otherwise the scheduler will just assume you will use the maximum allowed time for the partition when you run.**((https:// | ||
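
A sketch of declaring a run time so the scheduler can backfill your job (the one-hour limit is illustrative):

<code>
#SBATCH --time=01:00:00   # killed after 1 hour, but eligible to backfill into any gap at least that long
</code>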
+ | |||
+ | ====== Array Jobs ====== | ||
+ | |||
+ | Instead of submitting multiple jobs to repeat the same process for different data (e.g. getting results for different datasets for a paper) you can use a '' | ||
+ | |||
+ | < | ||
+ | #SBATCH start-finish[: | ||
+ | </ | ||
+ | |||
+ | Examples: | ||
+ | < | ||
+ | #SBATCH --array 0-15 0, 1, ..., 15 | ||
+ | #SBATCH --array 1-3 0, 1, 2, 3 | ||
+ | #SBATCH --array 1, | ||
+ | #SBATCH --array 1-8:2 1, 3, 5, 7 | ||
+ | #SBATCH --array 1-10: | ||
+ | </ | ||
+ | |||
+ | You can differentiate the various tasks using the variable '' | ||
+ | |||
+ | < | ||
+ | #!/bin/bash | ||
+ | # | ||
+ | #SBATCH --mail-user=cnetid@cs.uchicago.edu | ||
+ | #SBATCH --mail-type=ALL | ||
+ | #SBATCH --output=/ | ||
+ | #SBATCH --error=/ | ||
+ | #SBATCH --chdir=/ | ||
+ | #SBATCH --partition=debug | ||
+ | #SBATCH --job-name=check_hostname_of_node | ||
+ | #SBATCH --nodes=1 | ||
+ | #SBATCH --ntasks=1 | ||
+ | #SBATCH --mem-per-cpu=500 | ||
+ | #SBATCH --time=15: | ||
+ | #SBATCH --array 1-4 | ||
+ | |||
+ | input=(" | ||
+ | ./process $input[$SLURM_ARRAY_TASK_ID] | ||
+ | </ | ||
+ | |||
+ | Additionally, | ||
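
For example, with ''--array=0-3'' and a master job ID of 1234 (an illustrative number):

<code>
#SBATCH --output=/home/cnetid/slurm/slurm_out/%A_%a.stdout
# produces 1234_0.stdout, 1234_1.stdout, 1234_2.stdout, 1234_3.stdout
</code>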
+ | |||
+ | ====== Common Issues ====== | ||
+ | |||
+ | ^Error ^What does it mean? ^ | ||
+ | | JOB < | ||
+ | | Job < | ||
+ | | JOB < | ||
+ | | error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges you may or may not have. Rather, it usually means that you have allocated more processors than one compute node actually has. | | ||
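
For the last error, it helps to check how many CPUs and how much memory the nodes actually have before sizing a request; the ''-O'' field names below are standard ''sinfo'' format options:

<code>
sinfo -O nodehost,cpus,memory,partition
</code>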
+ | |||
+ | ====== Using the GPU ====== | ||
+ | |||
+ | ===== GRES Multiple GPU's on one system ===== | ||
+ | GRES: Generic Resource. As of 2018-05-04 these only include GPU's. | ||
+ | |||
+ | Jobs will not be allocated any generic resources unless specifically requested at job submit time using the '' | ||
+ | < | ||
+ | |||
+ | Jobs will be allocated specific generic resources as needed to satisfy the request. If the job is suspended, those resources do not become available for use by other jobs. | ||
+ | |||
+ | Job steps can be allocated generic resources from those allocated to the job using the '' | ||
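
A sketch, assuming two hypothetical programs ''task_a'' and ''task_b'': the job as a whole gets two GPUs, and each ''srun'' job step claims one of them.

<code>
#!/bin/bash
#SBATCH --partition=titan
#SBATCH --gres=gpu:2           # two GPUs allocated to the job as a whole

srun --gres=gpu:1 ./task_a &   # first job step uses one GPU
srun --gres=gpu:1 ./task_b &   # second job step uses the other
wait                           # wait for both steps to finish
</code>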
+ | |||
+ | ==== Ok, but I don't want to read the wall of text above ==== | ||
+ | Fine. | ||
+ | |||
+ | The '' | ||
+ | |||
+ | < | ||
+ | --gpu=gpu: | ||
+ | # Please try to limit yourself to one GPU per person. | ||
+ | </ | ||
+ | |||
+ | Example when using tensorflow: | ||
+ | |||
+ | Given the file '' | ||
+ | < | ||
+ | # | ||
+ | from tensorflow.python.client import device_lib | ||
+ | print(device_lib.list_local_devices()) | ||
+ | </ | ||
+ | |||
+ | Here we can see that no GPU was allocated to us because we did not specify the '' | ||
+ | < | ||
+ | user@bulldozer: | ||
+ | user@gpu3: | ||
+ | user@gpu3: | ||
+ | </ | ||
+ | |||
+ | If we request only 1 GPU. | ||
+ | < | ||
+ | user@bulldozer: | ||
+ | user@gpu3: | ||
+ | physical_device_desc: | ||
+ | </ | ||
+ | |||
+ | If we request 2 GPUs. | ||
+ | < | ||
+ | user@bulldozer: | ||
+ | user@gpu3: | ||
+ | physical_device_desc: | ||
+ | physical_device_desc: | ||
+ | </ | ||
+ | |||
+ | If we request more GPUs then are available. | ||
+ | < | ||
+ | kauffman3@bulldozer: | ||
+ | srun: error: Unable to allocate resources: Requested node configuration is not available | ||
+ | </ | ||
+ | |||
+ | ==== Cool, but how do I know where and what resources are available ==== | ||
+ | Turns out the '' | ||
+ | < | ||
+ | $ sinfo -O partition, | ||
+ | PARTITION | ||
+ | debug* | ||
+ | fast slurm[9-14] | ||
+ | general | ||
+ | pascal | ||
+ | quadro | ||
+ | titan | ||
+ | </ | ||
+ | |||
+ | FEATURES: Is actually just an arbitrary string in the configuration file that defines a node. However, techstaff hopes it actually provides some useful info. | ||
+ | |||
+ | GRES: Don't depend on this being accurate, however it will definitely give you a clue as to how many generic resources are in a partition. | ||
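
The FEATURES strings can be used to pin a job to nodes with a given feature via the ''--constraint'' option; the feature name below is illustrative, so substitute one that actually appears in your ''sinfo'' output:

<code>
sbatch --constraint=somefeature myjob.sh
</code>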
+ | |||
+ | |||
+ | ==== Checking how many Generic RESources are being consumed ==== | ||
+ | |||
+ | Simple use the '' | ||
+ | < | ||
+ | $ squeue -O username, | ||
+ | USER NODELIST | ||
+ | someusername | ||
+ | otherusername | ||
+ | ... | ||
+ | </ | ||
+ | |||
+ | |||
+ | ===== Environment Variables ===== | ||
+ | |||
+ | ==== CUDA_HOME, LD_LIBRARY_PATH ==== | ||
+ | |||
+ | Please make sure you specify $CUDA_HOME and if you want to take advantage of CUDNN libraries you will need to append / | ||
+ | |||
+ | cuda_version=11.1 | ||
+ | export CUDA_HOME=/ | ||
+ | export LD_LIBRARY_PATH=$LD_LIBRARY_PATH: | ||
+ | |||
+ | Currently we support the same versions of CUDA that the latest version of CUDNN supports. This is not written in stone and we can accommodate most other versions if required; just let techstaff know what your needs are. | ||
+ | |||
+ | ==== PATH ==== | ||
+ | You may also need to add the following to your '' | ||
+ | |||
+ | export PATH=$PATH:/ | ||
+ | |||
+ | ==== CUDA_VISIBLE_DEVICES ==== | ||
+ | Do not set this variable. It will be set for you by Slurm. | ||
+ | |||
+ | The variable name is actually misleading; since it does NOT mean the amount of devices, but rather the physical device number assigned by the kernel (e.g. / | ||
+ | |||
+ | For example: If you requested multiple gpu's from Slurm (--gres=gpu: | ||
+ | |||
+ | The numbering is relative and specific to you. For example: two users with one job which require two gpus each could be assigned non-sequential gpu numbers. However CUDA_VISIBLE_DEVICES will look like this for both users: 0,1 | ||
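
A quick way to see what Slurm sets for your job (the partition name is illustrative):

<code>
srun -p titan --gres=gpu:2 bash -c 'echo $CUDA_VISIBLE_DEVICES'
# typically prints something like: 0,1
</code>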
+ | |||
+ | |||
+ | |||
+ | ===== Example ===== | ||
+ | This sbatch script will get device information from the installed Tesla gpu. | ||
+ | < | ||
+ | #!/bin/bash | ||
+ | # | ||
+ | #SBATCH --mail-user=cnetid@cs.uchicago.edu | ||
+ | #SBATCH --mail-type=ALL | ||
+ | #SBATCH --output=/ | ||
+ | #SBATCH --error=/ | ||
+ | #SBATCH --workdir=/ | ||
+ | #SBATCH --partition=gpu | ||
+ | #SBATCH --job-name=get_tesla_info | ||
+ | |||
+ | export PATH=$PATH:/ | ||
+ | export LD_LIBRARY_PATH=$LD_LIBRARY_PATH=/ | ||
+ | |||
+ | cat << EOF > / | ||
+ | #include < | ||
+ | |||
+ | int main() { | ||
+ | int nDevices; | ||
+ | |||
+ | cudaGetDeviceCount(& | ||
+ | for (int i = 0; i < nDevices; i++) { | ||
+ | cudaDeviceProp prop; | ||
+ | cudaGetDeviceProperties(& | ||
+ | printf(" | ||
+ | printf(" | ||
+ | printf(" | ||
+ | | ||
+ | printf(" | ||
+ | | ||
+ | printf(" | ||
+ | | ||
+ | } | ||
+ | } | ||
+ | EOF | ||
+ | |||
+ | / | ||
+ | /tmp/a.out | ||
+ | rm /tmp/a.out | ||
+ | rm / | ||
+ | </ | ||
==== Output ====
STDOUT will look something like this:
<code>
cnetid@linux1:~$ cat /home/cnetid/slurm/slurm_out/<jobid>.<node>.stdout
Device Number: 0
  Device name: Tesla M2090
  Memory Clock Rate (KHz): 1848000
  Memory Bus Width (bits): 384
  Peak Memory Bandwidth (GB/s): 177.408000
</code>
STDERR should be blank.

====== Feedback ======
If you feel this documentation is lacking in some way please let techstaff know. Email [[techstaff@cs.uchicago.edu]].
+ | |||
+ | ====== More ====== | ||
+ | Sometimes other universities have documentation that is better in some areas. | ||
+ | - [[ https:// | ||
+ | - [[ https:// | ||