====== Peanut Job Submission Cluster ======
  
Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).
  
For job submission we will be using a piece of software called [[http://slurm.schedmd.com|Slurm]]. Simply put, Slurm is a queue management system; it was developed at the Lawrence Livermore National Lab. It currently supports some of the largest compute clusters in the world. The best description of Slurm can be found on its homepage:
  
 "Slurm is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First it allocates exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work."((http://slurm.schedmd.com/)) "Slurm is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First it allocates exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work."((http://slurm.schedmd.com/))
  
Slurm is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in Slurm parlance) that you designate. Below is an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.
  
  
  
===== Where to begin =====
Slurm is a set of command line utilities that can be accessed from **almost** any computer science system you can log in to. Using our main shell servers (linux.cs.uchicago.edu) is expected to be our most common use case, so you should start there.
  
  ssh user@linux.cs.uchicago.edu
  
===== Documentation =====
The [[http://slurm.schedmd.com/documentation.html|Slurm website]] should be your primary source for documentation.

The manual pages already installed on the cluster are a great way to get details on Slurm commands. For example, if you type the following command:
  man sbatch
you will get the manual page for the ''%%sbatch%%'' command.
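
The same approach works for the other Slurm utilities; a few you are likely to need (each of these man pages is installed along with Slurm):
<code bash>
man srun      # run interactive or parallel jobs
man squeue    # inspect the job queue
man scancel   # cancel a submitted job
man sinfo     # view nodes and partitions
</code>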
  
===== Resources =====
  * [[https://rc.fas.harvard.edu/resources/documentation/convenient-slurm-commands|Common Slurm commands]]
  * [[http://slurm.schedmd.com/|Official Slurm website]]
  * [[http://slurm.schedmd.com/documentation.html|Official Slurm documentation]]
  * [[http://slurm.schedmd.com/tutorials.html|Slurm tutorial videos]]
  * [[https://computing.llnl.gov/linux/slurm/quickstart.html|LLNL quick start user guide]]
  * [[http://research.computing.yale.edu/support/hpc/user-guide/slurm|Yale's User Guide]]
===== Command Summary =====
[[http://slurm.schedmd.com/pdfs/summary.pdf|Cheat Sheet]]
| ^ Slurm ^ Example ^
^ Submit a batch serial job | sbatch | sbatch runscript.sh |
^ Run a script interactively | srun | srun --pty -p interact -t 10 --mem 1000 \\ /bin/bash \\ /bin/hostname |
  
===== Usage =====
Below are some common examples. You should consult the Slurm [[http://slurm.schedmd.com/documentation.html|documentation]] if you need further assistance.
  
=== Default Quotas ===
  
==== sbatch ====
The sbatch command is used for submitting jobs to the cluster. sbatch accepts a number of options either from the command line, or (more typically) from a batch script. An example of a Slurm batch script is shown below:
  
=== Sample script ===
#SBATCH --output=/home/cnetid/slurm/out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/out/%j.%N.stderr
#SBATCH --chdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
  
==== sinfo ====
View information about Slurm nodes and partitions.
  
The following code block shows what happens when you run the ''%%sinfo%%'' command. You get a list of 'partitions' on which you can run your code. Each partition is composed of certain types of nodes. In the case below the default (denoted by a *) is 'debug'. Its job time limit is short and it is meant only for debugging your code. The other partitions will usually have a particular purpose in mind: 'hardware', for example, is to be used if you require direct access to the hardware instead of the KVM layer between the hardware and the OS.
====== Monitoring Jobs ======
  
''%%squeue%%'' and ''%%sacct%%'' are two different commands that allow you to monitor job activity in Slurm. ''%%squeue%%'' is the primary and most accurate monitoring tool since it queries the Slurm controller directly. ''%%sacct%%'' gives you similar information for running jobs, and can also report on previously finished jobs, but because it accesses the Slurm database, there are some circumstances when the information is not in sync with ''%%squeue%%''.
  
Running ''%%squeue%%'' without arguments will list all currently running jobs. It is more common, though, to list jobs for a particular user (like yourself) using the ''%%-u%%'' option...
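
For example, a couple of typical invocations (substitute your own CNetID for ''%%cnetid%%''; the state filter is optional):
<code bash>
squeue -u cnetid              # all of this user's jobs
squeue -u cnetid -t RUNNING   # only the jobs that are currently running
</code>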
  
====== Job Scheduling ======
We use a [[http://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling. Job priority is assigned by a combination of fair-share, partition priority, and the length of time a job has been sitting in the queue. The priority of the queue is the highest factor in the job priority calculation. For certain queues this will cause jobs on lower priority queues which overlap with that queue to be requeued. The second most important factor is the fair-share score. You can find a description of how Slurm calculates fair-share [[http://slurm.schedmd.com/priority_multifactor.html#fairshare|here]]. The third most important is how long your job has been sitting in the queue. The longer your job sits in the queue, the higher its priority grows. If everyone's priority is equal, FIFO is the scheduling method. To see your job's current priority, run ''%%sprio -j JOBID%%'', which shows the calculation used to arrive at your job's priority. Running ''%%sshare -u USERNAME%%'' shows your current fair-share and usage.((https://rc.fas.harvard.edu/resources/running-jobs))
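
For example, with a made-up job ID and your own username substituted:
<code bash>
sprio -j 123456     # show the factors that make up this job's priority
sshare -u cnetid    # show your current fair-share and usage
</code>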
  
We also have backfill turned on. This allows smaller jobs to sneak in while a larger, higher-priority job is waiting for nodes to free up. If your job can run in the amount of time it takes for the other job to get all the nodes it needs, Slurm will schedule you to run during that period. **This means knowing how long your code will run is very important, and it must be declared if you wish to leverage this feature. Otherwise the scheduler will just assume you will use the maximum allowed time for the partition when you run.**((https://rc.fas.harvard.edu/resources/running-jobs))
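
A minimal sketch of declaring a run time, either on the command line or inside the batch script (the two-hour value and the script name ''%%myjob.sh%%'' are only placeholders):
<code bash>
# on the command line
sbatch --time=02:00:00 myjob.sh

# or inside the batch script itself
#SBATCH --time=02:00:00
</code>
Slurm accepts several time formats, including ''%%minutes%%'', ''%%hours:minutes:seconds%%'', and ''%%days-hours:minutes:seconds%%''.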
  
  
| JOB <jobid> CANCELLED AT <time> DUE TO TIME LIMIT | You did not specify enough time for your job to run. The ''%%-t%%'' flag will allow you to set the time limit.|
| Job <jobid> exceeded <mem> memory limit, being killed | Your job is attempting to use more memory than you have requested for it. Either increase the amount of memory you have requested or reduce the amount of memory your application is trying to use.|
| JOB <jobid> CANCELLED AT <time> DUE TO NODE FAILURE | There can be many reasons for this message, but most often it means that the node your job was set to run on can no longer be contacted by the Slurm controller.|
| error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges you may or may not have. Rather, it usually means that you have requested more processors than a single compute node actually has. |
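
For the first two errors above, the usual fix is to request more time or memory when submitting; a sketch (the values and the script name are placeholders, and must stay within the limits of the partition as shown by ''%%sinfo%%''):
<code bash>
sbatch --time=04:00:00 --mem=4000 myjob.sh   # 4 hours, 4000 MB of memory
</code>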
  
====== Using the GPU ======
  
==== CUDA_VISIBLE_DEVICES ====
Do not set this variable. It will be set for you by Slurm.
  
The variable name is actually misleading, since it does NOT refer to the number of devices, but rather to the physical device numbers assigned by the kernel (e.g. /dev/nvidia2).
  
For example: if you requested multiple GPUs from Slurm (--gres=gpu:2), the CUDA_VISIBLE_DEVICES variable should contain two numbers (0-3 in this case) separated by a comma (e.g. 1,3).
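
A minimal sketch of a job script that requests two GPUs and prints the variable so you can see which devices were assigned (the partition name is a placeholder; check ''%%sinfo%%'' for the GPU partition on this cluster):
<code bash>
#!/bin/bash
#SBATCH --partition=gpu        # placeholder; use the cluster's GPU partition
#SBATCH --gres=gpu:2           # ask Slurm for two GPUs
#SBATCH --job-name=gpu_check

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
nvidia-smi -L                  # list the GPUs visible to this job
</code>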
  
  
Sometimes other universities have documentation that is better in some areas.
  
  - [[https://hpcc.usc.edu/support/documentation/slurm/|USC Slurm Docs]]
  - [[https://nesi.github.io/hpc_training/lessons/maui-and-mahuika/slurm|NESI Slurm Docs]]