====== Peanut Job Submission Cluster ======

Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).
For job submission we will be using a piece of software called [[https://slurm.schedmd.com|Slurm]].
"Slurm is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First it allocates exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work." | "Slurm is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First it allocates exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work." | ||
- | SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress. | + | Slurm is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in Slurm parlance) that you designate. Below is an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress. |
===== Where to begin ===== | ===== Where to begin ===== | ||
Slurm is a set of command line utilities that can be accessed via the command line from **most** any computer science system you can log in to. Using our main shell servers (linux.cs.uchicago.edu) is expected to be our most common use case, so you should start there.
  ssh user@linux.cs.uchicago.edu
===== Documentation =====
The [[https://slurm.schedmd.com/documentation.html|official Slurm documentation]] covers all of the commands in detail.
A great way to get details on Slurm commands is the manual pages that are already on the cluster. For example, if you type the following command:
  man sbatch
you will get the manual page for the ''sbatch'' command.
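
The same trick works for the other client commands; these are all standard Slurm utilities with their own manual pages:
<code bash>
# Manual pages for the Slurm commands used throughout this page
man sbatch    # submit batch jobs
man srun      # launch interactive or parallel tasks
man squeue    # inspect the job queue
man scancel   # cancel jobs
man sinfo     # show node and partition state
</code>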
  * 2x 500GB SATA 7200RPM in RAID1

''
  * 24 Cores (2x 12-core Intel Xeon Silver 4116 CPU @ 2.10GHz), 48 threads
  * 128GB RAM
  * /local: 2x 960GB Intel SSD RAID0

''
  * 6 Cores (Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz), 12 threads
  * 32GB RAM
  * OS: 1x 512GB SSD

''
  * 16 Cores (2x Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz), 32 threads
  * 128GB RAM
  * OS: 2x Samsung SSD 850 PRO 128GB
  * /local: ZFS mirror (2x Samsung SSD 850 PRO 1TB)
  * 2x Quadro P4000

''
  * 8 Cores (Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz), 16 threads
  * 64GB RAM
  * OS: 1x 1TB 7200RPM spinning disk
  * 4x GeForce GTX 1080 Ti
==== Storage ====
There is slow scratch space mounted to ''
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. If you need to use the '' |
| **fast** | 2019-12-02: 48 threads, 128GB RAM |
| **quadro** | 2019-12-02: 2x Quadro P4000. * |
| **pascal** | 2018-05-04: 1x Nvidia GTX1080. |
| **titan** | 2018-05-04: 4x Nvidia GTX1080Ti. |
* This partition is shared and you MUST use the ''--gres'' option to request GPU resources.
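
To target a specific partition, name it when you submit. A minimal sketch (''myjob.sbatch'' is just a placeholder for your own batch script):
<code bash>
# Submit a tested job to the general partition
sbatch -p general myjob.sbatch

# Submit to the fast partition instead
sbatch -p fast myjob.sbatch

# GPU partitions are shared, so also request the GPU itself with --gres
sbatch -p titan --gres=gpu:1 myjob.sbatch
</code>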
====== Job Submission ======

Jobs submitted to the cluster are run from the command line. Almost anything that you can run via the command line on any of our machines in our labs can be run on our job submission server agents.
===== Command Summary =====
|  ^ Slurm ^ Example ^
^ Submit a batch serial job | sbatch | sbatch runscript.sh |
^ Run a script interactively | srun | srun --pty -p interact -t 10 --mem 1000 \\ /bin/bash |
===== Usage =====
Below are some common examples. You should consult the [[https://slurm.schedmd.com/documentation.html|documentation]] for more details.
=== Default Quotas ===
==== sbatch ====
The ''sbatch'' command is used for submitting jobs to the cluster. ''sbatch'' accepts a number of options either from the command line, or (more typically) from a batch script. An example of a Slurm batch script is shown below:
=== Sample script ===
#SBATCH --output=/
#SBATCH --error=/
#SBATCH --chdir=/
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
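# The directives above would normally be followed by any remaining resource
# requests and the command(s) the job should run. A hypothetical continuation
# (the values below are only examples, not site defaults):
#SBATCH --ntasks=1
#SBATCH --time=05:00

hostname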
<code>
user@host:
slurm2
slurm2
</code>
==== sinfo ====

View information about Slurm nodes and partitions.
The following code block shows what happens when you run the ''sinfo'' command:
user@host:~$ sinfo
PARTITION AVAIL TIMELIMIT
debug*
fast up 1-00:00:00
general
pascal
quadro
titan up 3-00:00:00
</code>
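
''sinfo'' also takes standard filtering and formatting options when you only care about part of the cluster; for example:
<code bash>
# Only the general partition
sinfo -p general

# Per-node detail (state, CPUs, memory) across all partitions
sinfo -N -l
</code>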
====== Monitoring Jobs ======
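
''squeue'' and ''scancel'' are the standard Slurm commands for checking on and cancelling jobs; a few common invocations (''<jobid>'' is a placeholder for your job's numeric ID):
<code bash>
# All of your own jobs
squeue -u $USER

# One specific job, including its estimated start time while it is pending
squeue -j <jobid> --start

# Cancel a job you no longer need
scancel <jobid>
</code>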
====== Job Scheduling ======
We use a [[https://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling.
We also have backfill turned on. This allows smaller jobs to sneak in while a larger, higher-priority job is waiting for nodes to free up. If your job can run in the amount of time it takes for the other job to get all the nodes it needs, Slurm will let it run in the meantime.
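
In practice this means a realistic (not padded) ''--time'' limit makes your job a better backfill candidate; for example (''myjob.sbatch'' is a placeholder):
<code bash>
# A tight, honest time limit lets the scheduler see that this job can
# finish before the larger waiting job would start
sbatch --time=30:00 -p general myjob.sbatch
</code>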
| error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges; the job most likely asked for more CPUs than the nodes in the requested partition provide. |
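
When you hit the "More processors requested than permitted" error, compare what you asked for with what the partition's nodes actually provide; both commands below are stock Slurm tools:
<code bash>
# CPUs and memory available per node, by partition
sinfo -o "%P %N %c %m"

# Limits for a single partition (max nodes, max time, etc.)
scontrol show partition general
</code>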
====== Using the GPU ======
<code>
$ sinfo -O partition,
PARTITION
debug*
fast slurm[9-14]
general
pascal
quadro
titan
</code>
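
On the shared GPU partitions you must also request the GPU itself with ''--gres''; a minimal sketch (''myjob.sbatch'' is a placeholder):
<code bash>
# Interactive shell with one GPU on the titan partition
srun -p titan --gres=gpu:1 --pty /bin/bash

# Batch job requesting two GPUs
sbatch -p titan --gres=gpu:2 myjob.sbatch
</code>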
==== CUDA_VISIBLE_DEVICES ====

Do not set this variable. It will be set for you by Slurm.
The variable name is actually misleading, since it does NOT mean the number of devices, but rather the physical device number assigned by the kernel (e.g. /dev/nvidia2).
For example: if you requested multiple GPUs from Slurm (''--gres=gpu:2''), CUDA_VISIBLE_DEVICES will contain two comma-separated device numbers (e.g. ''1,3''), not the number of devices you requested.
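
You can confirm what was assigned by printing the variable from inside an allocation; for example:
<code bash>
# Print the GPU device numbers Slurm assigned to this job
srun -p titan --gres=gpu:2 bash -c 'echo $CUDA_VISIBLE_DEVICES'
# Expect two comma-separated device numbers (e.g. 1,3), not the count "2"
</code>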
</code>
STDERR should be blank.
====== Feedback ======
If you feel this documentation is lacking in some way, please let techstaff know. Email [[techstaff@cs.uchicago.edu]].
+ | |||
+ | ====== More ====== | ||
+ | Sometimes other universities have documentation that is better in some areas. | ||
+ | |||
+ | - [[ https:// | ||
+ | - [[ https:// |