====== Peanut Job Submission Cluster ======

Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).

For job submission we will be using a piece of software called [[https://slurm.schedmd.com|SLURM]].

SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.

===== Where to begin =====
SLURM is a set of command line utilities that can be accessed from **most** any Computer Science system you can log in to. Using our main shell servers (linux.cs.uchicago.edu) is expected to be our most common use case, so you should start there.

  ssh user@linux.cs.uchicago.edu

===== Mailing List =====
If you are going to be a user of this cluster please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[ https:// ]]

===== Documentation =====

  * [[http:// ]]
  * [[https:// ]]
  * [[http:// ]]
==== Hardware ====
Our cluster contains nodes with the following specs:

''
  * 16 Cores (2x 8core 3.1GHz Processors)
  * 64gb RAM
  * 2x 500GB SATA 7200RPM in RAID1

''
  * 24 Cores (2x 12core Intel Xeon Silver 4116 CPU @ 2.10GHz), 48 threads
  * 128gb RAM
  * OS: 2x 240GB Intel SSD in RAID1
  * /local: 2x 960GB Intel SSD RAID0

''
  * 6 Cores (Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz), 12 threads
  * 32gb RAM
  * OS: 1x 512gb SSD

''
  * 16 Cores (Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz), 32 threads
  * 128gb RAM
  * OS: 2x Samsung SSD 850 PRO 128GB
  * /local: ZFS mirror (2x Samsung SSD 850 PRO 1TB)
  * 2x Quadro P4000

''
  * 8 Cores (Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz), 16 threads
  * 64gb RAM
  * OS: 1x 1TB 7200RPM spinning disk
  * 4x GeForce GTX 1080 Ti
==== Storage ====
There is slow scratch space available to the cluster.

  * Files older than 90 days will be deleted automatically.
  * Scratch space is shared by all users.

=== Access ===
Scratch space is only mounted on nodes associated with the cluster. If you want to be able to transfer files to the scratch space you will want to run an interactive shell (described below).

  - You should only do a file transfer via the debug partition: ''srun -p debug --pty --mem 500 /bin/bash''
  - Now you can create a directory of your own: ''mkdir /scratch/$USER''

== Example ==

Request an interactive shell:
<code>user@linux1:~$ srun -p debug --pty --mem 500 /bin/bash</code>

Create a directory on the scratch partition if you don't already have one:
<code>user@slurm1:~$ mkdir -p /scratch/$USER</code>

Change into my scratch directory:
<code>user@slurm1:~$ cd /scratch/$USER</code>

Get the files I need:
<code>
user@slurm1:/scratch/user$ scp user@linux.cs.uchicago.edu:foo .
foo
</code>
Check that the file now exists:
<code>
user@slurm1:/scratch/user$ ls -l
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>

I can now exit my interactive shell.

== Performance is slow ==
This is expected. The maximum speed this server will ever be able to achieve is 1Gb/s because of its single 1G ethernet uplink. If this cluster gains in popularity we plan on upgrading this link.
==== Utilization Dashboard ====
==== Partitions / Queues ====
To find out what partitions we offer, check the table below or run ''sinfo''.

As of December 2015 we have at least two partitions in our cluster: 'debug' and 'general'.

^ Partition Name ^ Description ^
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. If you need to use the ''--exclusive'' flag it should be used here. |
| **fast** | 2019-12-02: 48 threads, 128GB RAM |
| **quadro** | 2019-12-02: 2x Quadro P4000. * |
| **pascal** | 2018-05-04: 1x Nvidia GTX1080. |
| **titan** | 2018-05-04: 4x Nvidia GTX1080Ti. * |

* This partition is shared and you MUST use the ''--gres'' option to request GPU resources (see "Using the GPU" below).
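
For instance, a small sketch of pointing a job at a partition (''hostname.job'' is the example script used later on this page):
<code>
# submit a batch job to the 'general' partition
sbatch -p general hostname.job

# or request an interactive shell on the 'debug' partition
srun -p debug --pty /bin/bash
</code>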

====== Job Submission ======
Jobs submitted to the cluster are run from the command line. Almost anything that you can run via the command line on any of our machines in our labs can be run on our job submission server agents.
^ ^ SLURM ^ Example ^
^ Submit a batch serial job | sbatch | sbatch runscript.sh |
^ Run a script interactively | srun | srun --pty script.sh |
^ Kill a job | scancel | scancel 4585 |
^ View status of queues | squeue | squeue -u cnetid |

===== Usage =====
Below are some common examples. You should consult the [[https://slurm.schedmd.com/documentation.html|documentation]] of SLURM if you need further assistance.

=== Default Quotas ===
By default we set a job to be run on one CPU and allocate 100MB of RAM. If you require more than that you should specify what you need using the following options: ''--mem-per-cpu'', ''--nodes'', ''--ntasks''.
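
For example, a minimal sketch (the program name is hypothetical) asking for more than the defaults:
<code>
# one node, four tasks, 2GB of RAM per CPU
srun --nodes=1 --ntasks=4 --mem-per-cpu=2000 ./my_program
</code>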

=== Exclusive access to a node ===
Add the ''--exclusive'' option to your job submission to ensure that no other jobs share your node.
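
A sketch of what that might look like, reusing the ''hostname.job'' batch script defined in the next section:
<code>
sbatch --exclusive -p general hostname.job
</code>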
=== Sample script ===
Make sure you create a directory in which to deposit the ''stdout'' and ''stderr'' files.
   mkdir -p $HOME/slurm/slurm_out

<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm/slurm_out
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:00

hostname
</code>

If any of the above options are confusing, consult the man page:
   man sbatch

Make sure to replace all instances of the word ''cnetid'' with your own CNetID.

=== Submitting job script ===
Using the above example you will want to place your tested code into a file. 'hostname.job' is the file name used in this example.
<code>
sbatch hostname.job
</code>

You can then check the status via ''squeue'' or see the output in the output directory specified in the script.
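
What checking the status might look like (the job id, node, and timing below are made up for illustration):
<code>
user@linux1:~$ squeue -u cnetid
  JOBID PARTITION     NAME     USER ST   TIME  NODES NODELIST(REASON)
   4585     debug check_ho   cnetid  R   0:03      1 slurm1
</code>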

==== srun ====
Used to submit a job to the cluster that doesn't necessarily need a script.
<code>
user@host:~$ srun -n2 hostname
slurm2
slurm2
</code>

To see the partitions and their nodes use ''sinfo'':
<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up    1:00:00      1   idle slurm1
fast         up 1-00:00:00      6   idle slurm[9-14]
general      up 21-00:00:0      7   idle slurm[2-8]
pascal       up 3-00:00:00      1   idle gpu2
quadro       up 3-00:00:00      1   idle gpu1
titan        up 3-00:00:00      1   idle gpu3
</code>

This command:
   srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash

will start a command line shell (''/bin/bash'') on the ''general'' partition with one CPU and 500MB of RAM for 6 hours.

====== Job Scheduling ======
We use a [[https://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling.

Common error messages:

^ Error ^ Explanation ^
| JOB <jobid> CANCELLED AT <time> DUE TO TIME LIMIT | Your job ran longer than the time limit you requested (or the partition's maximum), so the scheduler killed it. Request more time with the ''--time'' option. |
| error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges you may or may not have. Rather, it usually means that you have allocated more processors than one compute node actually has. |
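
If your job keeps hitting the time limit, request more time up front; for example (the values are illustrative):
<code>
# ask for 1 day and 12 hours of walltime
#SBATCH --time=1-12:00:00
</code>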

====== Using the GPU ======

===== GRES Multiple GPU's on one system =====
GRES: Generic Resource. As of 2018-05-04 these only include GPU's.

Jobs will not be allocated any generic resources unless specifically requested at job submit time using the ''--gres'' option supported by ''sbatch'' and ''srun''. For example:
<code>--gres=gpu:1</code>

Jobs will be allocated specific generic resources as needed to satisfy the request. If the job is suspended, those resources do not become available for use by other jobs.

Job steps can be allocated generic resources from those allocated to the job by using the ''--gres'' option with the ''srun'' command.

==== Ok, but I don't want to read the wall of text above ====
Fine.

The ''--gres=gpu:N'' option is what you want to add to your job submission, where N is the number of GPUs you need:

<code>
--gres=gpu:N
# Please try to limit yourself to one GPU per person.
</code>

Example when using tensorflow:

Given the file ''gpu_test.py'':
<code>
#!/usr/bin/env python
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code>

Here we can see that no GPU was allocated to us because we did not specify the ''--gres'' option:
<code>
user@bulldozer:~$ srun -p titan --pty /bin/bash
user@gpu3:~$ python gpu_test.py
user@gpu3:~$
</code>

If we request only 1 GPU:
<code>
user@bulldozer:~$ srun -p titan --gres=gpu:1 --pty /bin/bash
user@gpu3:~$ python gpu_test.py
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, ..."
</code>

If we request 2 GPUs:
<code>
user@bulldozer:~$ srun -p titan --gres=gpu:2 --pty /bin/bash
user@gpu3:~$ python gpu_test.py
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, ..."
physical_device_desc: "device: 1, name: GeForce GTX 1080 Ti, ..."
</code>

If we request more GPUs than are available:
<code>
kauffman3@bulldozer:~$ srun -p titan --gres=gpu:5 --pty /bin/bash
srun: error: Unable to allocate resources: Requested node configuration is not available
</code>

==== Cool, but how do I know where and what resources are available ====
Turns out the ''sinfo'' command can tell you:
<code>
$ sinfo -O partition,nodelist,features,gres
PARTITION           NODELIST            AVAIL_FEATURES      GRES
debug*              slurm1              (null)              (null)
fast                slurm[9-14]         (null)              (null)
general             slurm[2-8]          (null)              (null)
pascal              gpu2                pascal              gpu:gtx1080:1
quadro              gpu1                quadro              gpu:p4000:2
titan               gpu3                titan               gpu:gtx1080ti:4
</code>

FEATURES: Is actually just an arbitrary string in the configuration file that defines a node. However, techstaff hopes it actually provides some useful info.

GRES: Don't depend on this being accurate, however it will definitely give you a clue as to how many generic resources are in a partition.

==== Checking how many Generic RESources are being consumed ====

Simply use the ''squeue'' command:
<code>
$ squeue -O username,nodelist,gres
USER                NODELIST            GRES
someusername        gpu3                gpu:1
otherusername       gpu3                gpu:2
...
</code>

===== Environment Variables =====

==== CUDA_HOME, LD_LIBRARY_PATH ====

Please make sure you specify $CUDA_HOME and, if you want to take advantage of CUDNN libraries, append the CUDA lib64 directory to the $LD_LIBRARY_PATH environment variable:

  cuda_version=9.2
  export CUDA_HOME=/usr/local/cuda-${cuda_version}
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64

Currently we support the same versions of CUDA that the latest version of CUDNN supports. This is not written in stone and we can accommodate most other versions if required; just let techstaff know what your needs are.
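
A quick way to see which CUDA versions are installed on a node, assuming they live under /usr/local as in the example above (the output shown is hypothetical):
<code>
user@gpu3:~$ ls -d /usr/local/cuda-*
/usr/local/cuda-9.0  /usr/local/cuda-9.2
</code>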

==== PATH ====
You may also need to add the following to your ''$PATH'':

  export PATH=$PATH:/usr/local/cuda/bin

==== CUDA_VISIBLE_DEVICES ====
Do not set this variable. It will be set for you by SLURM.

The variable name is actually misleading: it does NOT mean the number of devices, but rather the physical device numbers assigned by the kernel (e.g. /dev/nvidia2).

For example: if you requested multiple GPUs from SLURM (''--gres=gpu:2''), the variable might be set to ''CUDA_VISIBLE_DEVICES=2,3'', meaning your job was assigned the third and fourth devices in the system.
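
A minimal sketch of seeing this from inside an interactive GPU job (the device numbers are illustrative):
<code>
user@bulldozer:~$ srun -p titan --gres=gpu:2 --pty /bin/bash
user@gpu3:~$ echo $CUDA_VISIBLE_DEVICES
2,3
</code>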

===== Example =====
This sbatch script will get device information from the installed Tesla GPU.
<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm/slurm_out
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info

export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

cat << EOF > /tmp/getinfo.cu
#include <stdio.h>

int main() {
  int nDevices;

  cudaGetDeviceCount(&nDevices);
  for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device Number: %d\n", i);
    printf("  Device name: %s\n", prop.name);
    printf("  Memory Clock Rate (KHz): %d\n", prop.memoryClockRate);
    printf("  Memory Bus Width (bits): %d\n", prop.memoryBusWidth);
    printf("  Peak Memory Bandwidth (GB/s): %f\n\n",
           2.0*prop.memoryClockRate*(prop.memoryBusWidth/8)/1.0e6);
  }
}
EOF

nvcc /tmp/getinfo.cu -o /tmp/a.out
/tmp/a.out
rm /tmp/a.out
rm /tmp/getinfo.cu
</code>
==== Output ====
STDOUT will look something like this:
<code>
cnetid@linux1:~$ cat $HOME/slurm/slurm_out/12567.gpu1.stdout
Device Number: 0
  Device name: Tesla M2090
  Memory Clock Rate (KHz): 1848000
  Memory Bus Width (bits): 384
  Peak Memory Bandwidth (GB/s): 177.408000
</code>
STDERR should be blank.

====== Feedback ======
If you feel this documentation is lacking in some way please let techstaff know. Email [[techstaff@cs.uchicago.edu]].

====== More ======
Sometimes other universities have documentation that is better in some areas:

  - [[ https:// ]]