===== Notice =====
**2017-08-31**: Configuration change to allow allocation
====== Peanut Job Submission Cluster ======

SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.
===== Where to begin =====
SSH into linux.cs.uchicago.edu:
''ssh user@linux.cs.uchicago.edu''
===== Mailing List =====
If you are going to be a user of this cluster, please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[ https://
===== Documentation =====
  * [[http://
  * [[https://
  * [[http://
  * 64GB RAM
  * 2x 500GB SATA 7200RPM in RAID1
==== Storage ====
Request an interactive shell, create a directory on the scratch partition if you don't already have one, and change into it (the commands are sketched below).

Get the files I need:
<code>
user@slurm1:/
foo
</code>
Check that the file now exists:
<code>
user@slurm1:/
-rw------- 1 user user 105121 Dec 29 2015 foo
</code>
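A minimal sketch of the full sequence, assuming the scratch filesystem is mounted at ''/scratch'' and using ''wget'' as a stand-in for whatever you use to fetch files (host names and paths are illustrative):
<code>
user@linux1:~$ srun --pty /bin/bash                        # request an interactive shell
user@slurm1:~$ mkdir -p /scratch/$USER                     # create a scratch directory
user@slurm1:~$ cd /scratch/$USER                           # change into it
user@slurm1:/scratch/user$ wget http://example.com/foo     # get the files I need
user@slurm1:/scratch/user$ ls -l foo                       # check that the file now exists
-rw------- 1 user user 105121 Dec 29 2015 foo
</code>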
== Performance is slow ==
This is expected. The maximum speed this server can achieve is 1Gb/s because of its single 1G Ethernet uplink. If this cluster gains in popularity, we plan to upgrade the network and storage server.
==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://
===== Usage =====
Below are some common examples. You should consult the [[http://

=== Default Quotas ===
By default we set a job to run on one CPU with 100MB of RAM allocated. If you require more than that, you should specify what you need using options such as ''--mem-per-cpu'' and ''--cpus-per-task'' (see the example below).
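For instance, a hypothetical job that needs four cores and 2GB of RAM per core could add these directives to its batch script (the numbers are purely illustrative):
<code>
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2000
</code>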
=== Exclusive access to a node ===
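If your job needs a whole node to itself, SLURM's standard option for this is ''--exclusive''; a minimal sketch:
<code>
#SBATCH --exclusive
</code>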
=== Sample script ===
Make sure you create a directory in which to deposit the output and error files from your job:
''mkdir -p $HOME/''
<code>
#!/bin/bash
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/
#SBATCH --error=/
#SBATCH --workdir=/
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:
</code>
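Once the script is saved (as ''hello.job'', say; the name is just an example), submit it and check on it with the standard SLURM commands:
<code>
sbatch hello.job    # submit the batch script to the queue manager
squeue -u $USER     # check the state of your job(s)
</code>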
To see the available partitions and their time limits, use ''sinfo'':
<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT
debug*
general
pascal
tesla      up    3-00:
</code>
''srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash''
will start a command line shell (''/bin/bash'') on the general partition with one CPU and 500MB of memory, for up to 6 hours.
====== Job Scheduling ======
We use a [[http://
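If the cluster uses SLURM's multifactor priority plugin (the usual setup for fair-share scheduling), you can inspect how priorities are being computed:
<code>
sprio -l          # show the priority components of pending jobs
sshare -u $USER   # show your fair-share usage
</code>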
====== Using the GPU ======

===== GRES: Multiple GPUs on one system =====
GRES: Generic Resource. As of 2018-05-04 these only include GPUs.

Jobs will not be allocated any generic resources unless specifically requested at job submit time using the ''--gres'' option.

Jobs will be allocated specific generic resources as needed to satisfy the request. If the job is suspended, those resources do not become available for use by other jobs.

Job steps can be allocated generic resources from those allocated to the job using the ''--gres'' option with the ''srun'' command, as sketched below.
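A minimal sketch of that job-step pattern, assuming a job allocated two GPUs and two placeholder programs (''./step_a'', ''./step_b'') that each need one:
<code>
#!/bin/bash
#SBATCH --gres=gpu:2            # the whole job gets two GPUs
srun --gres=gpu:1 ./step_a &    # each job step takes one of them
srun --gres=gpu:1 ./step_b &
wait                            # wait for both steps to finish
</code>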
==== Ok, but I don't want to read the wall of text above ====
Fine. The ''--gres'' option is what you need:

<code>
--gres=gpu:1
# Please try to limit yourself to one GPU per person.
</code>
Example when using TensorFlow:

Given a small test script that depends on ''tensorflow'' (and the CUDA libraries it needs) being importable:
<code>
#!/usr/bin/env python
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code>
Here we can see that no GPU was allocated to us because we did not specify the ''--gres'' option:
<code>
kauffman3@bulldozer:
kauffman3@gpu3:
kauffman3@gpu3:
</code>
If we request only 1 GPU:
<code>
kauffman3@bulldozer:
kauffman3@gpu3:
physical_device_desc:
</code>
If we request 2 GPUs:
<code>
kauffman3@bulldozer:
kauffman3@gpu3:
physical_device_desc:
physical_device_desc:
</code>
If we request more GPUs than are available:
<code>
kauffman3@bulldozer:
srun: error: Unable to allocate resources: Requested node configuration is not available
</code>
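The general pattern behind the sessions above looks like this; the partition name and script name are illustrative assumptions, not the exact values used:
<code>
kauffman3@bulldozer:~$ srun -p pascal --gres=gpu:1 --pty /bin/bash   # ask for one GPU and an interactive shell
kauffman3@gpu3:~$ python gpu_test.py                                 # list the devices TensorFlow can see
</code>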
==== Cool, but how do I know where and what resources are available ====
Turns out the ''sinfo'' command will tell you:
<code>
$ sinfo -O partition,
PARTITION
debug*
general
pascal
titan
</code>
FEATURES: This is just an arbitrary string in the configuration file that defines a node. However, techstaff hopes it actually provides some useful info.

GRES: Don't depend on this being accurate; however, it will give you a clue as to how many generic resources are in a partition.
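A plausible full form of the ''sinfo -O'' invocation above, using standard output-format field names (pick whichever columns you care about):
<code>
sinfo -O partition,features,gres,nodelist
</code>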
===== Paths =====
You will need to add the following to your $PATH and $LD_LIBRARY_PATH.
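A sketch of what that typically looks like, assuming the CUDA toolkit lives under ''/usr/local/cuda'' (the actual paths on the cluster may differ):
<code>
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
</code>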