===== Mailing List =====
If you are going to be a user of this cluster, please sign up for the mailing list. Downtime and other relevant announcements will be posted there.

[[ https:// ]]
Request an interactive shell:
<code>
srun --pty /bin/bash   # minimal form; add partition/resource options as needed
</code>
+ | |||
+ | Create a directory on the scratch partition if you don't already have one: | ||
+ | < | ||
Change into my scratch directory:
<code>
cd /scratch/$USER
</code>
Get the files I need:
<code>
user@slurm1:/scratch/user$ scp remotehost:foo .   # remotehost is a placeholder for wherever your files live
foo
</code>
Check that the file now exists:
<code>
user@slurm1:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>
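Putting the steps above together: a minimal sketch of a batch script that works out of scratch. The ''debug'' partition, the scratch path, and ''my_program'' are assumptions; substitute your own.
<code>
#!/bin/bash
#SBATCH --job-name=scratch-example
#SBATCH --partition=debug

# Work out of the scratch directory created above.
cd /scratch/$USER

# my_program is a placeholder for your own executable; foo was copied in earlier.
./my_program foo
</code>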
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. If you need to use the ''--exclusive'' flag, it should be used here. |
| **pascal** | 2018-05-04: 1x Nvidia GTX1080. You will be forced to use this server exclusively for now. Please keep your time in interactive mode to a minimum. |
| **titan** | 2018-05-04: 4x Nvidia GTX1080Ti. This partition is shared and you MUST use the ''--gres'' option to request GPUs (see the sketch below). |
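As a quick sketch of how partition selection looks in practice (the resource requests are assumptions; adjust to your job):
<code>
# Test on the default debug partition first:
srun -p debug --pty /bin/bash

# Submit the tested job to general:
sbatch -p general runscript.sh

# GPU partitions require an explicit GRES request:
srun -p titan --gres=gpu:1 --pty /bin/bash
</code>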
====== Job Submission ======
^ Task ^ SLURM ^ Example ^
^ Submit a batch serial job | sbatch | sbatch runscript.sh (see the example script below) |
^ Run a script interactively | srun | srun --pty /bin/bash |
^ Kill a job | scancel | scancel 4585 |
^ View status of queues | squeue | squeue -u cnetid |
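The table refers to ''runscript.sh''. Here is a minimal sketch of what such a batch script might look like; every directive is an assumption to adapt to your job:
<code>
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --mem=1000
#SBATCH --output=example_%j.out   # %j expands to the job ID

# The commands below run on the allocated node.
hostname
date
</code>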
====== Using the GPU ======

===== GRES: Multiple GPUs on one system =====
GRES stands for Generic RESource. As of 2018-05-04 these only include GPUs.

Jobs will not be allocated any generic resources unless specifically requested at job submit time using the ''--gres'' option supported by ''salloc'', ''sbatch'', and ''srun'':
<code>
--gres=gpu:<count>
</code>

Jobs will be allocated specific generic resources as needed to satisfy the request. If the job is suspended, those resources do not become available for use by other jobs.

Job steps can be allocated generic resources from those allocated to the job by using the ''--gres'' option with ''srun'', as in the sketch below.
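A sketch of job steps sharing a job's GPU allocation; the partition and the ''train_a.sh''/''train_b.sh'' programs are placeholders:
<code>
#!/bin/bash
#SBATCH --partition=titan
#SBATCH --gres=gpu:2

# Each step consumes one of the two GPUs allocated to the job,
# so the two steps can run concurrently on different devices.
srun --gres=gpu:1 ./train_a.sh &
srun --gres=gpu:1 ./train_b.sh &
wait
</code>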
+ | |||
==== Ok, but I don't want to read the wall of text above ====
Fine.

The ''--gres'' option is what you want:
<code>
--gres=gpu:1
# Please try to limit yourself to one GPU per person.
</code>
+ | |||
+ | Example when using tensorflow: | ||
+ | |||
+ | Given the file '' | ||
<code>
#!/usr/bin/env python
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code>
+ | |||
+ | Here we can see that no GPU was allocated to us because we did not specify the '' | ||
<code>
user@bulldozer:~$ srun -p titan --pty /bin/bash
user@gpu3:~$ python gpu_test.py
user@gpu3:~$   # the printed device list contains only the CPU device
</code>
+ | |||
+ | If we request only 1 GPU. | ||
<code>
user@bulldozer:~$ srun -p titan --gres=gpu:1 --pty /bin/bash
user@gpu3:~$ python gpu_test.py
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, ..."
</code>
+ | |||
+ | If we request 2 GPUs. | ||
<code>
user@bulldozer:~$ srun -p titan --gres=gpu:2 --pty /bin/bash
user@gpu3:~$ python gpu_test.py
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, ..."
physical_device_desc: "device: 1, name: GeForce GTX 1080 Ti, ..."
</code>
+ | |||
+ | If we request more GPUs then are available. | ||
<code>
kauffman3@bulldozer:~$ srun -p titan --gres=gpu:5 --pty /bin/bash
srun: error: Unable to allocate resources: Requested node configuration is not available
</code>
+ | |||
+ | ==== Cool, but how do I know where and what resources are available ==== | ||
Turns out the ''sinfo'' command can show you:
<code>
$ sinfo -O partition,nodelist,features,gres
PARTITION    NODELIST    FEATURES    GRES
debug*       ...         ...         (null)
general      ...         ...         (null)
pascal       ...         ...         gpu:1
titan        ...         ...         gpu:4
</code>
+ | |||
+ | FEATURES: Is actually just an arbitrary string in the configuration file that defines a node. However, techstaff hopes it actually provides some useful info. | ||
+ | |||
+ | GRES: Don't depend on this being accurate, however it will definitely give you a clue as to how many generic resources are in a partition. | ||
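If a FEATURES string matters to your job, you can request it with ''--constraint''. A sketch, assuming a feature named ''gtx1080ti'' exists (check the ''sinfo'' output above for the real strings):
<code>
# gtx1080ti is a hypothetical feature name taken from the FEATURES column.
srun -p titan --constraint=gtx1080ti --gres=gpu:1 --pty /bin/bash
</code>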
+ | |||
+ | |||
+ | ==== Checking how many Generic RESources are being consumed ==== | ||
+ | |||
+ | Simple use the '' | ||
<code>
$ squeue -O username,nodelist,gres
USER                NODELIST            GRES
someusername        gpu3                gpu:1
otherusername       gpu3                gpu:2
...
</code>
+ | |||
+ | |||
+ | ===== Environment Variables ===== | ||
+ | |||
+ | ==== CUDA_HOME, | ||
+ | |||
+ | Please make sure you specify $CUDA_HOME and if you want to take advantage of CUDNN libraries you will need to append / | ||
+ | |||
+ | cuda_version=9.2 | ||
+ | export CUDA_HOME=/ | ||
+ | export LD_LIBRARY_PATH=$LD_LIBRARY_PATH: | ||
+ | |||
+ | Currently we support the same versions of CUDA that the latest version of CUDNN supports. This is not written in stone and we can accommodate most other versions if required; just let techstaff know what your needs are. | ||
+ | |||
+ | ==== PATH ==== | ||
+ | You may also need to add the following to your '' | ||
export PATH=$PATH:/ | export PATH=$PATH:/ | ||
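Putting the environment pieces together: a minimal sketch of a GPU batch script. The partition, CUDA version, and program name are assumptions to adapt:
<code>
#!/bin/bash
#SBATCH --partition=titan
#SBATCH --gres=gpu:1

cuda_version=9.2
export CUDA_HOME=/usr/local/cuda-${cuda_version}
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${CUDA_HOME}/lib64
export PATH=$PATH:${CUDA_HOME}/bin

nvcc --version      # sanity check that the CUDA toolchain resolves
./my_cuda_program   # placeholder for your own binary
</code>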
==== CUDA_VISIBLE_DEVICES ====
Do not set this variable. It will be set for you by SLURM.

The variable name is actually misleading: it does NOT mean the number of devices, but rather the physical device numbers assigned by the kernel (e.g. /dev/nvidia2).
+ | |||
+ | For example: If you requested multiple gpu's from SLURM (--gres=gpu: | ||
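A quick way to see what SLURM set for you; the device numbers shown are illustrative:
<code>
user@bulldozer:~$ srun -p titan --gres=gpu:2 --pty /bin/bash
user@gpu3:~$ echo $CUDA_VISIBLE_DEVICES
0,3
</code>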
STDERR should be blank.
====== More ======
If you feel this documentation is lacking in some way, please let techstaff know: email [[techstaff@cs.uchicago.edu]].