====== Announcements ======
  * **2020-03-05**: Partition 'fast' has been temporarily repurposed until some new servers arrive. Because of the coronavirus we are not certain when that will be; I hope mid-April, but that could change.
  
====== Peanut Job Submission Cluster ======
  
Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).
  
For job submission we will be using a piece of software called [[http://slurm.schedmd.com|SLURM]]. Simply put, SLURM is a queue management system and stands for **S**imple **L**inux **U**tility for **R**esource **M**anagement; it was developed at the Lawrence Livermore National Lab. It currently supports some of the largest compute clusters in the world. The best description of SLURM can be found on its homepage:
==== Hardware ====
Our cluster contains nodes with the following specs:

''%%general%%'':
  * 16 Cores (2x 8-core 3.1GHz Processors), 16 threads
  * 64gb RAM
  * 2x 500GB SATA 7200RPM in RAID1
  
''%%fast%%'':
  * 24 Cores (2x 12-core Intel Xeon Silver 4116 CPU @ 2.10GHz), 48 threads
  * 128gb RAM
  * OS: 2x 240GB Intel SSD in RAID1
  * /local: 2x 960GB Intel SSD in RAID0

''%%pascal%%'':
  * 6 Cores (Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz), 12 threads
  * 32gb RAM
  * OS: 1x 512gb SSD

''%%quadro%%'':
  * 16 Cores (2x Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz), 32 threads
  * 128gb RAM
  * OS: 2x Samsung SSD 850 PRO 128GB
  * /local: ZFS mirror (2x Samsung SSD 850 PRO 1TB)
  * 2x Quadro P4000

''%%titan%%'':
  * 8 Cores (Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz), 16 threads
  * 64gb RAM
  * OS: 1x 1TB 7200RPM spinning disk
  * 4x GeForce GTX 1080 Ti
==== Storage ====
There is slow scratch space mounted to ''%%/scratch%%''. It is a ZFS pool consisting of 10x 2TB 7200RPM SAS drives connected via an LSI 9211-8i and is made up of 5 mirrored VDEVs, which is similar to a RAID10. The server's uplink is 1G ethernet.
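If your job does heavy intermediate I/O, a common pattern is to stage work in a per-job directory under ''%%/scratch%%'' and copy the results back when done. A minimal sketch; the program name and paths below are only illustrative, not a site convention:
<code>
#!/bin/bash
# Illustrative sketch: stage intermediate files on /scratch, then clean up.
SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"

cp ~/project/input.dat "$SCRATCH_DIR/"            # hypothetical input file
cd "$SCRATCH_DIR"
~/project/my_program input.dat > output.dat       # hypothetical program
cp output.dat ~/project/

rm -rf "$SCRATCH_DIR"                             # scratch space is shared; clean up after yourself
</code>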
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. If you need to use the ''%%--exclusive%%'' flag it should be done here.|
| **fast** | 2019-12-02: 48 threads, 128GB RAM |
| **quadro** | 2019-12-02: 2x Quadro P4000*|
| **pascal** | 2018-05-04: 1x Nvidia GTX1080.|
| **titan** | 2018-05-04: 4x Nvidia GTX1080Ti*|
  
* This partition is shared and you MUST use the ''%%--gres%%'' option to specify the resources you wish to use. You are also encouraged to specify CPU and memory.
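For example, an interactive session on one of the shared GPU partitions might be requested like this (the partition, GPU count, CPU count, and memory values are only illustrative):
<code>
user@host:~$ srun -p titan --gres=gpu:1 -c 4 --mem=8G --pty /bin/bash
</code>
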
====== Job Submission ======
Jobs submitted to the cluster are run from the command line. Almost anything that you can run via the command line on any of our machines in our labs can be run on our job submission server agents.
| ^ SLURM ^ Example ^
^ Submit a batch serial job | sbatch | sbatch runscript.sh |
^ Run a script interactively | srun | srun --pty -p interact -t 10 --mem 1000 \\ /bin/bash \\ /bin/hostname |
^ Kill a job | scancel | scancel 4585 |
^ View status of queues | squeue | squeue -u cnetid |
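
Putting the pieces together, a batch job is normally a small shell script handed to ''%%sbatch%%''. A minimal sketch of a ''%%runscript.sh%%'' follows; the job name, partition, time limit, and resource values are illustrative, not site defaults:
<code>
#!/bin/bash
#SBATCH --job-name=myjob          # illustrative job name
#SBATCH --partition=general       # tested jobs go to the general partition
#SBATCH --time=01:00:00           # wall-clock limit, HH:MM:SS
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=4         # CPUs for that task
#SBATCH --mem=8G                  # memory for the job
#SBATCH --output=myjob_%j.out     # %j expands to the job ID

hostname
./my_program                      # replace with your actual work
</code>
Submit it with ''%%sbatch runscript.sh%%'' and check on it with ''%%squeue -u cnetid%%''.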
<code>
user@host:~$ srun -n2 hostname
slurm2
slurm2
</code>
  
<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up 1-00:00:00      1    mix slurm1
fast         up 1-00:00:00      6   idle slurm[9-14]
general      up 21-00:00:00     6   idle slurm[2-6,8]
pascal       up 3-00:00:00      1   idle gpu2
quadro       up 3-00:00:00      1   idle gpu1
titan        up 3-00:00:00      1    mix gpu3
</code>
  
Example when using tensorflow:
  
Given the file ''%%f%%'':
<code>
#!/usr/bin/env python3
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code>
  
Here we can see that no GPU was allocated to us because we did not specify the ''%%--gres%%'' option.
<code>
user@bulldozer:~$ srun -p titan --pty /bin/bash
user@gpu3:~$ ./f 2>&1 | grep physical_device_desc
user@gpu3:~$
</code>
  
If we request only 1 GPU:
<code>
user@bulldozer:~$ srun -p titan --pty --gres=gpu:1 /bin/bash
user@gpu3:~$ ./f 2>&1 | grep physical_device_desc
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:19:00.0, compute capability: 6.1"
</code>
  
If we request 2 GPUs:
<code>
user@bulldozer:~$ srun -p titan --pty --gres=gpu:2 /bin/bash
user@gpu3:~$ ./f 2>&1 | grep physical_device_desc
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:19:00.0, compute capability: 6.1"
physical_device_desc: "device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:1a:00.0, compute capability: 6.1"
</code>
  
If we request more GPUs than are available:
<code>
user@bulldozer:~$ srun -p titan --pty --gres=gpu:5 /bin/bash
srun: error: Unable to allocate resources: Requested node configuration is not available
</code>
  
<code>
$ sinfo -O partition,nodelist,gres,features,available
PARTITION           NODELIST            GRES                FEATURES            AVAIL
debug*              slurm1              (null)              (null)              up
fast                slurm[9-14]         (null)              (null)              up
general             slurm[2-6,8]        (null)              (null)              up
pascal              gpu2                gpu:gtx1080:      'pascal,gtx1080'    up
quadro              gpu1                gpu:p4000:        'quadro,p4000'      up
titan               gpu3                gpu:gtx1080ti:    'pascal,gtx1080ti'  up
</code>
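
The GRES and FEATURES columns map onto job requests: ''%%--gres%%'' can name a GPU type from the GRES column, and ''%%--constraint%%'' can select nodes by a FEATURES tag. A sketch; the GPU counts are illustrative and whether you filter by type or by feature is up to you:
<code>
# Request two GPUs of a specific type (count is illustrative).
user@host:~$ srun -p titan --gres=gpu:gtx1080ti:2 --pty /bin/bash

# Or select nodes by feature tag instead.
user@host:~$ srun -p pascal --constraint=gtx1080 --pty /bin/bash
</code>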
  
  
  
==== Checking how many Generic RESources are being consumed ====
  
Simply use the ''%%-O%%'' option for ''%%squeue%%'' and you can see how many generic resources any particular job is consuming.
<code>
$ squeue -O username,nodelist,gres
USER                NODELIST            GRES
someusername        gpu3                gpu:1
otherusername       gpu3                gpu:3
...
</code>

===== Environment Variables =====

==== CUDA_HOME, LD_LIBRARY_PATH ====

Please make sure you specify $CUDA_HOME, and if you want to take advantage of CUDNN libraries you will need to append /usr/local/cuda-x.x/lib64 to the $LD_LIBRARY_PATH environment variable.

  cuda_version=9.2
  export CUDA_HOME=/usr/local/cuda-${cuda_version}
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64

Currently we support the same versions of CUDA that the latest version of CUDNN supports. This is not written in stone and we can accommodate most other versions if required; just let techstaff know what your needs are.

==== PATH ====
You may also need to add the following to your ''%%$PATH%%''.
  
  export PATH=$PATH:/usr/local/cuda/bin

==== CUDA_VISIBLE_DEVICES ====
Do not set this variable. It will be set for you by SLURM.

The variable name is actually misleading: it does NOT mean the number of devices, but rather the physical device number(s) assigned by the kernel (e.g. /dev/nvidia2).

For example: if you requested multiple GPUs from SLURM (--gres=gpu:2), the CUDA_VISIBLE_DEVICES variable should contain two numbers (0-3 in this case) separated by a comma (e.g. 1,3).
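
A quick way to see what SLURM set for a job; the partition and GPU count here are only illustrative:
<code>
user@host:~$ srun -p titan --gres=gpu:2 bash -c 'echo $CUDA_VISIBLE_DEVICES'
# Example output; the actual numbers depend on which physical GPUs were assigned:
1,3
</code>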
  
  
</code>
STDERR should be blank.
====== Feedback ======
If you feel this documentation is lacking in some way please let techstaff know. Email [[techstaff@cs.uchicago.edu]], call (773-702-1031), or stop by our office (Crerar 357).

====== More ======
Sometimes other universities have documentation that is better in some areas.

  - [[ https://hpcc.usc.edu/support/documentation/slurm/ | USC SLURM Docs ]]
  - [[ https://nesi.github.io/hpc_training/lessons/maui-and-mahuika/slurm | NESI SLURM Docs ]]