  
SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.
  
===== Where to begin =====
  ssh user@linux.cs.uchicago.edu
  
===== Mailing List =====
If you are going to be a user of this cluster, please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[ https://mailman.cs.uchicago.edu/cgi-bin/mailman/listinfo/slurm | Mailing List ]]
  
===== Documentation =====
  * [[https://computing.llnl.gov/linux/slurm/quickstart.html|LLNL quick start user guide]]
  * [[http://research.computing.yale.edu/support/hpc/user-guide/slurm|Yale's User Guide]]
===== Infrastructure =====
  
Request an interactive shell:
<code>user@csilcomputer:~$ srun --pty --mem 500 /bin/bash</code>

Create a directory on the scratch partition if you don't already have one:
<code>user@slurm1:~$ mkdir -p /scratch/$USER</code>
  
Change into my scratch directory:
<code>user@slurm1:~$ cd /scratch/$USER/</code>
  
Get the files I need:
<code>
user@slurm1:/scratch/user$ scp user@csilcomputer:~/foo .
foo                         100%  103KB 102.7KB/s   00:00
</code>
Check that the file now exists:
<code>
user@slurm1:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>
== Performance is slow ==
This is expected. The maximum speed this server will ever be able to achieve is 1Gb/s because of its single 1G ethernet uplink. If this cluster gains in popularity, we plan on upgrading the network and storage server.
==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://peanut.cs.uchicago.edu
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. If you need to use the ''%%--exclusive%%'' flag it should be done here. |
| **pascal** | 2018-05-04: 1x Nvidia GTX1080. You will be forced to use this server exclusively for now. Please keep your time in interactive mode to a minimum. |
| **titan** | 2018-05-04: 4x Nvidia GTX1080Ti. This partition is shared, so you MUST use the ''%%--gres%%'' option to specify the resources you wish to use. You are also encouraged to specify CPU and memory; see the example after this table. |
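
As a quick sketch of how choosing a partition looks on the command line (''%%job.sh%%'' is just a placeholder for your own batch script, and the CPU/memory numbers are only illustrative):
<code>
# submit a tested batch script to the general partition
sbatch -p general job.sh

# interactive shell on the titan partition with one GPU, two CPUs and 4GB of memory
srun -p titan --gres=gpu:1 --cpus-per-task=2 --mem=4000 --pty /bin/bash
</code>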
  
====== Job Submission ======
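A minimal sketch of a batch script (the job name, output file, resource values and the ''%%hostname%%'' payload are placeholders; adjust them for your own work). Save it as, say, ''%%example.sbatch%%'':
<code>
#!/bin/bash
#
#SBATCH --job-name=example
#SBATCH --output=example_%j.out
#SBATCH --partition=debug
#SBATCH --time=00:10:00
#SBATCH --mem=500
#SBATCH --ntasks=1

# the real work goes here
hostname
</code>
Submit it with ''%%sbatch example.sbatch%%'' and monitor its progress with ''%%squeue -u $USER%%''.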
  
====== Using the GPU ======

===== GRES: Multiple GPUs on one system =====
GRES stands for Generic Resource. As of 2018-05-04 the only generic resources on this cluster are GPUs.

Jobs will not be allocated any generic resources unless specifically requested at job submit time using the ''%%--gres%%'' option supported by the ''%%salloc%%'', ''%%sbatch%%'' and ''%%srun%%'' commands. The option requires an argument specifying which generic resources are required and how many, in the form ''%%name[:type:count]%%''. The name is the same name as specified by the ''%%GresTypes%%'' and ''%%Gres%%'' configuration parameters. ''%%type%%'' identifies a specific type of that generic resource (e.g. a specific model of GPU). ''%%count%%'' specifies how many resources are required and has a default value of 1. For example:
<code>sbatch --gres=gpu:titan:2 ....</code>

Jobs will be allocated specific generic resources as needed to satisfy the request. If the job is suspended, those resources do not become available for use by other jobs.

Job steps can be allocated generic resources from those allocated to the job using the ''%%--gres%%'' option with the ''%%srun%%'' command as described above. By default, a job step will be allocated all of the generic resources allocated to the job. If desired, the job step may explicitly specify a different generic resource count than the job. This design choice was based upon a scenario where each job executes many job steps. If job steps were granted access to all generic resources by default, some job steps would need to explicitly specify zero generic resource counts, which we considered more confusing. The job step can be allocated specific generic resources and those resources will not be available to other job steps. A simple example is shown below.
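
A minimal sketch of that idea (''%%./step_one%%'' and ''%%./step_two%%'' are placeholder programs): the job is allocated two GPUs, and its job steps request either one of them or the full allocation:
<code>
#!/bin/bash
#SBATCH --partition=titan
#SBATCH --gres=gpu:2

# this step only uses one of the job's two GPUs
srun --gres=gpu:1 ./step_one

# this step uses both GPUs (the job's full allocation is the default)
srun ./step_two
</code>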

==== Ok, but I don't want to read the wall of text above ====
Fine.

The ''%%--gres%%'' option (see ''%%man srun%%'') is required if you want to make use of a GPU.

<code>
  --gres=gpu:N   # where 'N' is the number of GPUs requested.
                 # Please try to limit yourself to one GPU per person.
</code>

Example using TensorFlow:

Given the file ''%%f%%'':
<code>
#!/usr/bin/env python3
# print the compute devices (CPUs and GPUs) that TensorFlow can see
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code>
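The examples below run ''%%f%%'' directly, so make sure it is executable first:
<code>chmod +x f</code>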

Here we can see that no GPU was allocated to us because we did not specify the ''%%--gres%%'' option:
<code>
user@bulldozer:~$ srun -p titan --pty /bin/bash
user@gpu3:~$ ./f 2>&1 | grep physical_device_desc
user@gpu3:~$
</code>

If we request only 1 GPU:
<code>
user@bulldozer:~$ srun -p titan --pty --gres=gpu:1 /bin/bash
user@gpu3:~$ ./f 2>&1 | grep physical_device_desc
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:19:00.0, compute capability: 6.1"
</code>

If we request 2 GPUs:
<code>
user@bulldozer:~$ srun -p titan --pty --gres=gpu:2 /bin/bash
user@gpu3:~$ ./f 2>&1 | grep physical_device_desc
physical_device_desc: "device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:19:00.0, compute capability: 6.1"
physical_device_desc: "device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:1a:00.0, compute capability: 6.1"
</code>

If we request more GPUs than are available:
<code>
user@bulldozer:~$ srun -p titan --pty --gres=gpu:5 /bin/bash
srun: error: Unable to allocate resources: Requested node configuration is not available
</code>

==== Cool, but how do I know where and what resources are available? ====
Turns out the ''%%sinfo%%'' command is super useful.
<code>
$ sinfo -O partition,nodelist,gres,features,available
PARTITION           NODELIST            GRES                FEATURES            AVAIL
debug*              slurm1              (null)              (null)              up
general             slurm[2-6,8]        (null)              (null)              up
pascal              gpu2                gpu:gtx1080:        'pascal,gtx1080'    up
titan               gpu3                gpu:gtx1080ti:      'pascal,gtx1080ti'  up
</code>

FEATURES: This is actually just an arbitrary string in the configuration entry that defines a node, but techstaff tries to make it provide some useful information.

GRES: Don't depend on this column being exact, but it will give you a good idea of how many generic resources are in a partition.
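
If you want to pin a job to a node with a particular feature, those strings can be used with the ''%%--constraint%%'' (''%%-C%%'') option of ''%%srun%%'' and ''%%sbatch%%''. A sketch, using a feature name from the ''%%sinfo%%'' output above:
<code>
# ask for a node that advertises the gtx1080ti feature, with one GPU
srun -p titan -C gtx1080ti --gres=gpu:1 --pty /bin/bash
</code>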
===== Paths =====
You will need to add the following to your ''%%$PATH%%'' and ''%%$LD_LIBRARY_PATH%%''.
  
  export PATH=$PATH:/usr/local/cuda/bin
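
The matching library path entry, assuming the default CUDA install location on these machines, would look like:

  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64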