===== Notice =====
**2017-08-31**: Configuration change to allow allocation of CPUs and RAM. Please read the 'Default Quotas' section under https://howto.cs.uchicago.edu/techstaff:slurm#usage

====== Peanut Job Submission Cluster ======

We are currently **alpha** testing and gauging user interest in a cluster of machines that allows for the submission of long-running compute jobs. Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).

For job submission we will be using a piece of software called [[http://slurm.schedmd.com|SLURM]]. Simply put, SLURM is a queue management system; it stands for **S**imple **L**inux **U**tility for **R**esource **M**anagement and was developed at Lawrence Livermore National Laboratory. It currently supports some of the largest compute clusters in the world. The best description of SLURM can be found on its homepage:
SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.

===== Where to begin =====
SLURM is a set of command line utilities that can be accessed from **most** any Computer Science system you can log in to. Using our main shell servers (linux.cs.uchicago.edu) is expected to be our most common use case, so you should start there.

  ssh user@linux.cs.uchicago.edu

  
===== Documentation =====

  * [[http://slurm.schedmd.com/tutorials.html|SLURM tutorial videos]]
  * [[https://computing.llnl.gov/linux/slurm/quickstart.html|LLNL quick start user guide]]
  * [[http://research.computing.yale.edu/support/hpc/user-guide/slurm|Yale's User Guide]]

===== Infrastructure =====
  
  * 64GB RAM
  * 2x 500GB SATA 7200RPM in RAID1
  
==== Storage ====
  * Files older than 90 days will be deleted automatically.
  * Scratch space is shared by all users.

=== Access ===
Scratch space is only mounted on nodes associated with the cluster. If you want to transfer files to the scratch space you will need to run an [[techstaff:slurm#interactive_jobs|interactive shell]]. Once inside, you can use standard tools such as ''%%scp%%'' or ''%%rsync%%'' to transfer files.

  - You should only do file transfers via the debug partition: ''%%srun -p debug --pty --mem 500 /bin/bash%%''
  - Create a directory of your own: ''%%mkdir /scratch/$USER%%''. You should store any files you create in this directory.

== Example ==

Request an interactive shell:
<code>user@csilcomputer:~$ srun --pty --mem 500 /bin/bash</code>

Change into my scratch directory:
<code>user@research2:~$ cd /scratch/user/</code>

Get the files I need:
<code>
user@research2:/scratch/user$ scp user@csilcomputer:~/foo .
foo                         100%  103KB 102.7KB/s  00:00
</code>
Check that the file now exists:
<code>
user@research2:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>

I can now exit my interactive shell.

== Performance is slow ==
This is expected. The maximum speed this server will ever achieve is 1Gb/s because of its single 1Gb Ethernet uplink. If this cluster gains in popularity, we plan to upgrade the network and storage server.

==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://peanut.cs.uchicago.edu
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. If you need to use the ''%%--exclusive%%'' flag it should be done here. |
| **gpu** | Contains servers with graphics cards. As of May 2016 there is only one node containing a Tesla M2090. You will be forced to use this server exclusively for now. Please keep your time in interactive mode to a minimum. |
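
You select a partition with ''%%-p%%'' (or ''%%--partition%%''). A minimal sketch follows; ''%%myjob.job%%'' is just a placeholder file name and the memory value is illustrative:
<code>
# Submit a batch job to the general partition
sbatch -p general myjob.job

# Or request a short interactive shell on the debug partition
srun -p debug --pty --mem 500 /bin/bash
</code>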
  
====== Job Submission ======
===== Usage =====
Below are some common examples. You should consult the [[http://slurm.schedmd.com/documentation.html|documentation]] of SLURM if you need further assistance.

=== Default Quotas ===
By default a job is allocated one CPU and 100MB of RAM. If you require more than that, specify what you need using options such as ''%%--mem-per-cpu%%'', ''%%--nodes%%'', and ''%%--ntasks%%''.
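
For example, a sketch of batch directives requesting more than the defaults; the numbers are illustrative only, not recommendations:
<code>
# Request one task on one node with 4GB of RAM per CPU (example values)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=4096
</code>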
  
=== Exclusive access to a node ===
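A minimal sketch of asking for a whole node in a batch script (assuming the general partition; adjust the rest of the script to your needs):
<code>
# Reserve an entire node; no other jobs will share it while yours runs
#SBATCH --partition=general
#SBATCH --exclusive
</code>
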
=== Sample script ===
Make sure you create a directory in which to deposit the ''%%STDIN%%'', ''%%STDOUT%%'', ''%%STDERR%%'' files.
  mkdir -p $HOME/slurm/out
  
<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:00
  
hostname
</code>

  man sbatch
  
Make sure to replace all instances of the word ''%%cnetid%%'' with your CNETID.
  

=== Submitting a job script ===
Using the above example, place your tested code into a file; ''%%hostname.job%%'' is the file name in this example.
<code>
sbatch hostname.job
</code>

You can then check the status via ''%%squeue%%'' or view the output in the output directory ''%%$HOME/slurm/out%%''.
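
For example (the job ID and node name in the output path below are placeholders):
<code>
# List your queued and running jobs
squeue -u $USER

# Read the output file once the job completes
cat $HOME/slurm/out/12345.slurm1.stdout
</code>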
==== srun ====
Used to submit a job to the cluster that doesn't necessarily need a script.
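
For example, a minimal sketch of a one-off command; the resource values are illustrative only:
<code>
# Run a single command on the debug partition without a batch script
srun -p debug --ntasks=1 --mem-per-cpu=500 hostname
</code>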

==== sinfo ====
Used to view information about SLURM nodes and partitions.

<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up       30:00      1   idle slurm1
general      up 14-00:00:00      6   idle slurm[2-6,8]
pascal       up  3-00:00:00      1   idle gpu2
tesla        up  3-00:00:00      1   idle gpu1
</code>
  
  
===== Interactive Jobs =====

This command:
  srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
will start a command line shell (''%%/bin/bash%%'') on the 'general' queue with 500 MB of RAM for 6 hours; 1 core on 1 node is assumed as these parameters (''%%-n 1 -N 1%%'') were left out. When the interactive session starts, you will notice that you are no longer on a login node, but rather one of the compute nodes dedicated to this queue. The ''%%--pty%%'' option allows the session to act like a standard terminal.
  

====== Job Scheduling ======
We use a [[http://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling. Job priority is assigned by a combination of fair-share, partition priority, and length of time a job has been sitting in the queue. The priority of the queue is the highest factor in the job priority calculation. For certain queues this will cause jobs on lower priority queues which overlap with that queue to be requeued. The second most important factor is fair-share score. You can find a description of how SLURM calculates Fair-share [[http://slurm.schedmd.com/priority_multifactor.html#fairshare|here]]. The third most important is how long you have been sitting in the queue. The longer your job sits in the queue the higher its priority grows. If everyone’s priority is equal then FIFO is the scheduling method. If you want to see what your current priority is just do ''%%sprio -j JOBID%%'' which will show you the calculation it does to figure out your job priority. If you do ''%%sshare -u USERNAME%%'' you can see your current fair-share and usage.((https://rc.fas.harvard.edu/resources/running-jobs))
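
For example (the job ID and username below are placeholders):
<code>
# Show the factors that make up a specific job's priority
sprio -j 12345

# Show your current fair-share and usage
sshare -u cnetid
</code>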
  
| error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges you may or may not have. Rather, it usually means that you have requested more processors than any one compute node actually has. |
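
To check how many CPUs each compute node actually offers, a node-oriented ''%%sinfo%%'' listing can help (exact columns vary by SLURM version):
<code>
# Long, node-oriented listing; the CPUS column shows each node's processor count
sinfo -N -l
</code>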
  
====== Using the GPU ======
===== Paths =====
You will need to add the following to your $PATH and $LD_LIBRARY_PATH.

  export PATH=$PATH:/usr/local/cuda/bin
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib

===== Example =====
This sbatch script will get device information from the installed Tesla GPU.
<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info

export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib

cat << EOF > /tmp/getinfo.cu
#include <stdio.h>

int main() {
  int nDevices;

  cudaGetDeviceCount(&nDevices);
  for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device Number: %d\n", i);
    printf("  Device name: %s\n", prop.name);
    printf("  Memory Clock Rate (KHz): %d\n",
           prop.memoryClockRate);
    printf("  Memory Bus Width (bits): %d\n",
           prop.memoryBusWidth);
    printf("  Peak Memory Bandwidth (GB/s): %f\n\n",
           2.0*prop.memoryClockRate*(prop.memoryBusWidth/8)/1.0e6);
  }
}
EOF

/usr/local/cuda/bin/nvcc /tmp/getinfo.cu -o /tmp/a.out
/tmp/a.out
rm /tmp/a.out
rm /tmp/getinfo.cu
</code>
==== Output ====
STDOUT will look something like this:
<code>
cnetid@linux1:~$ cat $HOME/slurm/slurm_out/12567.gpu1.stdout
Device Number: 0
  Device name: Tesla M2090
  Memory Clock Rate (KHz): 1848000
  Memory Bus Width (bits): 384
  Peak Memory Bandwidth (GB/s): 177.408000
</code>
STDERR should be blank.
====== More ======
If you feel this documentation is lacking in some way please let techstaff know. Email [[techstaff@cs.uchicago.edu]], call (773-702-1031), or stop by our office (Ryerson 154).