===== Notice =====
**2017-08-31**: Configuration change to allow allocation of CPUs and RAM. Please read the 'Default Quotas' section under https://howto.cs.uchicago.edu/techstaff:slurm#usage
  
====== Peanut Job Submission Cluster ======
  
SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.
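
In practice the basic cycle looks like this (a minimal sketch; ''%%myjob.sbatch%%'' is a placeholder for your own batch script):
<code>
# submit the batch script to the queue manager
sbatch myjob.sbatch
# check the state of your queued and running jobs
squeue -u $USER
</code>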
  
===== Where to begin =====
Log in to one of the CS Linux machines:
  ssh user@linux.cs.uchicago.edu
  
===== Mailing List =====
If you are going to use this cluster, please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[ https://mailman.cs.uchicago.edu/cgi-bin/mailman/listinfo/slurm | Mailing List ]]
  
===== Documentation =====
  * [[http://slurm.schedmd.com/tutorials.html|SLURM tutorial videos]]
  * [[https://computing.llnl.gov/linux/slurm/quickstart.html|LLNL quick start user guide]]
  * [[http://research.computing.yale.edu/support/hpc/user-guide/slurm| Yale's User Guide]]
  
  
==== Hardware ====
  * 64GB RAM
  * 2x 500GB SATA 7200RPM in RAID1
  
==== Storage ====
Request an interactive shell:
<code>user@csilcomputer:~$ srun --pty --mem 500 /bin/bash </code>

Create a directory on the scratch partition if you don't already have one:
<code>user@slurm1:~$ mkdir -p /scratch/$USER</code>

Change into my scratch directory:
<code>user@slurm1:~$ cd /scratch/$USER/</code>
  
Get the files I need:
<code>
user@slurm1:/scratch/user$ scp user@csilcomputer:~/foo .
foo                         100%  103KB 102.7KB/s   00:00
</code>
Check that the file now exists:
<code>
user@slurm1:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>
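
When the job has finished, copy any results back the same way (a sketch; ''%%results.txt%%'' is a placeholder filename):
<code>
user@slurm1:/scratch/user$ scp results.txt user@csilcomputer:~/
</code>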

== Performance is slow ==
This is expected. The maximum speed this server will ever be able to achieve is 1Gb/s because of its single 1G ethernet uplink. If this cluster gains in popularity, we plan to upgrade the network and storage server.

==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://peanut.cs.uchicago.edu

===== Usage =====
Below are some common examples. You should consult the SLURM [[http://slurm.schedmd.com/documentation.html|documentation]] if you need further assistance.

=== Default Quotas ===
By default a job is allocated one CPU and 100MB of RAM. If you require more than that, specify what you need with the following options: ''%%--mem-per-cpu%%'', ''%%--nodes%%'', ''%%--ntasks%%''.
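
For example, the following directives in a batch script request one node, four tasks, and 2000MB of RAM per CPU (the numbers are only illustrative; adjust them to your job):
<code>
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=2000
</code>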
  
=== Exclusive access to a node ===
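A minimal sketch, assuming the standard SLURM ''%%--exclusive%%'' flag, which prevents other jobs from sharing the node:
<code>
#SBATCH --exclusive
</code>
The same flag can also be passed directly to ''%%srun%%'' for interactive jobs.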
=== Sample script ===
Make sure you create a directory in which to deposit the ''%%STDIN%%'', ''%%STDOUT%%'', ''%%STDERR%%'' files.
   mkdir -p $HOME/slurm/out
  
<code>
#!/bin/bash
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:00
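# The commands the job should run go here, after the #SBATCH directives.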
  
</code>
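
Once the script is saved, submit it with ''%%sbatch%%'' (a sketch; the path and filename are placeholders):
<code>
user@slurm1:~$ sbatch $HOME/slurm/myjob.sbatch
</code>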

To see the available partitions, their time limits, and the state of each node, run ''%%sinfo%%'':
<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up       30:00      1   idle slurm1
general      up 14-00:00:00      6   idle slurm[2-6,8]
pascal       up  3-00:00:00      1   idle gpu2
tesla        up  3-00:00:00      1   idle gpu1
</code>
  
For interactive use, running
   srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
will start a command line shell (''%%/bin/bash%%'') on the 'general' queue with 500 MB of RAM for 6 hours; 1 core on 1 node is assumed as these parameters (''%%-n 1 -N 1%%'') were left out. When the interactive session starts, you will notice that you are no longer on a login node, but rather one of the compute nodes dedicated to this queue. The ''%%--pty%%'' option allows the session to act like a standard terminal.
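
The same request with the node and task counts spelled out explicitly would look like this (a sketch):
   srun -p general -N 1 -n 1 --cpus-per-task 1 --mem 500 -t 0-06:00 --pty /bin/bash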

====== Job Scheduling ======
We use a [[http://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling. Job priority is assigned by a combination of fair-share, partition priority, and the length of time a job has been sitting in the queue. The priority of the queue is the highest factor in the job priority calculation. For certain queues this will cause jobs on lower priority queues which overlap with that queue to be requeued. The second most important factor is the fair-share score. You can find a description of how SLURM calculates fair-share [[http://slurm.schedmd.com/priority_multifactor.html#fairshare|here]]. The third most important factor is how long your job has been sitting in the queue: the longer it waits, the higher its priority grows. If everyone's priority is equal, then FIFO is the scheduling method. If you want to see what your current priority is, run ''%%sprio -j JOBID%%'', which shows the calculation used to figure out your job's priority. Run ''%%sshare -u USERNAME%%'' to see your current fair-share and usage.((https://rc.fas.harvard.edu/resources/running-jobs))
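
For example (''%%JOBID%%'' and ''%%USERNAME%%'' are placeholders for your own job ID and CNetID):
<code>
# show the factors that make up a pending job's priority
sprio -j JOBID
# show your fair-share and recent usage
sshare -u USERNAME
</code>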
  
