  
SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.

===== Where to begin =====
SLURM is a set of command line utilities that can be used from **almost** any Computer Science system you can log in to. Using our main shell servers (linux.cs.uchicago.edu) is expected to be our most common use case, so you should start there.

  ssh user@linux.cs.uchicago.edu
  
===== Documentation =====

==== Scratch Space ====

  * Files older than 90 days will be deleted automatically.
  * Scratch space is shared by all users.
=== Access ===
Scratch space is only mounted on nodes associated with the cluster, so to transfer files to or from it you will need to run an [[techstaff:slurm#interactive_jobs|interactive shell]]. Once inside you can use standard tools such as ''%%scp%%'' or ''%%rsync%%'' to transfer files.

  - Only do file transfers via the debug partition: ''%%srun -p debug --pty --mem 500 /bin/bash%%''
  - Create a directory of your own: ''%%mkdir /scratch/$USER%%''. Store any files you create in this directory.

== Example ==

Request an interactive shell:
<code>user@csilcomputer:~$ srun --pty --mem 500 /bin/bash</code>

Change into my scratch directory:
<code>user@research2:~$ cd /scratch/user/</code>

Get the files I need:
<code>
user@research2:/scratch/user$ scp user@csilcomputer:~/foo .
foo                         100%  103KB 102.7KB/s  00:00
</code>

Check that the file now exists:
<code>
user@research2:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>

I can now exit my interactive shell.
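Continuing the session above, that is simply:
<code>user@research2:/scratch/user$ exit</code>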
== Performance is slow ==
This is expected. The maximum speed this server will ever be able to achieve is 1Gb/s (roughly 125MB/s) because of its single 1G ethernet uplink. If this cluster gains in popularity, we plan to upgrade the network and storage server.
==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://peanut.cs.uchicago.edu
====== Partitions / Queues ======
To find out what partitions we offer, check out the [[techstaff:slurm#sinfo|sinfo]] command.
  
As of December 2015, we have at least two partitions in our cluster: 'debug' and 'general'.
  
^ Partition Name ^ Description ^
==== sbatch ====

An example sbatch script:

<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
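# Print the hostname of the compute node this job runs on
# (an assumed job body, inferred from the job name above).
hostname
</code>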
For a full list of options:

  man sbatch
  
Make sure to replace all instances of the word ''%%cnetid%%'' with your CNETID.
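Submit the script with ''%%sbatch%%''. The prompt, file name, and job number below are only illustrative:

<code>
user@linux1:~$ sbatch hostname.job
Submitted batch job 1234
</code>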
  
==== srun ====
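In its simplest form, ''%%srun%%'' runs a single command on a compute node and prints its output, no batch script required. A minimal sketch, using the 'debug' partition from above (prompt and output illustrative):

<code>
user@linux1:~$ srun -p debug hostname
research2
</code>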
==== sinfo ====
The ''%%sinfo%%'' command reports the current state of the cluster's partitions and nodes. Output will look something like this:

<code>
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up      30:00      1   idle research2
general      up 1-00:00:00      6   idle research[3-8]
</code>
  
==== Interactive Jobs ====
  
This command:
  srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
will start a command line shell (''%%/bin/bash%%'') on the 'general' queue with 500 MB of RAM for 6 hours; 1 core on 1 node is assumed as these parameters (''%%-n 1 -N 1%%'') were left out. When the interactive session starts, you will notice that you are no longer on a login node, but rather on one of the compute nodes dedicated to this queue. The ''%%--pty%%'' option allows the session to act like a standard terminal.
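The change of prompt shows that you have moved from a login node to a compute node (host names illustrative):

<code>
user@linux1:~$ srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
user@research3:~$
</code>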
====== Job Scheduling ======
  
  
====== More ======
If you feel this documentation is lacking in some way, please let techstaff know: email [[techstaff@cs.uchicago.edu]], call (773-702-1031), or stop by our office (Ryerson 154).