====== techstaff:slurm ======
  
SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition, in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.

===== Where to begin =====
  ssh user@linux.cs.uchicago.edu
  
===== Mailing List =====
If you are going to be a user of this cluster, please sign up for the mailing list. Downtime and other relevant information will be announced there.

[[https://mailman.cs.uchicago.edu/cgi-bin/mailman/listinfo/slurm|Mailing List]]
  
===== Documentation =====
  * [[https://computing.llnl.gov/linux/slurm/quickstart.html|LLNL quick start user guide]]
  * [[http://research.computing.yale.edu/support/hpc/user-guide/slurm|Yale's User Guide]]
===== Infrastructure =====
  
Request an interactive shell:
<code>user@csilcomputer:~$ srun --pty --mem 500 /bin/bash</code>
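Interactive shells are convenient for testing, but as noted in the introduction, longer jobs are normally written as a batch script and handed to the queue manager with ''sbatch''. A minimal sketch of such a script (the job name, partition name, output path, and time limit are illustrative assumptions, not values taken from this page):

```shell
#!/bin/bash
# Minimal SLURM batch script sketch.
# The values below are illustrative assumptions, not site-specific settings.
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --output=example-%j.out   # %j expands to the numeric job ID
#SBATCH --partition=general       # hypothetical partition name
#SBATCH --mem=500                 # memory in MB, matching the srun example above
#SBATCH --time=00:10:00           # wall-clock time limit

# The actual work goes here:
echo "Running on $(hostname)"
```

Save it as a file (e.g. ''job.sh'') and submit it with ''sbatch job.sh''; SLURM then schedules it on the designated partition.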
Create a directory on the scratch partition if you don't already have one:
<code>user@slurm1:~$ mkdir -p /scratch/$USER</code>
  
Change into my scratch directory:
<code>user@slurm1:~$ cd /scratch/$USER/</code>
  
Get the files I need:
<code>
user@slurm1:/scratch/user$ scp user@csilcomputer:~/foo .
foo                         100%  103KB 102.7KB/s  00:00
</code>
Check that the file now exists:
<code>
user@slurm1:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29  2015 foo
</code>
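With the input staged in scratch, a batch job can be submitted and watched from the same directory. The script name below is hypothetical; ''sbatch'', ''squeue'', and ''scancel'' are standard SLURM commands, sketched here as a plausible session rather than output captured from this cluster:

```shell
# Submit a batch script (myjob.sh is a hypothetical name):
user@slurm1:/scratch/user$ sbatch myjob.sh

# List only your own jobs and their state (PD = pending, R = running):
user@slurm1:/scratch/user$ squeue -u $USER

# Cancel a job by the numeric job ID that sbatch reported, if needed:
user@slurm1:/scratch/user$ scancel <jobid>
```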
== Performance is slow ==
This is expected. The maximum speed this server will ever achieve is 1Gb/s because of its single 1G Ethernet uplink. If this cluster gains in popularity, we plan to upgrade the network and storage server.

==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://peanut.cs.uchicago.edu
/var/lib/dokuwiki/data/pages/techstaff/slurm.txt · Last modified: 2021/01/06 16:13 by kauffman
