===== Notice =====

**All is back to normal. Please submit jobs from linux1, 2, or 3. Email techstaff@cs.uchicago.edu if you find that something is amiss.**

The SLURM cluster will become unavailable starting 2017-08-22 for an upgrade. Normal service should resume on 2017-08-25. Please check back here for status updates.

**2017-08-22 1800**: Main cluster upgraded. You can try to use it now, but I can't guarantee that I won't kill your job tomorrow or Friday.

**2017-08-23 1345**: GPU servers upgraded and added back to the cluster. They may be missing some software that could not be installed automatically at the previous time of installation. Send me an email if you find anything missing.

We still run systems with Ubuntu 14.04 installed. As of right now these systems cannot submit jobs to the cluster. This is on purpose: the SLURM version jump between 14.04 and 16.04 was so large that this was unavoidable. You should therefore prefer linux.cs.uchicago.edu or any CS machine that runs Ubuntu 16.04.

**2017-08-24**: Everything seems to be working as expected. Please start using the cluster again. Email techstaff@cs.uchicago.edu if something is wrong/unexpected/broken/etc.
====== Peanut Job Submission Cluster ======
  
  * 64GB RAM
  * 2x 500GB SATA 7200RPM in RAID1
==== Storage ====
===== Usage =====
Below are some common examples. You should consult the [[http://slurm.schedmd.com/documentation.html|documentation]] of SLURM if you need further assistance.

=== Default Quotas ===
By default a job is allocated one CPU and 100MB of RAM. If you need more than that, request it explicitly with the following options: ''%%--mem-per-cpu%%'', ''%%--nodes%%'', ''%%--ntasks%%''.
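
For example, this illustrative invocation requests a single task with 1GB of RAM instead of the defaults (''%%./my_program%%'' is a placeholder for your own executable):

<code>
srun --nodes=1 --ntasks=1 --mem-per-cpu=1024 ./my_program
</code>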
  
=== Exclusive access to a node ===
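SLURM's standard ''%%--exclusive%%'' flag is the usual way to request this; a minimal sketch (''%%./my_program%%'' is again a placeholder):

<code>
# Ask the scheduler not to place any other jobs on the allocated node
srun --exclusive ./my_program
</code>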
=== Sample script ===
Make sure you create a directory in which to deposit the ''%%STDIN%%'', ''%%STDOUT%%'', ''%%STDERR%%'' files.
   mkdir -p $HOME/slurm/out
  
<code>
#!/bin/bash
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:00
</code>
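
Submit the script with ''%%sbatch%%'' and check on it with ''%%squeue%%''. The filename below is illustrative:

<code>
sbatch $HOME/slurm/myjob.sbatch    # prints the job ID on success
squeue -u cnetid                   # list your queued and running jobs
</code>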
  