====== DRAFT | Peanut Job Submission Cluster ======
  
We are currently **alpha** testing and gauging user interest in a cluster of machines that allows for the submission of long-running compute jobs. Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).
  
For job submission we will be using a piece of software called [[http://slurm.schedmd.com|SLURM]]. Simply put, SLURM is a queue management system and stands for **S**imple **L**inux **U**tility for **R**esource **M**anagement; it was developed at the Lawrence Livermore National Lab. It currently supports some of the largest compute clusters in the world. The best description of SLURM can be found on its homepage:
Make sure to replace all instances of the word ''%%cnetid%%'' with your CNETID.
  
=== Submitting a job script ===
Using the above example, place your tested code into a file; ''hostname.job'' is the file name used in this example. Then submit it:
<code>
sbatch hostname.job
</code>

You can then check the job's status with ''squeue'' or view its output in the output directory ''%%$HOME/slurm/slurm_out%%''.
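A minimal sketch of checking on a submitted job; replace the ''<jobid>'' and ''<node>'' placeholders with the values ''sbatch'' and ''squeue'' report for your job:
<code>
squeue -u cnetid
cat $HOME/slurm/slurm_out/<jobid>.<node>.stdout
</code>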
==== srun ====
''srun'' is used to submit a job to the cluster that doesn't necessarily need a script.
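For example, you can run a one-off command without writing a job file (a minimal sketch; ''hostname'' simply reports which compute node ran the task, and ''-n2'' asks for two tasks):
<code>
srun hostname
srun -n2 hostname
</code>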
| error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges you may or may not have. Rather, it usually means that you have requested more processors than a single compute node actually has (see the example below). |
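For instance, a deliberately oversized request like the hypothetical one below (no node here has anywhere near 9999 cores) will reproduce this error:
<code>
srun --cpus-per-task=9999 hostname
</code>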
  
====== Using the GPU ======
===== Paths =====
You will need to add the following to your ''%%$PATH%%'' and ''%%$LD_LIBRARY_PATH%%''.

  export PATH=$PATH:/usr/local/cuda/bin
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib
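A quick way to verify the paths took effect (run in a new shell, or after re-sourcing your shell profile):
<code>
which nvcc
nvcc --version
</code>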

===== Example =====
This sbatch script will get device information from the installed Tesla GPU.
<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info

# Write a small CUDA program that queries every visible device.
cat << EOF > /tmp/getinfo.cu
#include <stdio.h>

int main() {
  int nDevices;

  cudaGetDeviceCount(&nDevices);
  for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device Number: %d\n", i);
    printf("  Device name: %s\n", prop.name);
    printf("  Memory Clock Rate (KHz): %d\n",
           prop.memoryClockRate);
    printf("  Memory Bus Width (bits): %d\n",
           prop.memoryBusWidth);
    printf("  Peak Memory Bandwidth (GB/s): %f\n\n",
           2.0*prop.memoryClockRate*(prop.memoryBusWidth/8)/1.0e6);
  }
}
EOF

# Compile with nvcc, run the binary, then clean up.
/usr/local/cuda/bin/nvcc /tmp/getinfo.cu -o /tmp/a.out
/tmp/a.out
rm /tmp/a.out
rm /tmp/getinfo.cu
</code>
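To run it, save the script to a file and submit it with ''sbatch''; the file name ''get_tesla_info.job'' below is just an illustration:
<code>
sbatch get_tesla_info.job
</code>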
==== Output ====
STDOUT will look something like this:
<code>
cnetid@linux1:~$ cat $HOME/slurm/slurm_out/12567.gpu1.stdout
Device Number: 0
  Device name: Tesla M2090
  Memory Clock Rate (KHz): 1848000
  Memory Bus Width (bits): 384
  Peak Memory Bandwidth (GB/s): 177.408000
</code>
STDERR should be blank.
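If anything did go wrong, the corresponding stderr file (same job ID and node as the stdout example above) is the first place to look:
<code>
cat $HOME/slurm/slurm_out/12567.gpu1.stderr
</code>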
====== More ======
If you feel this documentation is lacking in some way, please let techstaff know. Email [[techstaff@cs.uchicago.edu]], call (773-702-1031), or stop by our office (Ryerson 154).