===== Notice =====
**2017-08-31**:

====== Peanut Job Submission Cluster ======

We are currently **alpha** testing and gauging user interest in a cluster of machines that allows for the submission of long-running compute jobs. Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).

For job submission we will be using a piece of software called [[https://slurm.schedmd.com/|Slurm]].
  * [[http://...]]
  * [[https://...]]
  * [[http://...]]
===== Infrastructure =====

  * 64GB RAM
  * 2x 500GB SATA 7200RPM in RAID1
- | |||
- | To better manage the cluster we have virtualized the job submission nodes and give them all resources of the hardware. So, the actual resources you can consume on any one node is: | ||
- | * 14 Cores, 14 threads | ||
- | * 62GB RAM | ||
==== Storage ==== | ==== Storage ==== | ||
Line 109: | Line 107: | ||
===== Usage =====
Below are some common examples. You should consult the [[https://slurm.schedmd.com/documentation.html|Slurm documentation]] if you need more information.

=== Default Quotas ===
By default we set a job to run on one CPU and allocate 100MB of RAM. If you require more than that, you should specify what you need using the following options: ''--nodes'', ''--ntasks'', ''--mem-per-cpu''.
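
For example, a single task that needs four CPUs and 1GB of RAM per CPU could be requested like this (the executable name is only a placeholder):
<code>
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=1000 ./my_program
</code>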
=== Exclusive access to a node ===
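Slurm's standard way to request this is the ''--exclusive'' option; a minimal sketch:
<code>
#!/bin/bash
#SBATCH --exclusive
#SBATCH --ntasks=1

# No other jobs will share this node while the job runs.
hostname
</code>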
=== Sample script ===
Make sure you create a directory in which to deposit the ''--output'' and ''--error'' files referenced in the script:
  mkdir -p $HOME/
<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/
#SBATCH --error=/
#SBATCH --workdir=/
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
#SBATCH --time=15:00

# The commands the job should run go here.
</code>
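
Assuming the script was saved as ''myjob.sbatch'' (the filename is illustrative), it is submitted with:
<code>
sbatch myjob.sbatch
</code>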
<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up
general      up
pascal       up
tesla        up 3-00:00:00
</code>
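
To direct a job at one of the partitions listed above, name it explicitly in your script, for example:
<code>
#SBATCH --partition=general
</code>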
  srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
will start a command line shell (''/bin/bash'') on the general partition with one CPU and 500MB of RAM for 6 hours.
====== Job Scheduling ======

We use a [[https://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling.
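
If you want to see how your pending jobs are being prioritized, Slurm's ''sprio'' utility lists the factors that feed into each job's priority (a sketch of what to run; the exact output depends on the site configuration):
<code>
sprio -l
</code>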
====== Using the GPU ======

===== Paths =====
You will need to add the following to your $PATH and $LD_LIBRARY_PATH:

  export PATH=$PATH:/
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/
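
Once those are exported you can verify the setup; assuming the CUDA toolkit on the GPU nodes provides ''nvcc'', this prints the compiler version:
<code>
nvcc --version
</code>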
===== Example =====
This sbatch script will get device information from the installed Tesla GPU.
<code>
#!/bin/bash
#
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info

export PATH=$PATH:/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/

cat << EOF > /
...
rm /
</code>
==== Output ====
STDOUT will look something like this:
<code>
cnetid@linux1:~$
Device Number: 0
Device name: Tesla M2090
Memory Clock Rate (KHz): 1848000
Memory Bus Width (bits): 384
Peak Memory Bandwidth (GB/s): 177.408000
</code>
STDERR should be blank.
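
For a quick interactive check without a batch script, the following should also work (assuming ''nvidia-smi'' is installed on the GPU nodes):
<code>
srun -p gpu nvidia-smi
</code>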
====== More ======
If you feel this documentation is lacking in some way please let techstaff know: email [[techstaff@cs.uchicago.edu]].