techstaff:slurm
SLURM is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in SLURM parlance) that you designate. Below is an outline of how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor progress.
===== Where to begin =====
SLURM is a set of command line utilities that can be run from almost any Computer Science system you can log in to. Using our main shell servers (linux.cs.uchicago.edu) is expected to be the most common use case, so you should start there.

  ssh user@linux.cs.uchicago.edu
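
Once logged in, a quick way to confirm the SLURM utilities are actually available on that machine:

<code>
# these should all resolve to real binaries and print a version string
which sbatch srun squeue sinfo
sbatch --version
</code>
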
===== Documentation =====
There is slow scratch space mounted on the cluster nodes.
  * Files older than 90 days will be deleted automatically.
  * Scratch space is shared by all users.

=== Access ===
Scratch space is only mounted on nodes associated with the cluster, so if you want to transfer files onto it you will need to do so from within an interactive job (see the example below).

  - Only do file transfers via the ''debug'' partition.
  - Once you have an interactive shell on a cluster node, create a directory of your own in the scratch space and work there.

== Example ==

Request an interactive shell:
<code>
srun -p debug --pty /bin/bash
</code>

Change into my scratch directory:
<code>
user@research2:~$ cd /scratch/user
</code>

Get the files I need:
<code>
user@research2:/scratch/user$ scp user@linux.cs.uchicago.edu:foo .
foo                                          100%  103KB 102.7KB/s   00:00
</code>

Check that the file now exists:
<code>
user@research2:/scratch/user$ ls -l foo
-rw------- 1 user user 105121 Dec 29 2015 foo
</code>

I can now exit my interactive shell.

== Performance is slow ==
This is expected. The maximum speed this server will ever be able to achieve is 1Gb/s because of its single 1G ethernet uplink. If this cluster gains in popularity we plan to upgrade the network and the storage server.
==== Utilization Dashboard ====
Sometimes it is useful to see how much of the cluster is utilized. You can do that via the following URL: http://
==== Partitions / Queues ====
To find out what partitions we offer, check the table below or run ''sinfo'' (example output is shown later on this page).

As of December 2015 we have at least 2 partitions in our cluster; they are described in the following table.
^ Partition Name ^ Description ^
| **debug** | The partition your job will be submitted to if none is specified. The purpose of this partition is to make sure your code is running as it should before submitting a long running job to the general queue. |
| **general** | All jobs that have been thoroughly tested can be submitted here. This partition will have access to more nodes and will process most of the jobs. |
| **gpu** | Contains servers with graphics cards. As of May 2016 there is only one node containing a Tesla M2090. |
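
If you do not name a partition your job lands in ''debug''; once your code is tested you should target ''general'' (or ''gpu'') explicitly. A minimal sketch of both ways to do that, using the ''hostname.job'' script described under Job Submission below:

<code>
# inside the job script
#SBATCH --partition=general

# or on the command line at submission time, which overrides the script
sbatch -p general hostname.job
</code>
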
====== Job Submission ======
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --job-name=check_hostname_of_node
man sbatch
Make sure to replace all instances of the word ''cnetid'' with your own CNetID.

=== Submitting job script ===
Using the above example, place your tested code into a file; here it is called ''hostname.job''. Then submit it with ''sbatch'':
<code>
sbatch hostname.job
</code>
You can then check the status via ''squeue'' or see the output in the output directory given by ''--output'' in the job script.
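
For example (the job ID, node name, and output path below are illustrative; use whatever your own submission reports and whatever you set in ''--output''):

<code>
# list only your own pending and running jobs
squeue -u cnetid

# read the job's stdout file once it has finished
cat /home/cnetid/slurm/slurm_out/12345.slurm1.stdout
</code>
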
==== srun ====
Used to submit a job to the cluster that doesn't require a batch script.
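
For example, a quick one-off command can be run on a compute node without writing a batch script at all (the partition and memory request here are just illustrative values):

<code>
# run one command under SLURM and print which node executed it
srun -p debug --mem=500 hostname
</code>
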
PARTITION AVAIL TIMELIMIT
debug*
general
</code>
This command:
  srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
will start a command line shell (''/bin/bash'') on the ''general'' partition with 1 CPU, 500 MB of memory, and a 6 hour time limit.
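
Once the session starts you are no longer on the login server but on one of the compute nodes, and exiting the shell releases the allocation. An illustrative session (host names will differ):

<code>
cnetid@linux1:~$ srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
cnetid@slurm1:~$ hostname
slurm1
cnetid@slurm1:~$ exit
cnetid@linux1:~$
</code>
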
====== Job Scheduling ======
| error: Unable to allocate resources: More processors requested than permitted | It usually has **nothing** to do with privileges you may or may not have. Rather, it usually means that you have requested more processors than one compute node actually has. |
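
For illustration (the core count and program name here are hypothetical): a single task cannot be split across machines, so asking for more CPUs per task than any one node has will fail, while the same total work expressed as many single-CPU tasks can be spread over several nodes.

<code>
# fails if no single node has 32 cores: one task cannot span nodes
srun --cpus-per-task=32 ./my_program

# usually fine: 32 one-core tasks may be spread across several nodes
srun --ntasks=32 ./my_program
</code>
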
====== Using the GPU ======
===== Example =====
This sbatch script will get device information from the installed Tesla GPU.
<code>
#!/bin/bash
#
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/slurm_out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/slurm_out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info

# write a small CUDA program that queries the device properties
# (the temporary file name is arbitrary)
cat << EOF > /tmp/getinfo.cu
#include <stdio.h>

int main() {
  int nDevices;

  cudaGetDeviceCount(&nDevices);
  for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device Number: %d\n", i);
    printf("Device name: %s\n", prop.name);
    printf("Memory Clock Rate (KHz): %d\n", prop.memoryClockRate);
    printf("Memory Bus Width (bits): %d\n", prop.memoryBusWidth);
    printf("Peak Memory Bandwidth (GB/s): %f\n\n",
           2.0*prop.memoryClockRate*(prop.memoryBusWidth/8)/1.0e6);
  }
}
EOF

# compile with the CUDA toolkit's nvcc (path may differ), run it, then clean up
/usr/local/cuda/bin/nvcc /tmp/getinfo.cu -o /tmp/a.out
/tmp/a.out
rm /tmp/a.out
rm /tmp/getinfo.cu
</code>
==== Output ====
STDOUT will look something like this:
<code>
cnetid@linux1:~$ cat /home/cnetid/slurm/slurm_out/<jobid>.<nodename>.stdout
Device Number: 0
Device name: Tesla M2090
Memory Clock Rate (KHz): 1848000
Memory Bus Width (bits): 384
Peak Memory Bandwidth (GB/s): 177.408000
</code>
STDERR should be blank.
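
If you want to double check that from the shell, look at the file named by ''--error'' in the script above (the job ID and node name in the file name come from ''%j'' and ''%N''):

<code>
# substitute the job id and node name from your own run
STDERR_FILE=/home/cnetid/slurm/slurm_out/12345.gpu1.stderr
# -s is true only when the file exists and is not empty
test -s "$STDERR_FILE" && echo "errors were logged" || echo "stderr is empty"
</code>
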
====== More ======
If you feel this documentation is lacking in some way please let techstaff know. Email [[techstaff@cs.uchicago.edu]] or call (773-702-1031).