====== Slurm ======
Slurm is similar to most other queue systems in that you write a batch script, then submit it to the queue manager. The queue manager schedules your job to run on the queue (or partition in Slurm parlance) that you designate. Below is an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.
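As a minimal sketch of that workflow (the job name, partition, file name, and command below are placeholders rather than cluster-specific values), a batch script and its submission look like this:

<code>
#!/bin/bash
#SBATCH --job-name=hello          # placeholder job name
#SBATCH --partition=general       # placeholder partition; use one listed for your cluster
#SBATCH --output=hello-%j.out     # %j expands to the job ID
#SBATCH --time=00:05:00           # wall-clock time limit

# The actual work; this example just reports which node the job ran on.
hostname
</code>

Submit it with ''sbatch hello.sbatch'', check its state with ''squeue -u $USER'', and cancel it with ''scancel <jobid>''.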
| + | |||
| + | ===== Communication ===== | ||
| + | ==== Mailing List ==== | ||
| + | If you are going to be a user of any of the CS Slurm clusters please sign up for the mailing list. Downtime and other relevant information will be announced here. | ||
| + | |||
| + | [[ https:// | ||
| + | |||
| + | ==== Discord ==== | ||
| + | There is a dedicated text channel '' | ||
===== Clusters =====

We have a couple of different clusters. If you don't know where to start, please use the ''
  * [[slurm:
==== Peanut Cluster ====
Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e.,
Additionally,
To use this cluster there are specific nodes you need to log into. Please visit the dedicated AI cluster page for more information.

===== Where to begin =====
Slurm is a set of command line utilities that can be accessed from **most** any computer science system you can log in to. Using our main shell servers (''linux.cs.uchicago.edu'') is expected to be our most common use case, so you should start there.

  ssh user@linux.cs.uchicago.edu

If you want to use the AI Cluster, you will need to have requested access beforehand by sending in a ticket. Afterwards, you may log in to:

  ssh user@fe.ai.cs.uchicago.edu
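Once you are logged in, a quick way to confirm that the Slurm client tools are available and to see which partitions you may submit to is shown below; both commands are standard Slurm utilities.

<code>
# List partitions, their state, and the nodes behind them
sinfo

# Show only your own queued and running jobs
squeue -u $USER
</code>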
Please read up on the specifics of the cluster you are interested in.
===== Documentation =====
=== Default Quotas ===
By default we set a job to run on one CPU and allocate 100MB of RAM. If you require more than that, you should specify what you need using the following options: ''

=== MPI Usage ===
The AI cluster supports the use of MPI. The following example illustrates its basic use; the file names and the exact compile and submit commands are illustrative, so adjust them to your own setup.

<code>
amcguire@fe01:~$ cat mpi-hello.c
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    // Initialize MPI
    MPI_Init(&argc, &argv);

    // Get the number of processes in the global communicator
    int count;
    MPI_Comm_size(MPI_COMM_WORLD, &count);

    // Get the rank of the current process
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the current hostname
    char hostname[1024];
    gethostname(hostname, sizeof(hostname));

    // Print a hello world message for this rank
    printf("Hello from process %d of %d on host %s\n", rank, count, hostname);

    // Finalize the MPI environment before exiting
    MPI_Finalize();
}
amcguire@fe01:~$ cat mpi-hello.sbatch
#!/bin/bash
#SBATCH -J mpi-hello
#SBATCH -n 2              # Number of processes
#SBATCH -t 0:05:00        # Time limit (example value)
#SBATCH -o hello-job.out

# Disable the Infiniband transport for OpenMPI (not present on all clusters)
#export OMPI_MCA_btl="^openib"

# Run the job (assumes the batch script is submitted from the same directory)
mpirun -np 2 ./mpi-hello

amcguire@fe01:~$ mpicc -o mpi-hello mpi-hello.c
amcguire@fe01:~$ ls -l mpi-hello
-rwxrwx--- 1 amcguire amcguire 16992 Jun 30 10:49 mpi-hello
amcguire@fe01:~$ sbatch mpi-hello.sbatch
Submitted batch job 1196702
amcguire@fe01:~$ cat hello-job.out
Hello from process 0 of 2 on host p001
Hello from process 1 of 2 on host p002
</code>
=== Exclusive access to a node ===
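A minimal sketch using Slurm's standard ''--exclusive'' option, which asks the scheduler not to place any other jobs on the node allocated to you (whether this is allowed may depend on the partition):

<code>
#!/bin/bash
#SBATCH --job-name=whole-node     # placeholder name
#SBATCH --exclusive               # do not share the allocated node with other jobs
#SBATCH --time=01:00:00           # example time limit
#SBATCH --output=whole-node.out

# The job has the node to itself; still request memory explicitly if you need more than the default.
./my_program                      # placeholder for your actual workload
</code>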
Please make sure you specify ''$CUDA_HOME'', and if you want to take advantage of the cuDNN libraries you will need to append the CUDA library directory to your ''$LD_LIBRARY_PATH'':
cuda_version=11.1
export CUDA_HOME=/usr/local/cuda-${cuda_version}              # conventional install location; adjust if your cluster differs
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64      # directory containing the CUDA (and cuDNN) runtime libraries
The name ''CUDA_VISIBLE_DEVICES'' is actually misleading, since it does NOT mean the number of devices, but rather the physical device numbers assigned by the kernel (e.g. ''/dev/nvidia0'', ''/dev/nvidia1'', ...).

For example, if you requested multiple GPUs from Slurm (''--gres=gpu:2''), ''CUDA_VISIBLE_DEVICES'' will contain one entry per GPU assigned to your job.

The numbering is relative and specific to you. For example, two users, each with one job that requires two GPUs, could be assigned non-sequential physical GPU numbers; however, ''CUDA_VISIBLE_DEVICES'' will look like this for both users: 0,1
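To check what your own job was assigned, you can print the variable from inside the job; the ''--gres'' value below is just an example:

<code>
# Run a one-off command on a GPU node and print the GPU numbers assigned to this job
srun --gres=gpu:2 bash -c 'echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"'
</code>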
STDOUT will look something like this:
<code>
cnetid@focal0:~$ cat $HOME/
Device Number: 0
Device name: Tesla M2090