  
  
===== Communication =====
==== Mailing List ====
If you are going to be a user of any of the CS Slurm clusters, please sign up for the mailing list. Downtime and other relevant information will be announced there.
  
[[ https://mailman.cs.uchicago.edu/cgi-bin/mailman/listinfo/slurm | Mailing List ]]
  
==== Discord ====
There is a dedicated text channel ''%%#slurm%%'' on the UChicago CS Discord server. Please note that this Discord server is //only// for UChicago-affiliated users. You can find a link to our Discord server on the [[start|front page]] of this wiki.
  
===== Clusters =====

We have a couple of different clusters. If you don't know where to start, please use the ''%%Peanut%%'' cluster. The ''%%AI Cluster%%'' is for GPU jobs and more advanced users.
  
  * [[slurm:peanut|Peanut Cluster]]
  
==== Peanut Cluster ====
Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., ''focal0'', ''focal1'', ..., ''focal7'').
  
Additionally, this cluster is used for courses that require it.
  
To use this cluster, there are specific nodes you need to log into. Please visit the dedicated AI cluster page for more information.
  
  
===== Where to begin =====
  
Slurm is a set of command line utilities that can be accessed from **most** any computer science system you can log in to. Using our main shell servers (''linux.cs.uchicago.edu'') is expected to be our most common use case, so you should start there.
  
  ssh user@linux.cs.uchicago.edu
  
If you want to use the AI Cluster, you will need to have previously requested access by sending in a ticket. Afterwards, you may log in to:
  
  ssh user@fe.ai.cs.uchicago.edu
=== Default Quotas ===
By default we set a job to run on one CPU and allocate 100MB of RAM. If you require more than that, specify what you need with the following options: ''%%--mem-per-cpu%%'', ''%%--nodes%%'', ''%%--ntasks%%''.
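As a minimal sketch (the job name, task count, and memory values below are only illustrative), a batch script that overrides these defaults might look like:

<code>
#!/bin/bash
#SBATCH -J quota-example        # Job name (illustrative)
#SBATCH --nodes=1               # Run on a single node
#SBATCH --ntasks=4              # Four tasks instead of the default single CPU
#SBATCH --mem-per-cpu=500       # 500MB of RAM per CPU instead of the default 100MB

# Your actual workload goes here
srun hostname
</code>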

=== MPI Usage ===
The AI cluster supports the use of MPI. The following example illustrates its basic use.

<code>
amcguire@fe01:~$ cat mpi-hello.c
#include <mpi.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv) {
    // Initialize MPI
    MPI_Init(&argc, &argv);

    // Get the number of processes in the global communicator
    int count;
    MPI_Comm_size(MPI_COMM_WORLD, &count);

    // Get the rank of the current process
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the current hostname
    char hostname[1024];
    gethostname(hostname, sizeof(hostname));

    // Print a hello world message for this rank
    printf("Hello from process %d of %d on host %s\n", rank, count, hostname);

    // Finalize the MPI environment before exiting
    MPI_Finalize();
}
amcguire@fe01:~$ cat hello-job.sh
#!/bin/bash
#SBATCH -J mpi-hello            # Job name
#SBATCH -n 2                    # Number of processes
#SBATCH -t 0:10:00              # Max wall time
#SBATCH -o hello-job.out        # Output file name

# Disable the Infiniband transport for OpenMPI (not present on all clusters)
#export OMPI_MCA_btl="^openib"

# Run the job (assumes the batch script is submitted from the same directory)
mpirun -np 2 ./mpi-hello

amcguire@fe01:~$ mpicc -o mpi-hello mpi-hello.c
amcguire@fe01:~$ ls -l mpi-hello
-rwxrwx--- 1 amcguire amcguire 16992 Jun 30 10:49 mpi-hello
amcguire@fe01:~$ sbatch -w p001,p002 -p peanut-cpu hello-job.sh
Submitted batch job 1196702
amcguire@fe01:~$ cat hello-job.out
Hello from process 0 of 2 on host p001
Hello from process 1 of 2 on host p002
</code>
  
=== Exclusive access to a node ===
Please make sure you specify $CUDA_HOME, and if you want to take advantage of cuDNN libraries you will need to append /usr/local/cuda-x.x/lib64 to the $LD_LIBRARY_PATH environment variable.
  
  cuda_version=11.1
  export CUDA_HOME=/usr/local/cuda-${cuda_version}
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64
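As a rough sketch, these exports can go straight into a GPU batch script. The job name, GPU count, and program name below are placeholders, not anything specific to our clusters:

<code>
#!/bin/bash
#SBATCH -J cuda-example         # Job name (illustrative)
#SBATCH --gres=gpu:1            # Request one GPU
#SBATCH -o cuda-example.out     # Output file name

# Point the toolchain at the CUDA install described above
cuda_version=11.1
export CUDA_HOME=/usr/local/cuda-${cuda_version}
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64

# ./my-gpu-program is a hypothetical binary; substitute your own
./my-gpu-program
</code>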
The variable name is actually misleading, since it does NOT mean the number of devices, but rather the physical device number assigned by the kernel (e.g. /dev/nvidia2).
  
For example: if you requested multiple GPUs from Slurm (--gres=gpu:2), the CUDA_VISIBLE_DEVICES variable should contain two numbers (0-3 in this case) separated by a comma (e.g. 0,1).

The numbering is relative and specific to your job. For example: two users, each with a job that requires two GPUs, could be assigned non-sequential physical GPU numbers, yet CUDA_VISIBLE_DEVICES will look like this for both users: 0,1
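If you want to check what your job was actually given, one quick way (a sketch only; add whichever partition you normally use with ''%%-p%%'') is to echo the variable from inside an allocation:

  srun --gres=gpu:2 bash -c 'echo $CUDA_VISIBLE_DEVICES'

The single quotes matter: they ensure the variable is expanded on the compute node rather than on the login node.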
  
  
STDOUT will look something like this:
<code>
cnetid@focal0:~$ cat $HOME/slurm/slurm_out/12567.gpu1.stdout
Device Number: 0
  Device name: Tesla M2090