===== Where to begin =====
  
Slurm is a set of command line utilities available on **most** computer science systems you can log in to. Using our main shell servers (''linux.cs.uchicago.edu'') is expected to be our most common use case, so you should start there.
  
  ssh user@linux.cs.uchicago.edu
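
Once logged in, the standard Slurm client commands are on your path. A quick way to orient yourself is to list the partitions and nodes you may submit to and to check what is already queued; ''sinfo'' and ''squeue'' are stock Slurm commands, and their exact output will vary from cluster to cluster.

<code>
sinfo             # list partitions and the state of their nodes
squeue -u $USER   # list your own pending and running jobs
</code>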
=== Default Quotas ===
By default we set a job to run on one CPU with 100MB of RAM. If you require more than that, specify what you need with the following options: ''%%--mem-per-cpu%%'', ''%%--nodes%%'', ''%%--ntasks%%''.
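
As a minimal sketch, a batch script that raises those defaults might look like the following; the values are placeholders rather than recommendations, and ''my-program'' stands in for your own executable.

<code>
#!/bin/bash
#SBATCH --job-name=quota-example
#SBATCH --nodes=1             # run on a single node
#SBATCH --ntasks=4            # four tasks instead of the default single CPU
#SBATCH --mem-per-cpu=1G      # 1GB per CPU instead of the default 100MB

# srun launches one copy of the (placeholder) program per requested task
srun ./my-program
</code>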

=== MPI Usage ===
The AI cluster supports the use of MPI. The following example illustrates its basic use.

<code>
amcguire@fe01:~$ cat mpi-hello.c
#include <mpi.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv) {
    // Initialize MPI
    MPI_Init(&argc, &argv);

    // Get the number of processes in the global communicator
    int count;
    MPI_Comm_size(MPI_COMM_WORLD, &count);

    // Get the rank of the current process
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the current hostname
    char hostname[1024];
    gethostname(hostname, sizeof(hostname));

    // Print a hello world message for this rank
    printf("Hello from process %d of %d on host %s\n", rank, count, hostname);

    // Finalize the MPI environment before exiting
    MPI_Finalize();
}
amcguire@fe01:~$ cat hello-job.sh
#!/bin/bash
#SBATCH -J mpi-hello            # Job name
#SBATCH -n 2                    # Number of processes
#SBATCH -t 0:10:00              # Max wall time
#SBATCH -o hello-job.out        # Output file name

# Disable the Infiniband transport for OpenMPI (not present on all clusters)
#export OMPI_MCA_btl="^openib"

# Run the job (assumes the batch script is submitted from the same directory)
mpirun -np 2 ./mpi-hello

amcguire@fe01:~$ mpicc -o mpi-hello mpi-hello.c
amcguire@fe01:~$ ls -l mpi-hello
-rwxrwx--- 1 amcguire amcguire 16992 Jun 30 10:49 mpi-hello
amcguire@fe01:~$ sbatch -w p001,p002 -p peanut-cpu hello-job.sh
Submitted batch job 1196702
amcguire@fe01:~$ cat hello-job.out
Hello from process 0 of 2 on host p001
Hello from process 1 of 2 on host p002
</code>
  
=== Exclusive access to a node ===