===== Notice =====
**2017-08-31**: Configuration change to allow allocation of CPUs and RAM. Please read the 'Default Quotas' section under https://howto.cs.uchicago.edu/techstaff:slurm#usage

====== Peanut Job Submission Cluster ======

We are currently **alpha** testing and gauging user interest in a cluster of machines that allows for the submission of long-running compute jobs. Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).
  
For job submission we will be using a piece of software called [[http://slurm.schedmd.com|SLURM]]. Simply put, SLURM is a queue management system and stands for **S**imple **L**inux **U**tility for **R**esource **M**anagement; it was developed at the Lawrence Livermore National Lab. It currently supports some of the largest compute clusters in the world. The best description of SLURM can be found on its homepage:
  * [[http://slurm.schedmd.com/tutorials.html|SLURM tutorial videos]]
  * [[https://computing.llnl.gov/linux/slurm/quickstart.html|LLNL quick start user guide]]
  * [[http://research.computing.yale.edu/support/hpc/user-guide/slurm|Yale's User Guide]]
===== Infrastructure =====
  
  * 64GB RAM
  * 2x 500GB SATA 7200RPM in RAID1
  
==== Storage ====
===== Usage =====
Below are some common examples. You should consult the [[http://slurm.schedmd.com/documentation.html|documentation]] of SLURM if you need further assistance.
=== Default Quotas ===
By default a job is allocated one CPU and 100MB of RAM. If you require more than that, specify what you need using the following options: ''%%--mem-per-cpu%%'', ''%%--nodes%%'', ''%%--ntasks%%''.
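For example, a batch-script preamble that asks for more than the defaults might look like this (a minimal sketch; the numbers are illustrative, not recommendations):
<code>
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=1000
</code>
Here ''%%--mem-per-cpu%%'' is given in megabytes, so this job asks for 4 CPUs and 1000MB of RAM per CPU.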
  
=== Exclusive access to a node ===
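If you need a whole node to yourself, SLURM's ''%%--exclusive%%'' flag asks the scheduler not to place other jobs on the node(s) allocated to you. A minimal sketch (the interactive-shell form and the ''%%debug%%'' partition are only illustrative):
<code>
srun --exclusive --partition=debug --pty /bin/bash
</code>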
=== Sample script ===
Make sure you create a directory in which to deposit the ''%%STDIN%%'', ''%%STDOUT%%'', ''%%STDERR%%'' files.
  mkdir -p $HOME/slurm/out
  
<code>
#SBATCH --mail-user=cnetid@cs.uchicago.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/home/cnetid/slurm/out/%j.%N.stdout
#SBATCH --error=/home/cnetid/slurm/out/%j.%N.stderr
#SBATCH --workdir=/home/cnetid/slurm
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks=1
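# --mem-per-cpu is in megabytes; this asks for 500MB of RAM per CPU instead of the 100MB default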
#SBATCH --mem-per-cpu=500
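# time limit in minutes:seconds, i.e. 15 minutes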
#SBATCH --time=15:00
  
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info
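# make the CUDA toolkit's compiler (nvcc) and libraries visible to the job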
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib
  
cat << EOF > /tmp/getinfo.cu