===== Notice =====

**All is back to normal. Please submit jobs from linux1,2,3. Email techstaff@cs.uchicago.edu if you find that something is amiss.**

The SLURM cluster will become unavailable starting 2017-08-22 for an upgrade. Normal service should resume on 2017-08-25. Please check back here for status updates.

**2017-08-22 1800**: Main cluster upgraded. You can try to use it now, but I can't guarantee that I won't kill your job tomorrow or Friday.

**2017-08-23 1345**: GPU servers upgraded and added back to the cluster. They may be missing some software that could not be installed automatically during the previous round of installation. Send me an email if you find anything missing.

We still run systems with Ubuntu 14.04 installed. As of right now these systems cannot submit jobs to the cluster. This is on purpose: the SLURM version jump between 14.04 and 16.04 was too large for this to be avoidable. This means you should prefer to use linux.cs.uchicago.edu or any CS machine that runs Ubuntu 16.04.

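If you are unsure whether the machine you are logged into can submit jobs, a quick sanity check (a sketch; it assumes ''lsb_release'' and the SLURM client tools are present on that host) is:

<code bash>
# Check the OS release and the SLURM client version on this host.
lsb_release -ds     # expect an Ubuntu 16.04 release string
sbatch --version    # prints the installed SLURM client version
</code>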
**2017-08-24**: Everything seems to be working as expected. Please start using the cluster again. Email techstaff@cs.uchicago.edu if something is wrong/unexpected/broken/etc.

====== Peanut Job Submission Cluster ======

We are currently **alpha** testing and gauging user interest in a cluster of machines that allows for the submission of long-running compute jobs. Think of these machines as a dumping ground for discrete computing tasks that might be rude or disruptive to execute on the main (shared) shell servers (i.e., linux1, linux2, linux3).
  
For job submission we will be using a piece of software called [[http://slurm.schedmd.com|SLURM]]. Simply put, SLURM is a queue management system and stands for **S**imple **L**inux **U**tility for **R**esource **M**anagement; it was developed at the Lawrence Livermore National Lab. It currently supports some of the largest compute clusters in the world. The best description of SLURM can be found on its homepage:
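In practice, a job is described by a short batch script of ''#SBATCH'' directives followed by the commands to run, which you hand to ''sbatch''. Below is a minimal sketch; the job name, output file, and resource requests are hypothetical, so adjust them to your own task:

<code bash>
#!/bin/bash
#
# hello.sbatch -- a made-up minimal example for illustration only.
#SBATCH --job-name=hello_slurm
#SBATCH --output=hello_%j.out   # %j expands to the job ID
#SBATCH --time=00:05:00         # wall-clock limit for the job
#SBATCH --ntasks=1

hostname   # runs on whichever compute node SLURM assigns
</code>

Submit it with ''sbatch hello.sbatch'' and watch it in the queue with ''squeue --user=$USER''.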
#SBATCH --partition=gpu
#SBATCH --job-name=get_tesla_info

# Make the CUDA toolkit visible to the job.
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib

cat << EOF > /tmp/getinfo.cu