<code>
user@host:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up      30:00      1   idle slurm1
general      up 14-00:00:00      6   idle slurm[2-6,8]
pascal       up  3-00:00:00      1   idle gpu2
tesla        up  3-00:00:00      1   idle gpu1
</code>
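If you only care about part of the cluster, ''%%sinfo%%'' can be narrowed down with a couple of common options. A minimal sketch (the partition name below simply matches the listing above):

<code bash>
# Show only the 'general' partition
sinfo -p general

# Node-oriented long listing: one line per node with CPU, memory, and state details
sinfo -N -l
</code>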
  
   srun -p general --pty --cpus-per-task 1 --mem 500 -t 0-06:00 /bin/bash
 will start a command line shell (''%%/bin/bash%%'') on the 'general' queue with 500 MB of RAM for 6 hours; 1 core on 1 node is assumed, as these parameters (''%%-n 1 -N 1%%'') were left out. When the interactive session starts, you will notice that you are no longer on a login node, but rather on one of the compute nodes dedicated to this queue. The ''%%--pty%%'' option allows the session to act like a standard terminal.
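For non-interactive work, the same resources can be requested with ''%%sbatch%%''. The script below is only a minimal sketch of that translation, not a site-provided template; the job name, output file, and the final ''%%hostname%%'' command are placeholders.

<code bash>
#!/bin/bash
#SBATCH --job-name=example        # placeholder name
#SBATCH --partition=general       # same queue as the srun example above
#SBATCH --ntasks=1                # 1 task ...
#SBATCH --nodes=1                 # ... on 1 node (the defaults srun assumed: -n 1 -N 1)
#SBATCH --cpus-per-task=1         # 1 core
#SBATCH --mem=500                 # 500 MB of RAM
#SBATCH --time=0-06:00            # 6 hour limit
#SBATCH --output=%j.out           # write output to <jobid>.out

hostname                          # replace with your actual workload
</code>

Submit the script with ''%%sbatch script.sh%%'' and check on it with ''%%squeue -u USERNAME%%''.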

====== Job Scheduling ======
 We use a [[http://slurm.schedmd.com/priority_multifactor.html|multifactor]] method of job scheduling. Job priority is assigned by a combination of fair-share, partition priority, and the length of time a job has been sitting in the queue. Partition priority is the largest factor in the calculation; for certain queues this will cause jobs on lower priority queues which overlap with that queue to be requeued. The second most important factor is the fair-share score; a description of how SLURM calculates fair-share is available [[http://slurm.schedmd.com/priority_multifactor.html#fairshare|here]]. The third most important factor is waiting time: the longer your job sits in the queue, the higher its priority grows. If everyone's priority is equal, scheduling falls back to FIFO. To see your current priority, run ''%%sprio -j JOBID%%'', which shows the calculation used to arrive at your job's priority. ''%%sshare -u USERNAME%%'' shows your current fair-share and usage.((https://rc.fas.harvard.edu/resources/running-jobs))
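For example, a quick look at scheduling state might go like this (''%%JOBID%%'' and ''%%USERNAME%%'' are placeholders):

<code bash>
# Per-factor priority breakdown for one queued job
sprio -j JOBID

# Priority breakdown for all pending jobs, long format
sprio -l

# Your current fair-share score and recorded usage
sshare -u USERNAME
</code>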
  