====== AI Cluster - Slurm ======
[[slurm:ai|This page has moved]]

The cluster is up and running now. Anyone with a CS account who wishes to test it out should do so.

Feedback is requested:

[[https://

Prior knowledge of how to use Slurm is preferred at this stage of testing.

The information from the older cluster mostly applies, and I suggest you read that documentation:
====== Demo ======

kauffman3 is my CS test account.

<code>
$ ssh kauffman3@fe.ai.cs.uchicago.edu
</code>
I've created a couple of scripts that run some of the Slurm commands but with more useful output; cs-sinfo and cs-squeue are the only two right now.

<code>
kauffman3@fe01:~$ cs-sinfo
NODELIST
a[001-006]
a[007-008]
</code>

<code>
kauffman3@fe01:~$ cs-squeue
JOBID
</code>
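The cs-sinfo and cs-squeue wrappers themselves aren't reproduced on this page. As a rough sketch only, plain sinfo and squeue with format strings report similar per-node and per-job information; the column choices below are assumptions, not the wrappers' actual behavior.

<code>
# Hypothetical approximations of the cs-* wrappers using standard Slurm options;
# the real scripts may select different columns.
sinfo -N -o "%N %P %c %m %G %T"     # node, partition, CPUs, memory, GRES (GPUs), state
squeue -o "%i %P %j %u %T %M %R"    # job id, partition, name, user, state, elapsed time, reason/nodelist
</code>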
The script below lists the device numbers of the devices I've requested from Slurm. These numbers map to /

<code>
kauffman3@fe01:
#!/bin/bash
hostname
echo $CUDA_VISIBLE_DEVICES
</code>
Give me all four GPUs on systems 1-6:

<code>
kauffman3@fe01:
a001
0,1,2,3
a002
0,1,2,3
a006
0,1,2,3
a005
0,1,2,3
a004
0,1,2,3
a003
0,1,2,3
</code>
Give me all GPUs on systems 7-8 (these are the Quadro RTX 8000s):

<code>
kauffman3@fe01:
a008
0,1,2,3
a007
0,1,2,3
</code>
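The exact srun invocations are cut off in this copy of the page. As a sketch only, a request of the sort that produces output like the above might look as follows; the partition names come from the partition configuration later on this page, and gpu.sh stands in for the small script shown earlier.

<code>
# Sketch, not the original commands: one task per node, four GPUs per node.
srun -p geforce --nodes=6 --gres=gpu:4 bash gpu.sh   # the six GeForce nodes, a001-a006
srun -p quadro  --nodes=2 --gres=gpu:4 bash gpu.sh   # the two Quadro RTX 8000 nodes, a007-a008
</code>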
====== Storage ======
/

/
Lives on the home directory server. The idea would be to create a dataset with a quota for people to use. The normal LDAP groups that you are used to, and that are available everywhere else, would control access to these directories, e.g. jonaslab, sandlab (see the sketch below).
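As an illustration only of that group-dataset idea (nothing here is set up yet, and the pool, dataset, and path names are placeholders, not the real ones):

<code>
# Hypothetical ZFS admin commands; pool/dataset/path names are made up for illustration.
zfs create -o quota=1T pool/projects/jonaslab     # per-group dataset with a quota
chgrp jonaslab /net/projects/jonaslab             # access controlled by the usual LDAP group
chmod 2770 /net/projects/jonaslab
</code>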
Currently there is no quota on home directories.

homes and scratch are each connected via 2x 25G. Both are SSD only, so the storage should be fast.
Each compute node (nodes with GPUs) has a ZFS mirror mounted at /local. I set compression to lz4 by default; this usually gives a performance gain, since less data is read from and written to disk, at a small cost in CPU usage. As of right now there is no mechanism to clean up /local. At some point I'll probably put a find command in cron that deletes files older than 90 days or so (sketched below).
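For reference, a cleanup job along those lines (this is a sketch of the plan, not something that exists yet; the schedule and cutoff are assumptions) might be:

<code>
# Hypothetical root crontab entry: nightly at 03:00, delete files under /local
# that have not been modified in 90+ days.
0 3 * * * find /local -xdev -type f -mtime +90 -delete
</code>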
====== Asked Questions ======
> Do we have a max job runtime?

Yes, 4 hours. This is set per partition, and you are expected to write your code to accommodate it.

<code>
PartitionName=geforce Nodes=a[001-006] Default=YES DefMemPerCPU=2900 MaxTime=04:00:00
PartitionName=quadro Nodes=a[007-008] MaxTime=04:00:00
</code>
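Since MaxTime caps every job at four hours, longer work has to checkpoint and resubmit itself. A minimal sketch of one way to do that (the script name, checkpoint logic, and resource choices are assumptions, not site-provided tooling):

<code>
#!/bin/bash
#SBATCH -p geforce
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00
#SBATCH --signal=B:USR1@300   # ask Slurm to signal the batch shell 5 minutes before the limit
#SBATCH --requeue

# When the warning signal arrives, requeue this job; train.py is a placeholder
# and is expected to save checkpoints and resume from the latest one on restart.
trap 'scontrol requeue "$SLURM_JOB_ID"' USR1

srun python train.py --resume &
wait
</code>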