Deep Blizzard uses the SLURM workload manager to schedule jobs on compute nodes. To run a job, you must create a SLURM submit script that defines the resources your job requires (CPUs, memory, GPUs, walltime, etc.).
Slurm User Documentation:
https://slurm.schedmd.com/quickstart.html
Research Computing maintains a set of tested example job scripts that reflect the current cluster configuration.
To begin:
cp /mnt/it_research/examples/deepblizzard/v2/* .
Submit a simple test job:
sbatch 00_hello_world_cpu.slurm
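While the test job is queued or running, you can watch its status with squeue. The `-u $USER` filter limits the listing to your own jobs:

```shell
# Show only your own jobs in the queue (state PD = pending, R = running)
squeue -u $USER
```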
When the job completes, view the output:
cat slurm-[jobid].out
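After the job has finished, you can also confirm its final state and exit code through SLURM's accounting database:

```shell
# Replace [jobid] with the ID reported by sbatch
sacct -j [jobid] --format=JobID,JobName,State,ExitCode
```

A `State` of `COMPLETED` with `ExitCode` `0:0` indicates the job ran successfully.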
These example scripts are the authoritative reference for job submission on Deep Blizzard.
Deep Blizzard uses different SLURM partitions and accounts depending on your research group; check with Research Computing if you are unsure which values to use.

A basic submit script first changes into the directory the job was submitted from:

cd $SLURM_SUBMIT_DIR

The final section appends the job's completion time to the output file:

echo "Job completed at $(date)"
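To see which partitions are visible to you, and which accounts your user is associated with, you can query SLURM directly (partition and account names are site-specific, so confirm the right choice with Research Computing):

```shell
# Summarize the partitions you can submit to
sinfo -s

# List the account/partition associations for your user
sacctmgr show associations where user=$USER format=Account,Partition
```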
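Putting these pieces together, a minimal basic-submit.slurm might look like the sketch below. The job name, partition, account, and resource values are placeholders; substitute the values your research group uses:

```shell
#!/bin/bash
#SBATCH --job-name=basic-submit
#SBATCH --partition=<partition>   # placeholder: your group's partition
#SBATCH --account=<account>       # placeholder: your group's account
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# Run from the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# Report which compute node ran the job
hostname

# Append the job's completion time to the output file
echo "Job completed at $(date)"
```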
To run this first job, log in to the login node, create a file called basic-submit.slurm, and paste the above script into it. Then submit the script to the SLURM scheduler:

$ sbatch basic-submit.slurm
Once your job completes, you should see something like the following when you cat your output file (the actual compute node may differ):

$ cat slurm-18513.out
There are additional sample batch files on the system that you can copy into your home directory:
$ cp /mnt/it_research/examples/bioinformatics/* .
To list the additional scripts you copied into your home directory:

$ ls bio*
Because these are copies in your home directory, you can edit and modify them as needed or use them as the starting point for your own batch files.
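When adapting a copied script, note that any #SBATCH directive can also be overridden on the sbatch command line at submission time, which is handy for quick experiments. The script name below is a stand-in for one of the copied files:

```shell
# Command-line options take precedence over #SBATCH directives in the script
sbatch --time=01:00:00 --job-name=bio-test bio-example.slurm
```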