UMBC High Performance Computing Facility
Please note that this page is under construction. We are documenting the 240-node cluster maya that will be available after Summer 2014. Currently, the 84-node cluster tara still operates independently, until it becomes part of maya at the end of Summer 2014. Please see the 2013 Resources Pages under the Resources tab for tara information.
How to run programs on maya

Introduction

Running a program on maya is a bit different than running one on a standard workstation. When we log into the cluster, we are interacting with the front end node. But we would like our programs to run on the compute nodes, which is where the real computing power of the cluster is. We will walk through the processes of running serial and parallel code on the cluster, and then later discuss some of the finer details. This page uses the code examples from the compilation tutorial. Please download and compile those examples, so you can follow along.

Resource-intensive jobs (long running, high memory demand, etc.) should be run on the compute nodes of the cluster. You cannot execute jobs directly on the compute nodes yourself; you must request that the cluster's batch system run them on your behalf. To use the batch system, you will submit a special script which contains instructions to execute your job on the compute nodes. When submitting your job, you specify a partition (a group of nodes: for testing vs. production) and a QOS (a classification that determines what kind of resources your job will need). Your job will wait in the queue until it is "next in line" and free processors on the compute nodes become available. Which job is "next in line" is determined by the scheduling policy of the cluster. Once a job is started, it continues until it either completes or reaches its time limit, in which case it is terminated by the system.

The batch system used on maya is called SLURM, which is short for Simple Linux Utility for Resource Management. Users transitioning from the cluster hpc should be aware that SLURM behaves a bit differently than PBS, and the scripting is a little different too. Unfortunately, this means you will need to rewrite your batch scripts. However, many of the confusing points of PBS, such as requesting the number of nodes and tasks per node, are simplified in SLURM.

Scheduling Fundamentals on maya: Partitions, QOS's, and more

Please first read the Scheduling Policy web page for complete background on the available queues and their limitations.

Interacting with the Batch System

There are several basic commands you'll need to know to submit jobs, cancel them, and check their status: scancel, sbatch, squeue, and sinfo. Each is described below.

See the job monitoring page for more detailed information about monitoring your jobs.

scancel

The first command we will mention is scancel. If you've submitted a job that you no longer want, you should be a responsible user and kill it. This will prevent resources from being wasted, and allow other users' jobs to run. Jobs can be killed while they are pending (waiting to run), or while they are actually running. To remove a job from the queue or to cancel a running job cleanly, use the scancel command with the identifier of the job to be deleted. For instance:
[araim1@maya-usr1 hello_serial]$ scancel 636
[araim1@maya-usr1 hello_serial]$
The job identifier can be obtained from the job listing from squeue (see below) or you might have noted it from the response of the call to sbatch, when you originally submitted the job (also below). Try "man scancel" for more information.
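Beyond canceling a single job by its identifier, scancel also accepts filters. As a sketch of commands to be run on the cluster (these use standard scancel options):

```shell
# Cancel all of your own jobs, regardless of state:
scancel --user=$USER

# Cancel only your jobs that are still waiting in the queue:
scancel --user=$USER --state=PENDING
```

Filtering by your own username is safe, since the scheduler will not let you cancel other users' jobs anyway.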

sbatch

Now that we know how to cancel a job, we will see how to submit one. You can use the sbatch command to submit a script to the batch queue system.
[araim1@maya-usr1 hello_serial]$ sbatch run.slurm
sbatch: Submitted batch job 2626
[araim1@maya-usr1 hello_serial]$ 
In this example run.slurm is the script we are sending to the batch queue system. We will see shortly how to formulate such a script. Notice that sbatch returns a job identifier. We can use this to kill the job later if necessary, or to check its status. For more information, check the man page by running "man sbatch".
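If you want to capture the job identifier for use in your own scripts, for example to cancel the job later, sbatch's --parsable option prints only the ID. A small sketch, to be run on the cluster:

```shell
# Submit the job and keep only the numeric job ID.
jobid=$(sbatch --parsable run.slurm)
echo "Submitted job $jobid"

# The saved ID can then be used with other commands, e.g.:
# scancel "$jobid"
```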

squeue

You can use the squeue command to check the status of jobs in the batch queue system. Here's an example of the basic usage:
[araim1@maya-usr1 ~]$ squeue
  JOBID PARTITION     NAME     USER  ST       TIME  NODES QOS    NODELIST(REASON)
   2564     batch   MPI_DG  gobbert  PD       0:00     64 medium (Resources)
   2626     batch fmMle_no   araim1   R       0:02      4 normal n[9-12]
   2579     batch   MPI_DG  gobbert   R 1-02:40:36      2 long   n[7-8]
   2615     batch     test   aaronk   R    2:41:51     32 medium n[3-6,14-41]
[araim1@maya-usr1 ~]$ 
The most interesting column is the one titled ST for "status". It shows what a job is doing at this point in time. The state "PD" indicates that the job has been queued. When free processor cores become available and the job is "next in line", it will change to the "R" state and begin executing. You may also see a job with status "CG", which means it's completing and about to exit the batch system. Other statuses are possible too; see the man page for squeue. Once a job has exited the batch queue system, it will no longer show up in the squeue display.
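Since squeue prints plain whitespace-separated columns, its output can be post-processed with standard tools. As a small sketch, using a saved copy of the sample listing above rather than a live cluster, this counts jobs by state:

```shell
# Sample squeue output, captured from the listing above.
squeue_sample='  JOBID PARTITION     NAME     USER  ST       TIME  NODES QOS    NODELIST(REASON)
   2564     batch   MPI_DG  gobbert  PD       0:00     64 medium (Resources)
   2626     batch fmMle_no   araim1   R       0:02      4 normal n[9-12]
   2579     batch   MPI_DG  gobbert   R 1-02:40:36      2 long   n[7-8]
   2615     batch     test   aaronk   R    2:41:51     32 medium n[3-6,14-41]'

# Skip the header line, then tally the ST (5th) column.
printf '%s\n' "$squeue_sample" |
  awk 'NR > 1 { count[$5]++ } END { for (s in count) print s, count[s] }'
```

On a live system you would pipe squeue itself into the same awk command.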

We can also see several other pieces of useful information. The TIME column shows the current walltime used by the job. For example, job 2579 has been running for 1 day, 2 hours, 40 minutes, and 36 seconds. The NODELIST column shows which compute nodes have been assigned to the job. For job 2626, nodes n9, n10, n11, and n12 are being used. However for job 2564, we can see that it's pending because it's waiting on resources.
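The D-HH:MM:SS walltime format can be converted to plain seconds with a small shell function. This helper is a hypothetical convenience for your own scripts, not part of SLURM:

```shell
# Convert a SLURM elapsed time (e.g. "1-02:40:36" or "2:41:51") to seconds.
slurm_to_seconds() {
  local t=$1 days=0
  case "$t" in
    *-*) days=${t%%-*}; t=${t#*-} ;;   # strip the optional "days-" prefix
  esac
  local IFS=:
  set -- $t                            # split remaining H:M:S on colons
  local h=0 m=0 s=0
  if [ $# -eq 3 ]; then h=$1; m=$2; s=$3
  elif [ $# -eq 2 ]; then m=$1; s=$2
  else s=$1; fi
  echo $(( 10#$days*86400 + 10#$h*3600 + 10#$m*60 + 10#$s ))
}

slurm_to_seconds "1-02:40:36"   # prints 96036
```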

sinfo

The sinfo command also shows the current status of the batch system, but from the point of view of the SLURM partitions. Here is an example:
[gobbert@maya-usr1 hello-serial]$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
develop*     up      31:00      2   idle n[1-2]
batch        up   infinite     82   idle n[3-84]
[gobbert@maya-usr1 hello-serial]$ 

Running Serial Jobs

This section assumes you've already compiled the serial hello world example. Now we'll see how to run it several different ways.

Test runs on the front-end node

The most obvious way to run the program is on the front end node, which we normally log into.
[hu6@maya-usr1 hello_serial]$ ./hello_serial 
Hello world from maya-usr1
[hu6@maya-usr1 hello_serial]$

We can see the reported hostname which confirms that the program ran on the front end node.

Jobs should usually be run on the front end node only for testing purposes. The purpose of the front end node is to develop code and submit jobs to the compute nodes. Everyone who uses maya must interact with the front end node, so slowing it down will affect all users. Therefore, the usage policy prohibits the use of the front end node for running jobs. One exception to this rule is graphical post-processing of results that can only be done interactively in some software packages, for instance, COMSOL Multiphysics. (Our "hello world" example here uses very little memory, runs very quickly, and is run on the front end node exactly for testing purposes as part of this tutorial.)

Test runs on the develop partition

Let's submit our job to the develop partition, since we have just created the program and are not completely sure that it works. The following script will accomplish this. Save it to your account alongside the "hello_serial" executable.
#!/bin/bash
#SBATCH --job-name=hello_serial
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop

./hello_serial

Download:
../code/hello_serial/run-testing.slurm
Here, the partition flag chooses the develop partition. The output and error flags set the file names for capturing standard output (stdout) and standard error (stderr), respectively, and the job-name flag simply sets the string that is displayed as the name of the job in squeue. Now we're ready to submit our job to the scheduler. To accomplish this, use the sbatch command as follows:
[araim1@maya-usr1 hello_serial]$ sbatch run-testing.slurm
sbatch: Submitted batch job 2626
[araim1@maya-usr1 hello_serial]$
If the submission was successful, the sbatch command returns a job identifier to us. We can use this to check the status of the job (squeue), or delete it (scancel) if necessary. This job should run very quickly if there are processors available, but we can try to check its status in the batch queue system. The following command shows that our job is not in the system - it is so quick that it has already completed!
[araim1@maya-usr1 hello_serial]$ squeue
  JOBID PARTITION     NAME     USER ST       TIME  NODES QOS     NODELIST(REASON)
[araim1@maya-usr1 hello_serial]$
We should have obtained two output files. The file slurm.err contains stderr output from our program. If slurm.err isn't empty, check the contents carefully as something may have gone wrong. The file slurm.out contains our stdout output; it should contain the hello world message from our program.
[araim1@maya-usr1 hello_serial]$ ls slurm.*
slurm.err  slurm.out
[araim1@maya-usr1 hello_serial]$ cat slurm.err 
[araim1@maya-usr1 hello_serial]$ cat slurm.out
Hello world from n70
[araim1@maya-usr1 hello_serial]$ 
Notice that the hostname no longer matches the front end node, but one of the test nodes. We've successfully used one of the compute nodes to run our job. The develop partition limits jobs to five minutes by default, measured in "walltime", which is just the elapsed run time. The limit can be raised to up to 30 minutes using the --time flag; details are given later on this page. After your job has reached this time limit, it is stopped by the scheduler. This is done to ensure that everyone has a fair chance to use the cluster.
Note that with SLURM, the stdout and stderr files (slurm.out and slurm.err) will be written gradually as your job executes. This is different than PBS which was used on hpc, where stdout/stderr files did not exist until the job completed.
The stdout and stderr mechanisms in the batch system are not intended for large amounts of output. If your program writes out more than a few KB of output, consider using file I/O to write to logs or data files.

Production runs on the batch partition

Once our job has been tested and we're confident that it's working correctly, we can run it in the batch partition. Now the walltime limit for our job will be raised, based on the QOS we choose. There are also many more compute nodes available in this partition, so we probably won't have to wait long to find a free processor. Start by creating the following script.
#!/bin/bash
#SBATCH --job-name=hello_serial
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --qos=normal

./hello_serial

Download: ../code/hello_serial/run-serial.slurm
The flags for job-name, output, and error are the same as in the previous script. The partition flag is now set to batch. Additionally, the qos flag chooses the normal QOS. This is the default QOS, so the result would be the same if we had not specified any QOS; we recommend always specifying a QOS explicitly for clarity. To submit our job to the scheduler, we issue the command
[araim1@maya-usr1 hello_serial]$ sbatch run-serial.slurm
sbatch: Submitted batch job 2626
[araim1@maya-usr1 hello_serial]$
We can check the job's status, but because it finishes so quickly, it has already completed and no longer shows up.
[araim1@maya-usr1 hello_serial]$ squeue
  JOBID PARTITION     NAME     USER ST       TIME  NODES QOS     NODELIST(REASON)
[araim1@maya-usr1 hello_serial]$
This time our stdout output file indicates that our job has run on one of the primary compute nodes, rather than a develop node
[araim1@maya-usr1 hello_serial]$ ls slurm.*
slurm.err  slurm.out
[araim1@maya-usr1 hello_serial]$ cat slurm.err 
[araim1@maya-usr1 hello_serial]$ cat slurm.out
Hello world from n71
[araim1@maya-usr1 hello_serial]$ 
When using the batch partition, you'll be sharing resources with other researchers. So keep your duties as a responsible user in mind, which are described in this tutorial and in the usage policy.

Selecting a QOS

Notice that we specified the normal QOS with the qos flag. Because we know our job is very quick, a more appropriate QOS would be short. To specify the short QOS, replace normal by short to get the line

#SBATCH --qos=short
in the submission script. In the same way, you can access any of the QOS's listed in the scheduling policy. The rule of thumb is that you should always choose the QOS whose wall time limit is the most appropriate for your job. Realizing that these limits are hard upper limits, you will want to stay safely under them, or in other words, pick a QOS whose wall time limit is comfortably larger than the actually expected run time.
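Putting it together, the production script from above with the short QOS selected would read:

```shell
#!/bin/bash
#SBATCH --job-name=hello_serial
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --qos=short

./hello_serial
```

This is the same script as before with only the qos line changed.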

Note that the QOS of each job is shown by default in the squeue output. We have set this up on maya as a convenience, by setting the SQUEUE_FORMAT environment variable.

The develop partition only has one QOS, namely the SLURM default normal, so expect to see "normal" in the QOS column for any job in the develop partition.

Running Parallel Jobs

This section assumes that you've successfully compiled the parallel hello world example. Now we'll see how to run this program on the cluster.

Test runs on the develop partition

Example 1: Single process

First we will run the hello_parallel program as a single process. This will appear very similar to the serial job case. The difference is that now we are using the MPI-enabled executable hello_parallel, rather than the plain hello_serial executable. Create the following script in the same directory as the hello_parallel program. Notice the addition of the "srun" command before the executable, which is used to launch MPI-enabled programs. We've also added "--nodes=1" and "--ntasks-per-node=1" to specify what kind of resources we'll need for our parallel program.
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

srun ./hello_parallel

Download:
../code/hello_parallel/intel-np1.slurm
Now submit the script
[hu6@maya-usr1 hpcfweb]$ sbatch intel-np1.slurm 
Submitted batch job 21503
[hu6@maya-usr1 hpcfweb]$
Checking the output after the job has completed, we can see that exactly one process has run and reported back.
[hu6@maya-usr1 hpcfweb]$ ls slurm.*
slurm.err  slurm.out
[hu6@maya-usr1 hpcfweb]$ cat slurm.err 
[hu6@maya-usr1 hpcfweb]$ cat slurm.out 
Hello world from process 000 out of 001, processor name n70

Example 2: One node, two processes

Next we will run the job on two processes of the same node. This is one important test, to ensure that our code will function in parallel. We want to be especially careful that the communications work correctly, and that processes don't hang. We modify the single process script and set "--ntasks-per-node=2".
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2

srun ./hello_parallel

Download: ../code/hello_parallel/intel-ppn2.slurm
Submit the script to the batch queue system
[hu6@maya-usr1 hpcfweb]$ sbatch intel-ppn2.slurm
sbatch: Submitted batch job 2626
[hu6@maya-usr1 hpcfweb]$
Now observe that two processes have run and reported in. Both were located on the same node as we expected.
[araim1@maya-usr1 hello_parallel]$ ls slurm.*
slurm.err  slurm.out
[araim1@maya-usr1 hello_parallel]$ cat slurm.err 
[araim1@maya-usr1 hello_parallel]$ cat slurm.out 
Hello world from process 000 out of 002, processor name n1
Hello world from process 001 out of 002, processor name n1
[araim1@maya-usr1 hello_parallel]$ 

Example 3: Two nodes, one process per node

Now let's try to use two different nodes, but only one process on each node. This will exercise our program's use of the high performance network, which didn't come into the picture when a single node was used.
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1

srun ./hello_parallel

Download: ../code/hello_parallel/intel-nodes2-ppn1.slurm
Submit the script to the batch queue system
[hu6@maya-usr1 hpcfweb]$ sbatch intel-nodes2-ppn1.slurm 
Submitted batch job 21505
Notice that again we have two processes, but this time they have distinct processor names.
[hu6@maya-usr1 hpcfweb]$ ls slurm.*
slurm.err  slurm.out
[hu6@maya-usr1 hpcfweb]$ cat slurm.err 
[hu6@maya-usr1 hpcfweb]$ cat slurm.out 
Hello world from process 000 out of 002, processor name n1
Hello world from process 001 out of 002, processor name n2
[hu6@maya-usr1 hpcfweb]$ 

Example 4: Two nodes, eight processes per node

To illustrate the use of more processes, let's try a job that uses two nodes, with eight processes on each node. This is still possible on the develop partition, so small performance studies can be run entirely within it. Use the following batch script:
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8

srun ./hello_parallel

Download: ../code/hello_parallel/intel-nodes2-ppn8.slurm
Submit the script to the batch system
[araim1@maya-usr1 hello_parallel]$ sbatch intel-nodes2-ppn8.slurm
sbatch: Submitted batch job 2626
[araim1@maya-usr1 hello_parallel]$
For reference, here is the output of squeue for this job (using the SQUEUE_FORMAT environment variable setting mentioned earlier):
[araim1@maya-usr1 hello_parallel]$ squeue
  JOBID PARTITION     NAME     USER ST       TIME  NODES QOS     NODELIST(REASON)
  61911   develop hello_pa   araim1  R       0:02      2 normal  n[1-2]
Now observe the output. Notice that the processes have reported back in a non-deterministic order, and there are eight per node if you count them.
[araim1@maya-usr1 hello_parallel]$ ls slurm.*
slurm.err  slurm.out
[araim1@maya-usr1 hello_parallel]$ cat slurm.err 
[araim1@maya-usr1 hello_parallel]$ cat slurm.out
Hello world from process 002 out of 016, processor name n1
Hello world from process 011 out of 016, processor name n2
Hello world from process 014 out of 016, processor name n2
Hello world from process 006 out of 016, processor name n1
Hello world from process 010 out of 016, processor name n2
Hello world from process 007 out of 016, processor name n1
Hello world from process 001 out of 016, processor name n1
Hello world from process 015 out of 016, processor name n2
Hello world from process 000 out of 016, processor name n1
Hello world from process 008 out of 016, processor name n2
Hello world from process 003 out of 016, processor name n1
Hello world from process 012 out of 016, processor name n2
Hello world from process 005 out of 016, processor name n1
Hello world from process 013 out of 016, processor name n2
Hello world from process 004 out of 016, processor name n1
Hello world from process 009 out of 016, processor name n2
[araim1@maya-usr1 hello_parallel]$

Production runs on the batch partition

Now we've tested our program in several important configurations in the develop partition. We know that it performs correctly, and processes do not hang. We may now want to solve larger problems which are more time consuming, or perhaps we may wish to use more processes. We can promote our code to "production", by simply changing "--partition=develop" to "--partition=batch". We may also want to specify "--qos=short" as before, since the expected run time of our job is several seconds at most. Of course if this were a more substantial program, we might need to specify a longer QOS like normal, medium, or long.
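For instance, the two-node, one-process-per-node script from Example 3, promoted to production with the short QOS, would read:

```shell
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --qos=short
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1

srun ./hello_parallel
```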

Node selection

The maya cluster has several different types of nodes available, and users may want to select certain kinds of nodes to suit their jobs. In this section we will discuss how to do this.

The following variation of the sinfo command shows some basic information about nodes in maya.

[hu6@maya-usr1 ~]$ sinfo -o "%10N %8z %8m %40f %10G"
NODELIST   S:C:T    MEMORY   FEATURES                                 GRES      
n[34-51]   2:8:1    60908    hpcf2013,e5_2650v2,mic,mic_5110p,michost mic:2     
n34-mic[0- 1:240:1  1        miccard                                  (null)    
n[70-153]  2:4:1    1+       hpcf2010,x5560                           (null)    
n[1-33]    2:8:1    1+       hpcf2013,e5_2650v2                       (null)    
n[52-69]   2:8:1    60908    hpcf2013,e5_2650v2,gpu,gpu_k20           gpu:2     
[hu6@maya-usr1 ~]$ 
The following extensions of the above "sinfo" command may be useful to view the availability of each node type. In the last column, the four numbers correspond to the number of nodes which are currently in the following states: A = allocated, I = idle, O = other, and T = total.
[araim1@maya-usr1 ~]$ sinfo -o "%10N %8z %8m %40f %10G %F"
NODELIST   S:C:T    MEMORY   FEATURES                                 GRES       NODES(A/I/O/T)
n[1-33]    2+:1+:1  64508    hpcf2013,e5_2650v2                       (null)     30/3/0/33
n[34-51]   2:8:1    64508    hpcf2013,e5_2650v2,phi,phi_5110p,michost mic:2      18/0/0/18
n[52-69]   2:8:1    64510    hpcf2013,e5_2650v2,gpu,gpu_k20           gpu:2      16/2/0/18
n[70-153]  2+:1+:1  20108+   hpcf2010,x5560                           (null)     0/82/2/84
[araim1@maya-usr1 ~]$
Nodes marked as allocated might still have available processors. The following command, using %C instead of %F, will show us availability at the processor level.
[araim1@maya-usr1 ~]$ sinfo -o "%10N %8z %8m %40f %10G %C"
NODELIST   S:C:T    MEMORY   FEATURES                                 GRES       CPUS(A/I/O/T)
n[1-33]    2+:1+:1  64508    hpcf2013,e5_2650v2                       (null)     480/48/0/528
n[34-51]   2:8:1    64508    hpcf2013,e5_2650v2,phi,phi_5110p,michost mic:2      288/0/0/288
n[52-69]   2:8:1    64510    hpcf2013,e5_2650v2,gpu,gpu_k20           gpu:2      256/32/0/288
n[70-153]  2+:1+:1  20108+   hpcf2010,x5560                           (null)     0/656/16/672
[araim1@maya-usr1 ~]$ 
Other types of summaries are possible as well; try "man sinfo" for more information.
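The slash-separated A/I/O/T fields can be split apart with standard tools. A small sketch, using one row of the sample "%C" output above as input:

```shell
# One row of the "sinfo -o ... %C" output above.
row='n[1-33]    2+:1+:1  64508    hpcf2013,e5_2650v2   (null)     480/48/0/528'

# The last field is CPUS(A/I/O/T); print the idle (2nd) count.
idle=$(printf '%s\n' "$row" | awk '{ split($NF, c, "/"); print c[2] }')
echo "$idle idle CPUs"   # prints "48 idle CPUs"
```

On a live system you would pipe the sinfo command itself into the same awk command.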

As we will see below, the "features" correspond to things that can be specified using the --constraint option, and the "gres" corresponds to things that can be specified by "--gres".

Select nodes by CPU type

To demonstrate node selection on maya, first consider a very simple batch script which does not specify any type of node.
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=3

srun hostname

Download: ../code/select-node-type/default/run.slurm
[hu6@maya-usr1 hpcfweb]$ sbatch run.slurm 
Submitted batch job 21506
[hu6@maya-usr1 hpcfweb]$ cat slurm.err 
[hu6@maya-usr1 hpcfweb]$ cat slurm.out 
n71
n71
n71
n72
n72
n72
[hu6@maya-usr1 hpcfweb]$ 
Notice that we are assigned two nodes from the hpcf2010 equipment. This can be verified by checking the table of hostnames at System Description.

Suppose we would like to use the hpcf2013 nodes instead. This can be accomplished with the "--constraint" option and specifying the feature "hpcf2013". Recall that the list of features was obtained above from the sinfo output.

#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=3
#SBATCH --constraint=hpcf2013

srun hostname

Download: ../code/select-node-type/cpu/run.slurm
[hu6@maya-usr1 hpcfweb]$ sbatch run.slurm 
Submitted batch job 21507
[hu6@maya-usr1 hpcfweb]$ cat slurm.err 
[hu6@maya-usr1 hpcfweb]$ cat slurm.out 
n3
n3
n3
n4
n4
n4
[hu6@maya-usr1 hpcfweb]$

Select GPU nodes

Selecting GPU-enabled nodes can be done in two ways: using the "--constraint" option or the "--gres" option. It is important to understand both options, as they are useful in different situations; however, we believe the "--gres" option will be preferred for the vast majority of GPU jobs.

Specifying "--constraint=gpu" or "--constraint=gpu_k20" says that all requested nodes in the job must have this feature. It does not say whether we are using zero, one, or two GPUs. In other words, the fact that the node has a GPU is treated as an intrinsic property of the node, and not a resource within the node that can be allocated (like a processor or memory). For this reason, we ask that you also use "--exclusive" to ensure that your job does not interfere with other GPU users.

#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=3
#SBATCH --constraint=gpu_k20
#SBATCH --exclusive

srun hostname

Download: ../code/select-node-type/gpu-constraint/run.slurm
[araim1@maya-usr1 gpu-constraint]$ sbatch run.slurm
[araim1@maya-usr1 gpu-constraint]$ cat slurm.err
[araim1@maya-usr1 gpu-constraint]$ cat slurm.out
n60
n60
n60
n61
n61
n61
[araim1@maya-usr1 gpu-constraint]$
On the other hand, specifying "--gres=gpu" requests one GPU on each node of our job. This allows the scheduler to allocate the remaining CPUs and GPUs to other users.
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=3
#SBATCH --gres=gpu

srun hostname

Download: ../code/select-node-type/gpu-gres/run.slurm
[araim1@maya-usr1 gpu-constraint]$ sbatch run.slurm
[araim1@maya-usr1 gpu-constraint]$ cat slurm.err
[araim1@maya-usr1 gpu-constraint]$ cat slurm.out
n52
n52
n52
n53
n53
n53
[araim1@maya-usr1 gpu-constraint]$
We can also request one GPU per CPU by specifying "--gres=gpu*cpu", or two GPUs per node by "--gres=gpu:2".

For more detailed instructions on running your code on a GPU, see CUDA for GPU.

Select Phi nodes

Selecting Phi-enabled nodes is exactly the same as GPU-enabled nodes, except using the appropriate feature names ("mic" or "mic_5110p") for "--constraint" or resources name ("mic") for "--gres". Start with the GPU examples above and make the appropriate substitutions.
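As a sketch, the GPU constraint script above with the Phi feature name substituted would look like the following (the feature name "mic" is taken from the sinfo output above):

```shell
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=3
#SBATCH --constraint=mic
#SBATCH --exclusive

srun hostname
```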

For more detailed instructions on running your code on a Phi, see Intel Phi.

Heterogeneous jobs: mix of node types

3/9/2014: The details of setting up a heterogeneous job are currently under investigation.

Some details about the batch system

A SLURM batch script is a special kind of shell script. As we've seen, it contains information about the job like its name, expected walltime, etc. It also contains the procedure to actually run the job. Read on for some important details about SLURM scripting, as well as a few other features that we didn't mention yet.

For more information, consult the man pages for sbatch, srun, squeue, and scancel, as well as the SLURM documentation.

Parts of a SLURM script

Here is a quick reference for the options discussed on this page.
# Indicates a comment line, which is ignored by the scheduler (except for lines beginning with #SBATCH; see below).
#SBATCH Indicates a special line that should be interpreted by the scheduler.
srun ./hello_parallel This is a special command used to execute MPI programs. The command uses directions from SLURM to assign your job to the scheduled nodes.
--job-name=hello_serial This sets the name of the job; the name that shows up in the "Name" column in squeue's output. The name has no significance to the scheduler, but helps make the display more convenient to read.
--output=slurm.out
--error=slurm.err
This tells SLURM where it should send your job's output stream and error stream, respectively. If you would like to prevent either of these streams from being written, set the file name to /dev/null.
--partition=batch Set the partition in which your job will run.
--qos=normal Set the QOS in which your job will run.
--nodes=4 Request four nodes.
--ntasks-per-node=8 Request eight tasks to be run on each node. The number of tasks may not exceed the number of processor cores on the node.
--ntasks=11 Request 11 tasks for your job.
--time=1-12:30:00 This option sets the maximum amount of time SLURM will allow your job to run before it is automatically killed. In the example shown, we have requested 1 day, 12 hours, 30 minutes, and 0 seconds. Several other formats are accepted such as "HH:MM:SS" (assuming less than a day). If your specified time is too large for the partition/QOS you've specified, the scheduler will not run your job.
--mail-type=type SLURM can email you when your job reaches certain states. Set type to one of: BEGIN to notify you when your job starts, END for when it ends, FAIL for if it fails to run, or ALL for all of the above. See the example below.
--mail-user=email@umbc.edu Specify the recipient(s) for notification emails (see example below).
--mem-per-cpu=MB Specify a memory limit in MB for each process of your job. The default is 2944 MB.
--mem=MB Specify a memory limit in MB for each node of your job. The default is that there is a per-core limit.
--exclusive Specify that you need exclusive access to nodes for your job. This is the opposite of "--share".
--share Specify that your job may share nodes with other jobs. This is the opposite of "--exclusive".
--begin=2010-01-20T01:30:00 Tell the scheduler not to attempt to run the job until the given time has passed.
--dependency=afterany:15030:15031 Tell the scheduler not to run the job until jobs with IDs 15030 and 15031 have completed.
--account=pi_name Tell the scheduler to charge this job to pi_name
--constraint=feature_name Tell the scheduler that scheduled nodes for this job must have feature "feature_name"
--gres=resource_name Tell the scheduler that scheduled nodes for this job will use resource "resource_name"
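As a sketch combining several of these options in one script (the email address, time limit, and memory limit here are placeholders to adapt to your own job):

```shell
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --qos=normal
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --time=2:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --mail-type=END
#SBATCH --mail-user=username@domain.edu

srun ./hello_parallel
```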

Job scheduling issues

Specifying a time limit

The partitions and QOS's on maya have built-in time limits; each QOS imposes a hard upper limit on a job's wall time (see the scheduling policy for the exact values). Suppose you will be using 64 nodes, but will only require at most 2 hours. It is beneficial to supply this information to the scheduler, and it may also allow your job to be backfilled. See the scheduling policy for more information, and also note the following example.

Suppose the system currently has 14 free nodes, and there are two jobs in the queue waiting to run. Suppose also that no additional nodes will become free in the next few hours. The first queued job ("job #1") requires 16 nodes, and the second job ("job #2") requires only 2 nodes. Since job #1 was queued first, job #2 would normally need to wait behind it. However, if the scheduler sees that job #2 would complete in the time that job #1 would be waiting, it can allow job #2 to skip ahead.

Here is an example where we've specified a time limit of 10 hours, 15 minutes, and 0 seconds. Notice that we've started with the batch script from Running Parallel Jobs, Example 1 and added a single "--time=" statement.

#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=10:15:00

srun ./hello_parallel

Download: ../code/hello_parallel/intel-np1-walltime.slurm

Note that in our experience, SLURM seems to round time limits up to the next minute. For example, specifying "--time=00:00:20" will result in an actual time limit of 1 minute.

Email notifications for job status changes

You can request the scheduler to email you on certain events related to your job: when it begins, when it ends, and if it fails. As an example of how to use this feature, let's ask the scheduler to email us on all of these events when running the hello_serial program. Let's start with the batch script developed earlier, and add the options "--mail-type=ALL" and "--mail-user=username@domain.edu", where "username@domain.edu" should be replaced by your actual email address. After submitting this script, we can check our email and find the following messages.
From: Simple Linux Utility for Resource Management <slurm@maya-mgt.rs.umbc.edu>
Date: Thu, Jan 14, 2010 at 10:53 AM
Subject: SLURM Job_id=2655 Name=hello_serial Began
To: username@domain.edu

From: Simple Linux Utility for Resource Management <slurm@maya-mgt.rs.umbc.edu>
Date: Thu, Jan 14, 2010 at 10:53 AM
Subject: SLURM Job_id=2655 Name=hello_serial Ended
To: username@domain.edu
Because hello_serial is such a trivial program, the start and end emails appear to have been sent simultaneously. For a more substantial program the waiting time could be significant, both for your job to start and for it to run to completion. In this case email notifications could be useful to you.
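For reference, the submission script for this example might look like the following sketch; it is the hello_serial batch script from earlier with the two mail options added (the partition choice here is an assumption).

```shell
#!/bin/bash
#SBATCH --job-name=hello_serial
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop
# Send email when the job begins, ends, or fails
#SBATCH --mail-type=ALL
#SBATCH --mail-user=username@domain.edu

./hello_serial
```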

Controlling exclusive vs. shared access to nodes

By default, you are not given exclusive access to the nodes you're assigned by the scheduler. This means that your job may run on a node with another user's jobs. You can override this default behavior using the "--exclusive" option.

Here's an example where we reserve an entire node.

#!/bin/bash
#SBATCH --job-name=hello_serial
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --exclusive

./hello_serial

Download:
../code/hello_serial/run-whole-node.slurm
If our job involves multiple nodes, specifying the "--exclusive" flag requests exclusive access to all nodes that will be in use by the job.

We may also explicitly permit sharing of our nodes with the "--share" flag.

#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4
#SBATCH --share

srun ./hello_parallel

Download: ../code/hello_parallel/intel-share.slurm
Overriding the default shared behavior should not be done arbitrarily. Before using these options in a job, make sure you've confirmed that exclusive access is necessary. If every job requested exclusive access, the overall productivity of the cluster would suffer.

The memory limit

This section is specific to the hpcf2009 nodes and needs to be updated to reflect the various node types of maya.
Jobs on maya are limited to a maximum of 23,552 MB per node out of the total 24 GB system memory. The rest is reserved for the operating system. Jobs are run inside a "job container", which protects the cluster against jobs overwhelming the nodes and taking them offline. By default, jobs are limited to 2944 MB per core, based on the number of cores you have requested. If your job goes over the memory limit, it will be killed by the batch system.

The memory limit may be specified per core or per node. To set the limit per core, simply add a line to your submission script as follows:

#SBATCH --mem-per-cpu=4500
where 4500 represents a number of MB. Similarly, to set the limit per node you can use this instead.
#SBATCH --mem=4500
In the serial case, the two options are equivalent. For parallel computing situations it may be more natural to use the per core limit, given that the scheduler has some freedom to assign processes to nodes for you.
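For instance, in a parallel job where the scheduler is free to place the tasks, the per-core form might be used as in this sketch (based on the hello_parallel example from earlier; the values are illustrative only).

```shell
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --ntasks=11
# Limit each of the 11 tasks to 2944 MB, wherever they are placed
#SBATCH --mem-per-cpu=2944

srun ./hello_parallel
```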

If your job is killed because it has exceeded its memory limit, you will receive an error similar to the following in your stderr output. Notice that the effective limit is reported in the error.

slurmd[n1]: error: Job 13902 exceeded 3065856 KB memory limit, being killed
slurmd[n1]: error: *** JOB 13902 CANCELLED AT 2010-04-22T17:21:40 ***
srun: forcing job termination

Note that the memory limit can be useful in conducting performance studies. If your code runs out of physical memory and begins to use swap space, its performance will be severely degraded. For a performance study this may be considered an invalid result, and you may want to try a smaller problem, use more nodes, etc. One way to protect against this is to reserve entire nodes (as discussed elsewhere on this page) and set the memory limit to 23 GB per node or less, which is about the maximum you can use before swapping starts to occur. The batch system will then kill your job before it comes close to swapping.
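Putting these ideas together, a batch script for such a performance study might look like the following sketch (the node counts and program name are placeholders).

```shell
#!/bin/bash
#SBATCH --job-name=perf_study
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --exclusive
# Cap memory just below the physical maximum, so the job is killed
# by the batch system rather than allowed to start swapping
#SBATCH --mem=23552

srun ./perf_study_program
```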

5/13/2010: Note that when using MVAPICH2, if your job has exclusive access to its assigned nodes (by virtue of the queue you've used - for example the parallel queue - or by the "--exclusive" flag), it will have access to the maximum available memory. This is not the case with OpenMPI. We hope to obtain a version of SLURM that supports this feature consistently. To avoid confusion in the meantime, we recommend the "--mem" and "--mem-per-cpu" options as the preferred methods of controlling the memory limit.
7/12/2011: The memory limit is being lowered from 23,954 MB (maximum) per node to 23,552 MB. This is being done to further protect nodes against crashing due to low memory. The default per-core limit is being lowered from 2994 MB to 2944 MB accordingly.

Note that a memory limit can be specified even for non-SLURM jobs. This can be useful for interactive jobs on the front end node. For example, running the following command

[araim1@maya-usr1 ~]$ ulimit -S -v 2097152
will limit the memory use of any subsequent command in the session to 2 GB.
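Since ulimit affects the entire session, one way to confine the limit to a single command is to set it inside a subshell; this is a general bash feature, sketched below.

```shell
# The parentheses create a subshell: the limit applies only to
# commands run inside it, and the rest of the session is unaffected.
( ulimit -S -v 2097152; ulimit -S -v )   # prints 2097152
ulimit -S -v                             # outer limit is unchanged
```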

Requesting an arbitrary number of tasks

So far on this page we've requested some number of nodes and some number of tasks per node. But what if our application requires a number of tasks, like 11, that cannot be split evenly among a set of nodes? That is, unless we use one process per node, which isn't a very efficient use of those nodes.

We can split our 11 processes among as few as two nodes, using the following script. Notice that we don't specify anything else, such as how many nodes to use. The scheduler will figure this out for us, most likely using the minimum number of nodes (two) to accommodate our tasks.

#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop
#SBATCH --ntasks=11

srun ./hello_parallel

Download:
../code/hello_parallel/intel-n11.slurm
Running this yields the following output in slurm.out
[araim1@maya-usr1 hello_parallel]$ cat slurm.out
Hello world from process 009 out of 011, processor name n2
Hello world from process 008 out of 011, processor name n2
Hello world from process 000 out of 011, processor name n1
Hello world from process 001 out of 011, processor name n1
Hello world from process 002 out of 011, processor name n1
Hello world from process 003 out of 011, processor name n1
Hello world from process 004 out of 011, processor name n1
Hello world from process 005 out of 011, processor name n1
Hello world from process 006 out of 011, processor name n2
Hello world from process 010 out of 011, processor name n2
Hello world from process 007 out of 011, processor name n2
[araim1@maya-usr1 hello_parallel]$
Now suppose we want to limit the number of tasks per node to 2. This can be accomplished with the following batch script.
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --ntasks=11
#SBATCH --ntasks-per-node=2

srun ./hello_parallel

Download: ../code/hello_parallel/intel-n11-npn2.slurm
Notice that we needed to move out of the develop queue to demonstrate this scenario. Now we've specified --ntasks-per-node=2 at the top of the script, in addition to --ntasks=11.
[hu6@maya-usr1 hpcfweb]$ sort slurm.out 
Hello world from process 000 out of 011, processor name n71
Hello world from process 001 out of 011, processor name n71
Hello world from process 002 out of 011, processor name n72
Hello world from process 003 out of 011, processor name n72
Hello world from process 004 out of 011, processor name n73
Hello world from process 005 out of 011, processor name n73
Hello world from process 006 out of 011, processor name n74
Hello world from process 007 out of 011, processor name n74
Hello world from process 008 out of 011, processor name n75
Hello world from process 009 out of 011, processor name n75
Hello world from process 010 out of 011, processor name n76
where we've sorted the output to make it easier to read.

It's also possible to use the "--ntasks" and "--nodes" options together, to specify the number of tasks and nodes, but leave the number of tasks per node up to the scheduler. See "man sbatch" for more information about these options.
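For example, the following sketch requests 11 tasks spread across exactly three nodes, leaving the per-node distribution to the scheduler (the partition choice is an assumption).

```shell
#!/bin/bash
#SBATCH --job-name=hello_parallel
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --ntasks=11
#SBATCH --nodes=3

srun ./hello_parallel
```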

Setting a 'begin' time

You can tell the scheduler to wait a specified amount of time before attempting to run your job. This is useful, for example, if your job requires many nodes. Being a conscientious user, you may want to wait until late at night for such a job to run. By adding the following to our batch script, we can have the scheduler wait until 1:30am on 2010-01-20, for example.
#SBATCH --begin=2010-01-20T01:30:00
You can also specify a relative time
#SBATCH --begin=now+1hour
See "man sbatch" for more information.

Dependencies

You may want a job to wait until another one starts or finishes. This can be useful if one job's input depends on the other's output. It can also be useful to ensure that you're not running too many jobs at once. For example, suppose we want our job to wait until both of the jobs with IDs 15030 and 15031 have terminated (in any state). This can be accomplished by adding the following to our batch script.
#SBATCH --dependency=afterany:15030:15031
You can also specify that both jobs should have finished in a non-error state before the current job can start.
#SBATCH --dependency=afterok:15030:15031
Notice that the above examples required us to record the job IDs of our dependencies and specify them when launching the new job. Suppose instead you have a collection of jobs, and your only requirement is that only one of them runs at a time. A convenient way to accomplish this is with the "singleton" flag.
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --dependency=singleton

# Put some useful work here
sleep 60

Download: ../code/singleton/run.slurm
This job will wait until no other job called "myjob" is running from your account.

See "man sbatch" for more information.

Requeue-ability of your jobs

By default it's assumed that your job can be restarted if a node fails, or if the cluster is about to be brought offline for maintenance. For many jobs this is a safe assumption, but sometimes it may not be.

For example, suppose your job appends to an existing data file as it runs. If it runs partially, is restarted, and then runs to completion, the output will be incorrect, and the problem may not be easy for you to recognize. An easy way to avoid this situation is to make sure output files are newly created on each run.
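One way to do this in a batch script is to truncate the output file rather than append to it, and to include the job ID in the file name. The snippet below is a sketch: the echo stands in for a real program, and the fallback value is only so it also runs outside SLURM.

```shell
#!/bin/bash
# $SLURM_JOB_ID is set by SLURM inside every batch job; the fallback
# "manual" is only so this snippet also runs outside a job.
OUTFILE="results.${SLURM_JOB_ID:-manual}.dat"
# Redirecting with ">" truncates the file, so a restarted job starts
# from an empty output file instead of extending a partial one.
echo "run output" > "$OUTFILE"     # stands in for: ./my_program > "$OUTFILE"
cat "$OUTFILE"
```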

Another way to avoid problems is to specify the following option in your submission script. This will prevent the scheduler from automatically restarting your job if any system failures occur.

#SBATCH --no-requeue

For very long-running jobs, you might also want to consider designing them to save their progress occasionally. This way if it's necessary to restart such a job, it won't need to start completely again from the beginning. See job scheduling issues for more information.

Using scratch storage

Temporary scratch storage is available when you run a job on the compute nodes. The storage is local to each node. You can find the name of your scratch directory in the environment variable "$JOB_SCRATCH_DIR" which is provided by SLURM. Here is an example of how it may be accessed by your batch script.
#!/bin/bash
#SBATCH --job-name=test_scratch
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop

echo "My scratch directory is: $JOB_SCRATCH_DIR"
echo "ABCDEFGHIJKLMNOPQRSTUVWXYZ" > $JOB_SCRATCH_DIR/testfile

echo "Here is a listing of my scratch directory:"
ls -l $JOB_SCRATCH_DIR

echo "Here are the contents of $JOB_SCRATCH_DIR/testfile:"
cat $JOB_SCRATCH_DIR/testfile

Download:
../code/test_scratch/test.slurm
Submitting this script should yield something like the following
[araim1@maya-usr1 test_scratch]$ cat slurm.out 
My scratch directory is: /scratch/367738
Here is a listing of my scratch directory
total 4
-rw-rw---- 1 araim1 pi_nagaraj 27 Jun  8 18:36 testfile
Here are the contents of /scratch/367738/testfile
ABCDEFGHIJKLMNOPQRSTUVWXYZ
[araim1@maya-usr1 test_scratch]$ 
You can of course also access $JOB_SCRATCH_DIR from C, MATLAB, or any other language or package. Remember that the files only exist for the duration of your job, so make sure to copy anything you want to keep to a separate location, before your job exits.

Check here for more information about scratch and other storage areas.

Charging computing time to a PI, for users under multiple PIs

If you are a member of multiple research groups on maya, this will apply to you. When you run a job on maya, the resources you've used (e.g. computing time) are "charged" to your PI. This simply means that there is a record of your group's use of the cluster. This information is leveraged to make sure everyone has access to their fair share of resources (through the fair-share scheduling policy), especially the PIs who have paid for nodes. Therefore, it's important to charge your jobs to the correct PI.

You have a "primary" account which your jobs are charged to by default. To see this, try checking one of your jobs as follows (suppose our job has ID 25097)

[araim1@maya-usr1 ~]$ scontrol show job 25097
JobId=25097 Name=fmMle_MPI
   UserId=araim1(28398) GroupId=pi_nagaraj(1057)
   Priority=4294798165 Account=pi_nagaraj QOS=normal
   JobState=RUNNING Reason=None Dependency=(null)
   TimeLimit=04:00:00 Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0
   SubmitTime=2010-06-30T00:14:24 EligibleTime=2010-06-30T00:14:24
   StartTime=2010-06-30T00:14:24 EndTime=2010-06-30T04:14:24
   SuspendTime=None SecsPreSuspend=0
...
[araim1@maya-usr1 ~]$
Notice the "Account=pi_nagaraj" field - in this example, this is our default account. This should also be the same as our primary Unix group
[araim1@maya-usr1 ~]$ id
uid=28398(araim1) gid=1057(pi_nagaraj) groups=100(users),700(contrib),
701(alloc_node_ssh),1057(pi_nagaraj),32296(pi_gobbert)
[araim1@maya-usr1 ~]$ 
The primary group is given above as "gid=1057(pi_nagaraj)".

Suppose we are also working for another PI "pi_gobbert". When running jobs for that PI, it's only fair that we charge the computing resources to them instead. To accomplish that, we may add the "--account" option to our batch scripts.

#SBATCH --account=pi_gobbert

Note that if you specify an invalid account name (a group that does not exist, or one that you do not belong to), the scheduler will silently revert to your default account. You can quickly check the Account field in the scontrol output to make sure the option worked.

[araim1@maya-usr1 ~]$ scontrol show job 25097
JobId=25097 Name=fmMle_MPI
   UserId=araim1(28398) GroupId=pi_nagaraj(1057)
   Priority=4294798165 Account=pi_gobbert QOS=normal
...
[araim1@maya-usr1 ~]$

Interactive jobs

Normally interactive programs should be run on the front end node (i.e. not using the scheduler). If you need to run them on the compute nodes for some reason, contact our HPCF Point of Contact as requested in the usage policies.

If you are running a job on a set of compute nodes, it is also possible to interact with those nodes, for example to collect diagnostic information about the job's memory usage.

Selecting specific nodes

It is possible to select specific compute nodes for your job, although it is usually preferable not to. Normally we would rather request only the number of processors, nodes, memory, etc., and let the scheduler find the first available set of nodes meeting our requirements. To select specific nodes anyway, consider the following batch script.
#!/bin/bash
#SBATCH --job-name=hello_world
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --nodelist=n31,n32

srun ./hello_parallel

Download: ../code/specific-nodes/run.slurm
Here we have selected two nodes, namely n31 and n32, and request 8 processes per node.

Jobs stuck in the "PD" state

Your job may become stuck in the "PD" state either if there are not enough nodes available to run your job, or the cluster's scheduler has decided to run other jobs before yours.

A job cannot be run until there are enough free processor cores/nodes to meet its requirement. To illustrate, if somebody submits a job that uses all of the cluster nodes for twelve hours, nobody else can run any jobs until that large job finishes. If you are trying to run a sixteen node job, and there are a set of jobs running which leave less than 16 nodes available, then your job must wait.

When there are a sufficient number of processes/nodes available, the scheduler must decide which job to run next. The decision is based on several factors:

  1. The number of nodes your job uses. A job that takes up the entire cluster will not run very soon. Use the options mentioned earlier to set the number of nodes your job uses.
  2. The maximum length of time that your job claims it will take to run. As mentioned earlier, provide a walltime estimate so the scheduler has an idea of how long this will be. Smaller jobs may be allowed to run ahead of larger ones. If you do not give an estimate, the scheduler will assume a default based on the queue you've submitted to.
  3. The job priority. This depends on when you submitted your job (generally first-in-first-out (FIFO) is used) and which queue you use. If you use the perform queue, your job will probably run before jobs in the serial queue.
It is also possible that someone else's job has gotten stuck, or that there is another problem on the cluster. If you suspect that may be the case, run squeue. If there are many jobs whose state ("ST" column) is "R" or "PD" then there are probably no problems on the cluster - there are just a lot of jobs taking up nodes. If a job has been in the "R" state for most of the day, or if you see jobs that are in states other than "PD" or "R" for more than a few seconds, then something is wrong. If this is the case, or if you notice any other strange behavior contact us.

What is the priority of my job?

If your job has been submitted and is in the pending state, its waiting time will depend on the currently running jobs and on the other pending jobs. We can use the sprio utility to see the priorities of all pending jobs. Suppose we have the following scenario.
[araim1@maya-usr1 ~]$ squeue
  JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)
   4056     batch  users16   araim1  PD       0:00      8 (Resources)
   4061     batch  users21   araim1  PD       0:00      8 (Priority)
   4052     batch  users12   araim1  PD       0:00      8 (Priority)
   4055     batch  users15   araim1  PD       0:00      8 (Priority)
   4057     batch  users17   araim1  PD       0:00      8 (Priority)
   4058     batch  users18   araim1  PD       0:00      8 (Priority)
   4059     batch  users19   araim1  PD       0:00      8 (Priority)
   4060     batch  users20   araim1  PD       0:00      8 (Priority)
   4053     batch  users13   araim1  PD       0:00      8 (Priority)
   4054     batch  users14   araim1  PD       0:00      8 (Priority)
   4062     batch contrib0   araim1  PD       0:00      1 (Priority)
   4063     batch contrib0   araim1  PD       0:00      1 (Priority)
   4051     batch  users11   araim1   R       2:35      8 n[77-84]
   4045     batch  users05   araim1   R       2:36      8 n[29-36]
   4046     batch  users06   araim1   R       2:36      8 n[37-44]
   4047     batch  users07   araim1   R       2:36      8 n[45-52]
   4048     batch  users08   araim1   R       2:36      8 n[53-60]
   4049     batch  users09   araim1   R       2:36      8 n[61-68]
   4050     batch  users10   araim1   R       2:36      8 n[69-76]
   4042     batch  users02   araim1   R       2:37      8 n[5-12]
   4043     batch  users03   araim1   R       2:37      8 n[13-20]
   4044     batch  users04   araim1   R       2:37      8 n[21-28]
   4041     batch  users01   araim1   R       2:37      2 n[3-4]
[araim1@maya-usr1 ~]$
Notice that all compute nodes are in use, and the next job in line to run is 4056. The remaining pending jobs are waiting because they have been assigned a lower priority. If we run sprio, we can see the priorities of the pending jobs.
[araim1@maya-usr1 ~]$ sprio
  JOBID   PRIORITY        AGE  FAIRSHARE    JOBSIZE  PARTITION
   4052        572         14        463         94          0
   4053        572         14        463         94          0
   4054        572         14        463         94          0
   4055        572         14        463         94          0
   4056        572         14        463         94          0
   4057        572         14        463         94          0
   4058        572         14        463         94          0
   4059        572         14        463         94          0
   4060        572         14        463         94          0
   4061        572         14        463         94          0
   4062        415         12        391         11          0
   4063        415         12        391         11          0
[araim1@maya-usr1 ~]$
All jobs except 4062 and 4063 effectively have the same priority, which is given in the second column; a higher number corresponds to a higher priority. Notice that job 4056 (the next job to run) is in the higher priority group, but is not necessarily the earliest job to be submitted. However, jobs 4052 through 4061 were submitted close enough in time that the age factor is about the same. The 3rd to 6th columns are the factors that went into computing the priority. They are combined according to weights (which are subject to change with system load). The weights themselves can be seen through sprio.
[araim1@maya-usr1 ~]$ sprio -w
  JOBID   PRIORITY        AGE  FAIRSHARE    JOBSIZE  PARTITION
Weights                  1000       1000       1000       2000
[araim1@maya-usr1 ~]$
If you compute a weighted sum of the given factors, they will not necessarily add up to the displayed priority. If you are interested in the details, you can check the SLURM website. But for most users, we think that observing the priorities and the factors should be sufficient.

I have a group of long serial jobs. How can I make them run together on one node?

Suppose we have eight serial jobs to run, each of which will take a very long time, perhaps 5 days. As considerate users, we should run these jobs together on a single node if memory and other system resources are not an issue. This will help maximize the availability of the rest of the cluster.

Suppose our serial jobs are launched by Bash scripts named job1.bash through job8.bash. The following script will run them on a single node.

#!/bin/bash
#SBATCH --job-name=serial_group
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=batch
#SBATCH --nodes=1
#SBATCH --exclusive

./job1.bash &
./job2.bash &
./job3.bash &
./job4.bash &
./job5.bash &
./job6.bash &
./job7.bash &
./job8.bash &

wait

Download: ../code/bash_together/run.slurm
Notice first that we have requested exclusive access to our node. We could also use "--ntasks-per-node" if we needed fewer than the eight available cores and it would be acceptable for other users' jobs to run on this node as well. We have used the "&" shell feature to launch each job in the background. After the jobs have been launched, we specify a "wait" at the end so that the script will not exit until all eight jobs have completed. Note that if we did not specify "wait", the script would exit immediately, and the scheduler would kill any jobs that had been spawned by the run script.
Spawning processes using shell mechanisms like forking and backgrounding increases the risk of jobs escaping the control of the scheduler. We believe the provided examples are safe to use, but ask that you do not experiment with writing such scripts yourself. If you would like to accomplish something that we haven't discussed on this site, please contact us.

How can I check my fair-share level?

The sshare command can be used to check your fair-share usage, which is an important factor in the priority of your jobs.
[araim1@maya-usr1 ~]$ sshare
             Account       User Raw Shares Norm Shares   Raw Usage Effectv Usage Fair-share 
-------------------- ---------- ---------- ----------- ----------- ------------- ---------- 
root                                          1.000000           0      1.000000   0.500000 
 contribution                           80    0.714286           0      0.000000   0.857143 
  pi_ithorpe                            16    0.211640           0      0.000000   0.605820 
...
  pi_strow                              11    0.145503           0      0.000000   0.572751 
 community                              20    0.178571           0      0.000000   0.589286 
  pi_nagaraj                             1    0.044643           0      0.000000   0.522321 
   pi_nagaraj            araim1          1    0.044643           0      0.000000   0.522321 
...
  pi_gobbert                             1    0.044643           0      0.000000   0.522321 
   pi_gobbert            araim1          1    0.014881           0      0.000000   0.507440 
[araim1@maya-usr1 ~]$
There are also various viewing options, such as viewing only specific accounts (PIs), and all users under those accounts. Let's look at the usage for all users under the "pi_gobbert" account.
[araim1@maya-usr1 ~]$ sshare -A pi_gobbert -a
Accounts requested:
	: pi_gobbert
             Account       User Raw Shares Norm Shares   Raw Usage Effectv Usage Fair-share 
-------------------- ---------- ---------- ----------- ----------- ------------- ---------- 
pi_gobbert                               1    0.044643           0      0.000000   0.522321 
 pi_gobbert             dtrott1          1    0.014881           0      0.000000   0.507440 
 pi_gobbert              araim1          1    0.014881           0      0.000000   0.507440 
 pi_gobbert             gobbert          1    0.014881           0      0.000000   0.507440 
[araim1@maya-usr1 ~]$ 
For more information about the fair-share calculations, see SLURM's web page for the Multifactor Priority Plugin. There are some specifics we have not discussed here, such as the rate of decay for previous usage. These are set by the system administrators, and are subject to change.