HPC FAQs

My Python script doesn’t print anything out until the program ends

For a Python script to print as it runs rather than only at the end, the output buffer needs to be flushed so that output is written to the screen immediately. To enable this, you can do any of the following:

  1. Add "-u" as a command line option to python.
  2. Set the environment variable:
     PYTHONUNBUFFERED=TRUE
  3. Add the "flush=True" option to your print calls (see the sketch below), e.g.:
     print(..., flush=True)
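
A minimal sketch of the flush=True approach; the loop and sleep are only for illustration:

import time

for step in range(5):
    # flush=True makes each line appear immediately instead of
    # waiting for the buffer to fill or the program to end
    print("finished step", step, flush=True)
    time.sleep(1)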

How do I run graphics applications on the cluster?

The most common reason for getting error messages such as

 _tkinter.TclError: couldn't connect to display "localhost:10.0"

 _tkinter.TclError: no display name and no $DISPLAY environment variable

when you attempt to run a graphics application on the cluster is that you did not ssh with the -X option (X11 forwarding). Log out and then log back in with the -X option:

 ssh -X <NetID>@argo.orc.gmu.edu 

If your application uses packages such as Matplotlib, TensorFlow or PyTorch, you can use the Agg (Anti-Grain Geometry) rendering backend instead of X11. To do this, include the following code in your Python script:

import matplotlib
matplotlib.use('Agg')

The same effect can be achieved by including the following in your script:

import os
import matplotlib as matpl

if os.environ.get('DISPLAY', '') == '':
    print('Currently no display found. Using the non-interactive Agg backend')
    matpl.use('Agg')

import matplotlib.pyplot as plot
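
With the Agg backend there is no interactive window, so write figures to a file with savefig() instead of calling show(). A minimal sketch (the data and filename are only illustrative):

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])
plt.savefig('myplot.png')   # the figure is written to a file rather than displayed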

My jobs keep failing with OUT_OF_MEMORY errors

The ARGO cluster now enforces hard memory limits on jobs.  The default memory limit is 2 GB per core per node, so if you request 4 cores on a single node your job will be limited to 8 GB.  If your job exceeds the amount of memory allocated, it will be killed and its state recorded as OUT_OF_MEMORY.  Because it is hard to estimate memory requirements precisely in advance, it is best to request a small increment more than your job actually requires.  You can request a set amount of memory by specifying:

#SBATCH --mem=XX[K|M|G|T]
e.g. to request 8 GigaBytes use:
#SBATCH --mem=8G

Where XX represents the amount of memory your job requires plus some small (10%) padding, and the suffix K|M|G|T denotes kilobytes, megabytes, gigabytes, and terabytes respectively.

Alternatively, it may be preferable to specify a memory-per-core limit:

#SBATCH --mem-per-cpu=XX[K|M|G|T]
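
For example, a job requesting 4 tasks with 3 GB per core would be limited to 12 GB in total; the values are only illustrative:

#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=3G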

The sacct command can display the memory used by prior jobs if you wish to review the actual memory consumption and fine-tune your memory request for future runs.  For example:

> sacct -o JobID,JobName,Start,MaxRSS,State -S mm/dd

where the start date is given in month/day (mm/dd) form.
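
For instance, to review jobs started since March 1 (the date is only an illustration):

> sacct -o JobID,JobName,Start,MaxRSS,State -S 03/01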

Is Matlab installed on the cluster?

MATLAB is installed; use the "module avail" command to find the available versions.

See ORC Wiki for more details.

Is R installed on the Cluster?

Yes, R is installed on the ARGO cluster; use the "module avail" command to find the available versions of R.
More information can be found here: How to run R on ARGO

Is Python installed on cluster?

Python is installed; use the "module avail python" command to find the various available versions.

The Anaconda distribution of Python is also available. Check with the command "module avail anaconda".

Do you have sample scripts?

The script file contains all the options needed for a specific job. Lines beginning with "##" are comments, while lines beginning with "#SBATCH" contain submit options. The first line is always "#!", which specifies the shell that will interpret the script.

#!/bin/bash
#
## Specify Job name if you want
## the short form -J
#SBATCH --job-name=My_Test_Job
##
## Specify a different working directory
## Default working directory is the directory from which you submit your job
## Short form -D
#SBATCH --workdir=/path/to/directory/name
##
## Specify output file name
## If you want output and error to be written to different files
## You will need to provide output and error file names
## short form -o
#SBATCH --output=slurm-output-%N-%j.out
## %N is the name of the node on which it ran
## %j is the job-id
## NOTE this format has to be changed if Array job
## filename-%A-%a.out - where %A is the job ID and %a is the array index
##
## Specify error output file name
## short form -e
#SBATCH --error=slurm-error-%N-%j.out
##
## Specify input file
## short form -i
## Send email
#SBATCH --mail-user=
## Email notification for the following types
#SBATCH --mail-type=BEGIN,FAIL,TIME_LIMIT_80
## Some valid types are: NONE,BEGIN,END,FAIL,REQUEUE
##
## Select partition to run this job
## Default partition is all-HiPri - run time limit is 12 hours
## short form -p
#SBATCH --partition=all-LoPri
##
## Quality of Service; Priority
## Contributor's queue needs QoS to be specified for jobs to run
## Everyone is part of the normal QoS, so it does not have to be specified
#SBATCH --qos=normal
##
## Ask for Intel machine using Feature parameter
## short form -C
## Intel Nodes - Proc16, Proc20, Proc24
## AMD nodes - Proc64
#SBATCH --constraint="Proc24"
##
## Ask for 1 node and the number of slots in node
## This can be 16|20|24
## short form -N
#SBATCH --nodes=1
##
## Now ask for number of slots
#SBATCH --tasks-per-node=16
##
## MPI jobs
## If you need to start a 64 slot job, you can ask for 4 nodes with 16 slots each
#SBATCH --nodes 4
#SBATCH --tasks-per-node=16
##
## How much memory job needs specified in MB
## Default Memory is 2048MB
## Memory is specified per CPU
#SBATCH --mem-per-cpu=4096
##
## Load the needed modules
module load
.....
## Start the job
java -Xmx<memneeded> -jar test.jar

For MPI, MATLAB, and R jobs, please go to the ORC Wiki.

Can we request multiple cores/slots?

Use the option "--ntasks <number>" either on the command line with sbatch or in the script file to request <number> cores.

NOTE: If your job is inherently multi-threaded (e.g. Java jobs), then you need to use this option to specify the number of cores you want for the job, as in the sketch below.
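
A minimal sketch of such a script; the job name, memory value, and jar file are only illustrative:

#!/bin/bash
#SBATCH --job-name=threaded_test
#SBATCH --ntasks=4             ## request 4 cores for the job
#SBATCH --mem-per-cpu=2G

java -Xmx6g -jar mytool.jar    ## illustrative multi-threaded Java program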

Where can I find more information about Slurm scheduler?

You can read the man pages of the various commands.  The Slurm documentation is available here: http://slurm.schedmd.com.  Please note that the documentation on that website is for the latest release.  We are running Slurm version 15.08.6, so the man pages on the cluster will give you the correct description.

How do I delete jobs?

Use the following command to delete submitted jobs:
$ scancel <job id number>

How do I find out information about completed jobs?

After the job has completed, run:

$ sacct -j <job id number>

The <job id number> is what you get back when you successfully submit your job via sbatch command.

How do I check status of jobs?

squeue -u <userID>  (lists your queued and running jobs)

squeue  (lists the jobs of all users)

Job Status:

  • "PD" – Job is pending (queued and waiting to be scheduled).
  • "S"  – Job is suspended.
  • "R"  – Job is running.
  • "CA" – Job was cancelled.
  • "F"  – Job failed.

See man pages (man squeue) for more details.

Where can I find examples of job scripts?

An example script is in the /cm/shared/apps/slurm/current/examples directory.  Experiment with "sleeper.sh" to get started.

What are options one can use with sbatch?

Some of the common options that one might use with sbatch are:

  • -J <name-of-job> – Use <name-of-job> instead of the default job name, which is the script file name.
  • -i /path/to/dir/inputfilename – Use "inputfilename" as the input file for this job.
  • -o /path/to/dir/outputfilename – Use "outputfilename" as the output file for this job.
  • -e /path/to/dir/errorfilename – Use "errorfilename" as the file for errors encountered in this job.
  • -n <number> – The number of tasks to run; this also specifies the number of slots needed.
  • --mem=<MB> – The total memory needed for the job; use if more than the default is needed.
  • --mail-user=GMU-NetID@gmu.edu – Send mail to your GMU email account.
  • --mail-type=BEGIN,END – Send email at the beginning and at the end of the job.

Read the man pages (man sbatch) for more sbatch options.

How do I submit jobs?

The command for submitting a batch job is:

$ sbatch <script file name> (The default partition is all-HiPri)

If the command is successful you will see the following:

Submitted batch job <job id number>

You can also supply the options (see the previous question) on the command line instead of in the script file:

$ sbatch [options] <script file name>
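
For example, a sketch of a submission combining several options on the command line; the job name, output file, and memory value are only illustrative:

$ sbatch -J my_analysis -o results-%j.out --mem=4G --mail-type=END myscript.sh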

What are the partition (queue) names?

 

Partition Name   Nodes in Partition                  Restricted Access
all-HiPri*       [001-039,041-049,051-054,057-070]   no
all-LoPri        [001-039,041-049,051-054,057-070]   no
bigmem-LoPri     [034,035,069,070]                   no
bigmem-HiPri     [034,035,069,070]                   no
gpuq             [040,050]                           no
COS_q            [028-035]                           yes
CS_q             [007-024,056]                       yes
CDS_q            [046-049,051]                       yes

*all-HiPri is the default partition (queue).

all-HiPri and bigmem-HiPri both have a run time limit of 12 hours; jobs exceeding the time limit will be killed.  all-LoPri and bigmem-LoPri both have a 5-day run time limit.  The partitions bigmem-LoPri and bigmem-HiPri are intended for jobs that require a lot of memory. Access to the queues marked as "restricted access" is limited to members of research groups and departments that have funded nodes in the cluster.

Can I log into individual nodes to submit jobs?

Users should not log into individual nodes to run jobs. Jobs must be submitted to the scheduler on the head node. Compute-intensive jobs running on nodes that are not under scheduler control (i.e. started directly on the nodes) will be killed without notice.

Users may ssh into nodes on which jobs they submitted through the scheduler are currently running, but only to check on those jobs. Please note that if users use this ssh access to start new jobs on the nodes without going through the scheduler, their ability to ssh into nodes to check on jobs will be removed.

Can I run jobs on the head node?

You can use the head node to develop, compile and test a sample of your job before submitting it to the queue. Users cannot run computationally intensive jobs on the head nodes; if such jobs are found running on a head node, they will be killed without notice.

All jobs have to be submitted on the head node via the Slurm scheduler, which will schedule them to run on the compute nodes.

I create my script files on Windows, is there anything I need to do before I use these files?

If you create script files on your Windows machine and then copy them over to the ARGO cluster, make sure that you run the following command on those files before you use them:

$ dos2unix "/path/to/filename"

Windows-based editors end each line with extra carriage-return characters. The "dos2unix" command strips these characters and converts the file to UNIX format.

I use Windows, how do I log into the cluster?

Windows 10 provides OpenSSH by default, so a Windows user can use their preferred terminal interface (CMD, PowerShell, or Windows Terminal) and the same ssh and scp commands as a Mac or Linux user.

Detailed information can be found here: Logging Into Argo

I am new to linux, do you have any tutorials?

We don’t have a tutorial as yet, but here is a good one to get started: Linux Tutorial

Do you have a quota for each user?

There is a 50 GB quota on the home directories.  Faculty may request additional storage up to 1 TB in a projects directory which can be shared with other cluster users.  Storage needs beyond 1 TB can be met from our MEMORI storage cluster for which there is an annual fee per TB.  More information on MEMORI storage can be found here.

There are limits to the number of cores and GPU devices that can be used concurrently on the cluster.  These are occasionally adjusted depending on resource pressures but are typically of the order of one sixth of the total cores and one quarter of all GPU devices.

What are modules?

The Argo cluster uses a system called Environment Modules to manage applications. Modules make sure that your environment variables are set up for the software you want to use. When you log in, the two modules "SLURM" and "GCC" are loaded by default. SLURM is a workload manager for Linux that manages job submission, deletion, and monitoring. The main module commands are:

  • "module avail" shows all the available modules.
  • "module list" shows the modules that you have loaded at the moment.
  • "module load name" or "module add name" adds the module "name" to your environment.
  • "module unload name" or "module rm name" removes the module "name" from your environment.
  • "module show name" or "module display name" gives a description of the module and also shows what it will do to your environment.

Typing "module" gives you a list of the available commands and arguments.
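
A typical workflow might look like the following; the module name and version string are only illustrative, so check "module avail" for what is actually installed:

$ module avail python          ## list the Python modules installed on the cluster
$ module load python/3.6.7     ## load one (the version shown is illustrative)
$ module list                  ## confirm that it is now loaded
$ module unload python/3.6.7   ## remove it when you are finished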

How do I access the cluster?

You SSH into the head node using the hostname "argo.orc.gmu.edu". There are two head nodes on the ARGO cluster, argo-1 and argo-2, and users are logged into one of them in a round-robin manner to balance load. Use your GMU NetID and password to log into the cluster.

How does one get an account on the ARGO cluster?

Faculty, post-doctoral fellows, and students may all get accounts on the ARGO cluster. Sponsorship by a GMU faculty member is required. Please see the New User Information for detailed instructions.

How many nodes are in the cluster?

The cluster currently comprises 72 nodes (70 compute nodes, 2 head nodes) with a total of 1356 compute cores and close to 7 TB of RAM.
Here is a summary of the compute nodes:

Nodes                          CPU Cores  RAM     Hardware Arch             Total Nodes
[1 to 33][36 to 39][58 to 68]  16         64 GB   Intel SSE (Sandy Bridge)  48
34,35                          64         512 GB  AMD Opteron               2
41 to 45                       20         96 GB   Intel Haswell             5
[46 to 49][51 to 54][57]       24         128 GB  Intel Broadwell           9
69,70                          24         512 GB  Intel Broadwell           2

GPU Nodes:

Nodes  CPU Cores  GPU Info  RAM     Hardware Arch    Total Nodes
40     24         4x K80    128 GB  Intel Haswell    1
50     24         2x K80    128 GB  Intel Broadwell  1
55     24         2x K80    512 GB  Intel Broadwell  1
56     24         2x K80    256 GB  Intel Broadwell  1

What is the Argo Cluster?

The Argo Cluster is a high-performance computing cluster operated by the Office of Research Computing. It is located in the Aquia Data Center on the Fairfax Campus.