The ARGO cluster now enforces hard memory limits on jobs. The default limit is 2 GB per allocated core, so if you request 4 cores on a single node your job will be limited to 8 GB. If your job exceeds the memory allocated to it, it will be killed and its state recorded as OUT_OF_MEMORY. Because memory requirements are hard to estimate precisely in advance, it is best to request slightly more than your job actually needs. You can request a set amount of memory by specifying:
#SBATCH --mem=XX[K|M|G|T]
For example, to request 8 gigabytes use: #SBATCH --mem=8G
Here XX is the amount of memory your job requires plus a small (about 10%) padding, and the suffix K, M, G, or T denotes kilobytes, megabytes, gigabytes, or terabytes respectively.
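As an illustration, a minimal batch script using a per-job memory request might look like the following sketch; the partition name, time limit, and program name are placeholders rather than ARGO-specific values:

#!/bin/bash
#SBATCH --job-name=mem-demo     # placeholder job name
#SBATCH --partition=normal      # assumed partition name; substitute your own
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4       # 4 cores on a single node
#SBATCH --mem=8G                # total memory for the job on that node
#SBATCH --time=01:00:00         # placeholder wall-time limit

./my_program                    # placeholder executable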
Alternatively, it may be preferable to specify a per-core memory limit:
#SBATCH --mem-per-cpu=XX[K|M|G|T]
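With a per-core limit, the total memory available to the job is the per-core value multiplied by the number of cores requested. For example (values are illustrative), the following pair of directives allocates 2 GB for each of 4 cores, i.e. 8 GB in total:

#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2G    # 4 cores x 2 GB = 8 GB total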
The sacct command can display the memory used by prior jobs if you wish to review the actual memory consumption and fine-tune your memory request for future runs. For example:
> sacct -o JobID,JobName,Start,MaxRSS,State -S mm/dd
Here the start date (given with -S) is specified in month/day form.
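As a purely illustrative example (the date and memory figure below are hypothetical), to list jobs started since January 15:

> sacct -o JobID,JobName,Start,MaxRSS,State -S 01/15

If MaxRSS shows a peak of roughly 6.2 GB for a previous run, a request of #SBATCH --mem=7G for the next run gives about 10% headroom.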