Computing Resources

High-Performance Computing
Argo Cluster

The Argo Cluster is a ~2,000-core Linux cluster providing a scheduled job environment using SLURM. Argo supports three workload domains: General, Large Memory, and GPU. The network interconnect is an FDR InfiniBand fabric (56 Gbps). Primary file storage is provided by an 875 TB BeeGFS parallel filesystem, and additional storage can be provisioned on request from the MEMORI storage cluster.

Domain | # Nodes | CPUs | Cores per Node | Memory | GPUs | Other Information
General | 78 | E5-2670 @ 2.60GHz; AMD Opteron 6276 @ 2.3GHz; E5-2660 v3 @ 2.60GHz; E5-2670 v3 @ 2.30GHz; E5-2660 v4 @ 2.00GHz; E5-2680 v4 @ 2.40GHz; Gold 5120 @ 2.20GHz | 16/20/24/28/64 | 64 GB - 512 GB | - | -
Large Memory | 5 | AMD Opteron 6276 @ 2.3GHz; E5-2650 v4 @ 2.20GHz; Gold 5120 @ 2.20GHz | 24/28/64 | 512 GB (4x); 1.5 TB (1x) | - | -
GPU | 10 | E5-2670 v3 @ 2.30GHz; E5-2650 v4 @ 2.20GHz; Gold 5120 @ 2.20GHz | 24/28 | 128 GB (2x); 256 GB (2x); 768 GB (5x); 1.5 TB (1x) | 8x K80 (1x); 4x K80 (2x); 4x V100-PCIE (2x); 4x V100-SXM2 (2x) | VRAM: K80 - 11 GB; V100 - 32 GB

(Nx) indicates the number of nodes with that configuration.
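As a minimal sketch of how jobs are submitted to Argo's SLURM scheduler, a batch script might look like the following. The partition, module, and script names here are hypothetical placeholders; consult the cluster documentation for actual values.

    #!/bin/bash
    #SBATCH --job-name=example       # name shown in the queue
    #SBATCH --partition=normal       # hypothetical partition name
    #SBATCH --ntasks=1               # a single task
    #SBATCH --mem=4G                 # memory requested for the job
    #SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

    module load python               # hypothetical module name
    python my_analysis.py            # hypothetical user script

The script would be submitted with sbatch and its progress monitored with squeue, both standard SLURM commands.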
Software 

A list of software available on the cluster can be found on the Software page. We are often able to install software on the cluster on request. If you have purchased software that you would like to use on the cluster, please contact us, as we will need to review the license terms. Similarly, if you have an unmet software need, please contact us and we can investigate how to accommodate your requirements. – Request Help
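Installed software on clusters of this kind is commonly accessed through an environment modules system; assuming Argo follows that convention, a typical session might look like the following (the module name shown is hypothetical):

    module avail                     # list software available on the cluster
    module load gcc                  # add a package to the current environment
    module list                      # show currently loaded modules
    module unload gcc                # remove the package when finished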

Hopper Cluster

Availability scheduled for Spring 2021

The Hopper Cluster is designed to be a “private cloud” for research computing at Mason using OpenStack. The initial phase of the Hopper deployment comprises 84 compute nodes, each with two Intel Xeon Gold 6240R processors (2.4 GHz, 24 cores/48 threads) and 192 GB of memory, providing 4,032 cores in total at 4 GB of memory per core.

Hopper has two high-speed networking fabrics: a redundant Ethernet network comprising 100 Gbps spine switches and 25 Gbps leaf switches, and an HDR InfiniBand network providing 100 Gbps to each node.

Planning for future Hopper deployment phases is already underway; these phases will include a high-performance, flash-backed parallel file system, high-memory (FAT) nodes, GPU-enabled nodes, and GPU virtual host nodes for visualization.

How to Get Started

For information on eligibility and how to apply for access to ORC HPC resources, please see our New User Information page.

General Purpose Computing
Virtual Computing

The ORC has resources to provision virtual servers for research projects that either require a web presence or are not suited to running in a scheduled environment. 

The ORC can also facilitate access to virtual desktops, as well as external VM hosting through either the NSF XSEDE project or a number of commercial cloud computing providers.

Virtual machine requests are handled on a case-by-case basis; please contact us so we can investigate how to meet your needs. – Request Help.

Containers

The ORC currently provides limited support for running containerized workloads. We can run some containers using Singularity on the Argo Cluster (see the Singularity page for more information).
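As a brief sketch of a typical Singularity workflow, assuming a public Docker Hub image (the image and command below are illustrative, not an endorsed configuration):

    singularity pull docker://python:3.8                 # build a local .sif image from Docker Hub
    singularity exec python_3.8.sif python --version     # run a command inside the container

On a scheduled cluster such as Argo, commands like these would normally be placed inside a batch script rather than run directly on a login node.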

The forthcoming Hopper Cluster will provide greatly expanded capabilities for running containerized workloads.