Office of Research Computing

Policies and Procedures

Responsible Use of Computing Policy

Use of ORC systems and resources requires agreement and compliance with Mason policy: #1301, Responsible Use of Computing.

ORC General SLA

ORC systems and services are provided on a 24/7 best-effort basis, with an expectation of 95% availability. Support is not guaranteed outside of normal working hours, although staff may be able to resolve issues on an ad hoc basis during off-hours.

Use Policies and Procedures

All files stored on ORC systems must be restricted to materials directly required for research being carried out on ORC systems.
Compute jobs should perform all data output to the scratch filesystem.

  1. The head nodes are not to be used to run production work. Any long-running or compute-intensive tasks should be run on a compute node using Slurm, via either sbatch or salloc (see the example batch script after this list).
  2. The AMD login nodes each have a single GPU device, to be used only for compiling and limited testing of GPU code.
  3. The head/login nodes are shared by many users and may be used only for submitting jobs and for light development, testing, or debugging. Because these nodes are shared by all account holders on the cluster, users must be mindful of how they use them. Compute- or memory-intensive jobs are not allowed on the head/login nodes and will be terminated if found running there.
  4. A resource usage limit is enforced on every login node: each user is limited to 4 cores and 8 GB of memory per head node. This limit applies across all of a user's logins and sessions on that node.
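For reference, a minimal batch script might look like the following sketch. The partition name, resource amounts, and program name are placeholders and should be adjusted to your needs; the partitions available on the cluster are listed on the wiki page referenced later in this document.

    #!/bin/bash
    #SBATCH --job-name=my_analysis      # placeholder job name
    #SBATCH --partition=normal          # placeholder; use an actual partition on the cluster
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=02:00:00             # wall-clock limit (HH:MM:SS)
    #SBATCH --output=%x-%j.out          # log file named after the job name and job ID

    # The commands below run on the compute node allocated by Slurm.
    ./my_program input.dat              # placeholder command

The script (saved, for example, as myjob.sh) is submitted with "sbatch myjob.sh"; salloc can be used instead when an interactive allocation on a compute node is needed.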
The ORC may be able to install software on the cluster on request, subject to the following conditions:
  1. The license terms and conditions must be reviewed and approved by Mason's legal counsel. This includes any open-source or "free" software as well as any paid commercial software.
  2. Approved requests for new software installation will be reviewed within 3 working days to evaluate the complexity of the request.
  3. An estimate of the time required for installation will be provided, along with regular updates on progress.
  4. We will install open-source software on the cluster if we can determine that it does not pose a threat to system security and there are no technical problems installing it.
  5. If more than 2 users want to use the same software, we will install it as a module.
  6. Please note that installation can take up to 5 business days for simple packages and up to 4 weeks for complicated installs with many dependencies.
  7. Note that as a rule we install the most recent stable version of software, which is generally not the latest version.
Note – all data stored on ORC systems are considered research data. This category includes publicly available datasets and de-identified data with a low risk of re-identification. For data that do not meet this criterion, please email our ticketing system at orchelp@gmu.edu to discuss storage options. The following directories are available to cluster users:
  1. /home/$USER – A user’s home space with a hard quota of 60GB.
  2. /projects/ – Project storage space. Faculty may request 1 TB of project space for each postdoc or doctoral student they sponsor for a cluster account. The allocation will be removed 6 months after the postdoc leaves or the doctoral student graduates. NOTE: Students using the cluster for a class are not entitled to this storage space, as their accounts are valid for a single semester only and they are not expected to have long-term storage requirements.
  3. /scratch/$USER – Temporary storage for files used or created during job execution. There is currently no quota. Users may launch jobs from this directory but must move data to another location, as files older than 90 days (about 3 months) are deleted at the beginning of each month. Users will not be notified before deletion (see the example of copying results off /scratch after this list).
  4. /groups/ – Storage purchased by a faculty member, who will sign an SLA and provide the ORG code to which the cost of storage is charged. The current cost is $50/TB/year and will be reviewed annually.
  5. /datasets/ – This directory is intended to store large, unencumbered datasets that are commonly used for research or instruction; the intention is to prevent the proliferation of redundant copies. Please contact orchelp@gmu.edu if you have a dataset that you believe could be stored in the /datasets directory.
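As an example of managing scratch data, the commands below copy a results directory from /scratch to a project space before the monthly purge; the project path is a placeholder and assumes you have been granted space under /projects/.

    # Copy finished results from scratch to longer-term project storage.
    # /projects/my_project is a placeholder; use your actual project directory.
    rsync -av /scratch/$USER/results/ /projects/my_project/results/

    # Once the transfer has been verified, the scratch copy can be removed.
    rm -r /scratch/$USER/results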
 

Compute- or memory-intensive tasks must be submitted as batch jobs via the Slurm resource manager. Please use the "interactive" partition for extended testing and debugging instead of the head/login nodes. Please see the following wiki page: https://wiki.orc.gmu.edu/mkdocs/Getting_Started_with_SLURM/#available-partitions-on-the-cluster
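For example, an interactive session on the interactive partition could be requested roughly as follows; the resource amounts and time limit are placeholders, and the actual partition limits are described on the wiki page above.

    # Request an interactive shell on a compute node in the interactive partition.
    salloc --partition=interactive --ntasks=1 --cpus-per-task=2 --mem=8G --time=01:00:00

    # Run test or debug commands inside the allocation, then exit to release the resources.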

  1. Users are expected to learn about the cluster's resources and to use them efficiently and as intended.
  2. Users should request only the resources (CPU, memory, GPU) they need for the duration of their active jobs and release them as soon as they are done.
  3. Inefficient use of resources may be grounds for job termination; the sketch after this list shows one way to check a completed job's actual usage.
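One way to check whether a finished job actually used the resources it requested is to query Slurm's accounting records. The sketch below uses sacct, which is part of Slurm, and the seff utility, assuming it is installed on the cluster.

    # Summarize requested vs. used resources for a completed job (replace <jobid>).
    sacct -j <jobid> --format=JobID,Elapsed,AllocCPUS,ReqMem,MaxRSS,State

    # If the seff utility is available, it reports CPU and memory efficiency directly.
    seff <jobid>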
Resource reservations may be requested subject to the following conditions (an example of submitting jobs into an approved reservation follows the list):
  1. For non-contributors, reservations will only be created if there are no idle resources available and a deadline is imminent.
  2. Reservations are created only for research use, not for meeting class project deadlines.
  3. The maximum length of a reservation is 10 days at a time; it can be extended upon request, subject to the availability of resources.
  4. Maximum GPU resources that can be added to a reservation: 16 GPUs (4 nodes).
  5. Reservations that remain unused for 12 hours will be revoked.
  6. Requests should provide details on:
    1. The reason for the request and any deadlines
    2. The resources being requested
    3. The users who will need access to the reservation
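Once a reservation has been created, jobs are submitted against it by name; the reservation name below is a placeholder supplied by the ORC when the request is approved.

    # List active reservations and their time windows.
    scontrol show reservation

    # Submit a batch job into an approved reservation (the name is a placeholder).
    sbatch --reservation=my_reservation myjob.sh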

Maintenance will be scheduled for the second week of January and July every year, and as needed during the calendar year. The impact of this maintenance will depend on the type of upgrades being performed. Every effort will be made to limit downtime during these maintenance periods. Users will be notified via the ListServ about upcoming scheduled maintenance.

The following conditions apply to the use of ORC resources for instruction:
  1. Faculty must inform the ORC about using the Hopper cluster or OpenStack for instruction two weeks before the start of the semester.
  2. A list of required software for the class and any shared data storage requirements must be provided by the faculty.
  3. Each student in the class must apply for cluster access, specifying the course number and the instructor's name. Students who already have cluster access should send an email to orchelp@gmu.edu to reactivate their account for the new course.
  4. The class roster must be furnished by the faculty member teaching the course; it will be used to verify account requests.
  5. Slurm accounts created are valid only for the semester. The course instructor can request an extension for any student in the course for completion of course work.
  6. Students must use the class account to submit jobs for classwork (see the example after this list).
  7. All Slurm accounts associated with the course will be deactivated after the semester ends unless the student has an extension from the class instructor.
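As an illustration, classwork jobs are charged to the class Slurm account with the --account option; the account name below is a hypothetical placeholder provided by the instructor or the ORC.

    # Submit a classwork job under the class Slurm account (the name is a placeholder).
    sbatch --account=cs678-fa25 myjob.sh

    # The same option applies to interactive allocations.
    salloc --account=cs678-fa25 --ntasks=1 --time=00:30:00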
The following apply to account creation and activation:
  1. Students, postdocs, and affiliates must be sponsored by a faculty member.
  2. Account creation can take up to 3 business days, so please be patient. An account can only be created after GMU credentials have been issued to the user, which may take additional time for affiliates.
  3. Users are added to the Slurm database within 5 working days after they complete the Blackboard tutorial module.
  4. If your account was associated with a class, you have access to the cluster only during the semester in which the class is taught. The account is deactivated once the semester has ended.
Continued access to the cluster is subject to the following:
  1. Use of the cluster is contingent on participation in an active research project connected with GMU.
  2. Graduating students who need to continue their collaboration should apply for affiliate status.
  3. If affiliate status changes (e.g., the sponsor is no longer at Mason), cluster access will be terminated.
  4. Students who have graduated and no longer have access to the cluster can send an email to orcadmin@gmu.edu requesting their files.

ORC (Office of Research Computing) will audit account status every six months (in January and July) to:

  1. Archive the home directories and disable the logins of users who no longer have active GMU Active Directory accounts because they have left the university.
  2. Archive the home directories and disable the logins of users whose accounts have been inactive for more than two years.
  3. Archive the project directories of faculty who are no longer at GMU.

MEMORI Storage Policies

  1. Data stored in volumes provisioned from MEMORI will be subject to regularly scheduled backup.
  2. Storage is provided at a base rate of $50 per terabyte per year. Payment is due at the start of the fiscal year and may be prorated.
  3. A sliding cost scale may be negotiated for purchases over 5 terabytes.
  4. MEMORI is not currently approved for the storage of CUI or other sensitive data.
Backup schedule:
  1. Backups run daily between 1 AM and 5 AM.
Backup retention:
  1. 1 month of daily backups (30 backups)
  2. 3 months of weekly backups (12 backups)
  3. 6 months of monthly backups (6 backups)
VM Hosting Policies

  1. VMs will be provided based on available resources.
  2. ORC staff will provide support for OS-level installation, patching, and user account setup.
  3. Users will have local administrative access and will be responsible for application-level software installation and patching.
  4. VM requests that include public access will be conditional on review and approval by the Mason IT Security Office.
  5. VMs may be monitored by the Mason IT Security Office.
  6. VMs may be suspended at the request of the Mason IT Security Office if serious security issues are discovered.