Get Started with Research Computing

STEP 1: REQUEST YOUR URC RESEARCH ACCOUNT

Faculty, staff, and students who are actively working on faculty-sponsored research projects can request an account.

All faculty and staff in the UNC Charlotte research community are welcome to use URC's resources. Students and sponsored accounts must specify a faculty member as an account sponsor; that faculty member will oversee the research being performed. Part of the account-creation process is confirming sponsorship with the faculty sponsor via email, so please make sure they are aware that you are requesting a URC account. In addition, classwork is not allowed on the URC research clusters, as they are reserved for research only.

When your account has been created, you will receive an email notification.

STEP 2: ENSURE YOUR DUO IS SET UP

While you wait for your account to be created, this is a good time to set up Duo authentication if you have not already done so. URC uses Duo to provide an additional layer of security.

STEP 3: PICK AN SSH CLIENT

The URC environment uses a command-line interface (CLI), which means all interaction with our systems is text-based. Logging into URC requires a Secure Shell (SSH) client. There are several SSH clients available, and which one you choose depends on your preferences. Several clients are listed below, organized by operating system; you will only need one. (Links will open in a new tab.)

Please note that URC does not prefer any one client over another as they all should have the necessary capability to connect to Research Computing resources.

Linux
– ssh Client (Included in nearly all distributions)

MacOS
– ssh Client (Included in MacOS)
– iTerm2 (https://iterm2.com/)

Windows
– PuTTY (https://www.chiark.greenend.org.uk/~sgtatham/putty/)
– MobaXTerm (https://mobaxterm.mobatek.net/)
– Windows Subsystem for Linux [WSL] (https://docs.microsoft.com/en-us/windows/wsl/install-win10)

NOTE: Due to differences in cryptographic capabilities, the built-in SSH client in Windows PowerShell is not compatible with URC systems.
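
To check whether a command-line SSH client is already installed (Linux, macOS, or WSL), you can print its version from a terminal; if a version string appears (typically OpenSSH), you are ready to connect:

$ ssh -V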

STEP 4: LOG INTO URC

Once your account is created, you will receive a welcome email providing detailed information on the URC environment. Please be sure to review the email and contact URC support if you have any questions or concerns.

First, you will need to ensure you can connect to hpc.uncc.edu. If you are on campus, connect to the eduroam wireless network. Otherwise, connect to the campus VPN (see Connecting to the campus VPN).

Next, you will log in with your SSH client of choice. Different SSH clients have different methods for connecting to an SSH server, but regardless of the client you use, you will need these three pieces of information:

Login: (Your NinerNET ID — without “@uncc.edu”)
Password: (Your NinerNET ID Password)
SSH Server: hpc.uncc.edu
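
For example, with the command-line ssh client (Linux, macOS, or WSL), you would connect like this, assuming a hypothetical NinerNET ID of jdoe:

$ ssh jdoe@hpc.uncc.edu

After entering your password, you should receive a Duo prompt to approve the login.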

If you need assistance logging in, please contact URC Support.

NOTE: When using MobaXterm to connect, do not use the “Start local terminal” option. Instead, create and save a new session for HPC and connect via the left menu. The “Start local terminal” option will prevent the Duo prompt from displaying and will result in repeated password prompts.

STEP 5: CREATE & RUN YOUR FIRST JOB

To run a job on Orion, the primary SLURM partition, you must create what is known as a submit file. The submit file tells the scheduler what resources your job requires (number of processors, amount of memory, walltime, etc.). The following submit file will be used to run your first job.

#!/bin/bash
#SBATCH --partition=Orion
#SBATCH --job-name=basic_slurm_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=1:00:00
echo "======================================================"
echo "Start Time : $(date)"
echo "Submit Dir : $SLURM_SUBMIT_DIR"
echo "Job ID/Name : $SLURM_JOBID / $SLURM_JOB_NAME"
echo "Num Tasks : $SLURM_NTASKS total [$SLURM_NNODES nodes @ $SLURM_CPUS_ON_NODE CPUs/node]"
echo "======================================================"
echo "" cd $SLURM_SUBMIT_DIR
echo "Hello World! I ran on compute node $(/bin/hostname -s)"
echo ""
echo "======================================================"
echo "End Time : $(date)"
echo "======================================================"

Let’s take a quick look at each section of the submit file to understand the structure.

The first section contains the scheduler directives. The directives below run the job in the BASH shell, in the Orion SLURM partition, set the name of the job to basic_slurm_job, request a single core on a single node, and ask for these resources for up to 1 hour:

#!/bin/bash
#SBATCH --partition=Orion
#SBATCH --job-name=basic_slurm_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=1:00:00
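
These five directives are all this first job needs. Other standard SLURM directives are often added to submit files; the following are illustrations only, with placeholder values:

#SBATCH --cpus-per-task=1               # CPU cores per task
#SBATCH --mem=4G                        # memory per node
#SBATCH --output=basic_slurm_job-%j.out # custom output file name (%j = job ID)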

The second section prints out information such as the start time, the directory the job was submitted from, the job’s ID and name, and the total number of tasks along with node and CPU counts:

echo "======================================================"
echo "Start Time : $(date)"
echo "Submit Dir : $SLURM_SUBMIT_DIR"
echo "Job ID/Name : $SLURM_JOBID / $SLURM_JOB_NAME"
echo "Num Tasks : $SLURM_NTASKS total [$SLURM_NNODES nodes @ $SLURM_CPUS_ON_NODE CPUs/node]"
echo "======================================================"
echo ""

The third section is the portion of the submit file where your actual program is specified. In this example, the job simply changes into the directory containing the SLURM submit script and prints a message, along with the compute node name, to the output file:

cd $SLURM_SUBMIT_DIR
echo "Hello World! I ran on compute node $(/bin/hostname -s)"

The final section appends the job’s completion time to the output file:

echo ""
echo "======================================================"
echo "End Time : $(date)"
echo "======================================================"

Now that you understand the sections of the submit file, let’s submit it to the scheduler so you can see it run. The above submit file already exists on the system, so all you need to do is copy it:

$ mkdir ~/slurm_submit
$ cp /apps/usr/slurm_scripts/basic-submit.slurm ~/slurm_submit/
$ cd ~/slurm_submit/

(In Linux, the tilde ~ is a shortcut to your home directory.)

Now, submit the basic-submit.slurm file to the scheduler:

$ sbatch basic-submit.slurm
Submitted batch job 242130

When you submit your job to the scheduler, the syntax of your submit file is checked. If there are no issues and your submit file is valid, the scheduler will assign the job an ID (in this case 242130) and place the job in pending status (PD) while it works to reserve the resources your job requested. The more resources your job requests, the longer it may take to move from pending (PD) to running (R). The resources requested for this job are light, so it should begin running relatively quickly. You can check the status of your job with the squeue command:

$ squeue -j 242130
  JOBID PARTITION     NAME     USER ST   TIME  NODES NODELIST(REASON)
 242130     Orion basic_sl  joeuser  R   0:00      1 str-c28
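
Two related SLURM commands are also handy: squeue -u $USER lists all of your jobs, and scancel cancels a job by ID:

$ squeue -u $USER
$ scancel 242130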

Once your job is complete, you will have an output file that contains the output from your job’s execution. In this example, given the job ID of 242130, the output file name will be slurm-242130.out. Looking in this file, you should see the following:

$ cat slurm-242130.out
======================================================
Start Time : Wed Dec 16 13:10:38 EST 2020
Submit Dir : /users/joeuser/slurm_submit
Job ID/Name : 242130 / basic_slurm_job
Num Tasks : 1 total [1 nodes @ 1 CPUs/node]
======================================================
Hello World! I ran on compute node str-c28
======================================================
End Time : Wed Dec 16 13:10:38 EST 2020
======================================================
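
Once a job has finished it no longer appears in squeue, but if job accounting is enabled on the cluster you can still review it with SLURM’s sacct command:

$ sacct -j 242130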

STEP 6: KEEP LEARNING

Now that you have run your first job, you are ready to learn more about SLURM and the Orion SLURM partition by looking at the ORION & GPU (SLURM) User Notes.