We've got nodes

The ability to understand the natural world through cause-and-effect relationships is key to learning science. One way to investigate a system is to build a theoretical model and run simulations that predict properties which can then be tested in the laboratory. Analyzing the data from these simulations requires computing power. That's where the Blugold Center for High Performance Computing (also known as the Blugold Supercomputing Cluster) comes in. Its powerful machines analyze mathematical models and run computer simulations to build an understanding of systems ranging from subatomic particles to celestial bodies. Datasets large and small provide valuable insights into studies in biology, biochemistry, biomedical sciences, bioinformatics, chemistry, computer science, cybersecurity, economics, engineering, geography, materials science, mathematics, and physics. Researchers are also using artificial intelligence (deep learning, machine learning, and related techniques) to uncover hidden patterns that can have a significant impact on medicine and healthcare.

A resource for all

What this means for you: your learning and discovery in science-related disciplines really take off when you work with the two Blugold Supercomputing Clusters, BGSC and BOSE. BGSC is the smaller cluster, with 24 nodes and 364 cores; BOSE is the newest, with 61 nodes and 3,904 cores. By delivering large amounts of information quickly, high-performance computing on the clusters has become a central tool for learning, research, and discovery. Both clusters are available to all faculty and students, are integrated into many courses, and support campus-wide faculty-student research collaborations.

All UWEC students, regardless of their major, are able to use the Blugold Supercomputing Clusters. 

Learn more


BGSC Admins

Meet the admins and learn more about their responsibilities.



Working from off campus? Not sure what commands you need? The guides below cover the most common tasks. Don't see your question? Check out our HPC Wiki (requires UWEC network) for additional guides, and you can always contact us with any questions or challenges you run into.

Accessing the cluster

If you are working off-campus, you will need to use the UWEC VPN. Please visit our help area for more information.

Your account for the clusters is the same as your university login information that you use for other UWEC services, though the username must be lowercase.

Cluster                          Hostname   Port    SSH Command    VPN Required
BGSC                                        22      ssh            Off Campus Only
BOSE (Special Access Required)              50022   ssh -p 50022   Always
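As a sketch, connecting from a terminal looks like this (replace the placeholders with your lowercase UWEC username and the cluster hostname from the table above):

```shell
# BGSC uses the default SSH port (22); VPN needed only when off campus
ssh <username>@<bgsc-hostname>

# BOSE listens on port 50022 and always requires the VPN (plus approved access)
ssh -p 50022 <username>@<bose-hostname>
```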
Submitting a job through Slurm

To schedule jobs on our clusters, we use the software Slurm. It looks at what resources you request and runs your script on one (or more) of the compute nodes once they become available.

Step 1: Identify Module / Software

The first step to submitting a job on the cluster is to determine whether the software you need is installed on the cluster and, if so, what its module is called. Use the command below to see all software currently installed on the cluster you are using.

module avail

Once you've identified the module, you can use the following command to make it available for your current login session:

module load <software>/<version>

After running the above command, you'll have access to the software's commands. Note that you must also load the module directly in the submission script (Step 2) each time you want to use it.
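For example, a typical module workflow on the login node might look like this (the `python` module name and version are illustrative; use whatever `module avail` actually shows on your cluster):

```shell
module avail                 # List all installed software
module avail python          # Narrow the listing to modules matching "python"
module load python/3.9       # Load a specific version for this session (illustrative version)
module list                  # Confirm which modules are currently loaded
module unload python/3.9     # Unload it when you no longer need it
```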

Step 2: Submission Script

The next step to submitting a job on the cluster is to create a submission file that contains a list of all the resources you need and what commands you'd like Slurm to run on the nodes. 

Here is an example of a Slurm submission script that you can start with:


#!/bin/bash
#SBATCH --partition=week                          # Partition to submit to
#SBATCH --time=0-04:00:00                         # Time limit for this job (DD-HH:MM:SS)
#SBATCH --nodes=1                                 # Nodes to be used for this job during runtime
#SBATCH --ntasks-per-node=1                       # Number of CPU cores per node
#SBATCH --mem=1G                                  # Total memory required for this job (1G = 1 gigabyte)
#SBATCH --job-name="Slurm Sample"                 # Name of this job in the work queue
#SBATCH --output=ssample.out                      # Output file name
#SBATCH --error=ssample.err                       # Error file name
#SBATCH --mail-user=<your desired email address>  # Email to send notifications to
#SBATCH --mail-type=ALL                           # Email notification type (BEGIN, END, FAIL, ALL)
#SBATCH --gpus=#                                  # How many GPU cards do you need? (Only needed for GPU-based jobs on BOSE)

# Your commands below
module load <software>/<version>   # Load the required software
./command option1 option2          # Run the command for the software


The flags listed below are the ones you will most often adjust to the needs of your job:

#SBATCH --time=0-04:00:00       (time format is DD-HH:MM:SS; this example requests 4 hours)

#SBATCH --ntasks-per-node=1

#SBATCH --mem=150               (a value without a unit suffix is interpreted as megabytes)

#SBATCH --job-name="Slurm Sample"

#SBATCH --output=ssample.out

#SBATCH --error=ssample.err

#SBATCH --gpus=#
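If you prefer to create the submission script straight from the shell, a minimal sketch looks like this (the filename `sample_job.sh` and the echo command are illustrative; swap in your own module loads and commands):

```shell
# Write a minimal submission script; the quoted 'EOF' keeps $(hostname) unexpanded in the file
cat > sample_job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=week
#SBATCH --time=0-04:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=1G
#SBATCH --job-name=slurm_sample
#SBATCH --output=ssample.out
#SBATCH --error=ssample.err

echo "Hello from $(hostname)"
EOF

# Sanity-check the resource requests before submitting
grep '^#SBATCH' sample_job.sh
```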


Step 3: Submit the job

Once your submission script is ready, use the following command to submit it:

sbatch <your script name>
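For example, submitting a script named `my_job.sh` (illustrative name) prompts Slurm to reply with the new job's numeric ID, which you can use to track it later:

```shell
sbatch my_job.sh    # Slurm prints a line like: Submitted batch job <jobid>
```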


Step 4: View your jobs

After your job is submitted, you can use the following commands to see your own jobs as well as all jobs in the queue:

myjobs    <-- View your submitted jobs

squeue    <-- View all jobs in the queue
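A couple of common variations using standard Slurm options (`squeue -u` filters the queue by user, and `scancel` removes a job you no longer want):

```shell
squeue -u $USER     # Show only your jobs in the queue
scancel <job id>    # Cancel one of your jobs by its numeric ID
```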

Requesting new / updated software

Do you need software installed on either of our clusters (BGSC and/or BOSE) for your research or a class project? We have an eForm available for you to fill out, which you can access by clicking the link below and selecting the "Software Request" option.

Start eForm Now (UWEC Login Required)

Please note that software installations are subject to approval, and we are unable to install licensed software unless your group has access to the license.

Requesting access to BOSE

To use the BOSE cluster, you'll need approval before you can connect. To request access, please fill out the eForm by clicking the link below and selecting the "Request Access to BOSE" option.

Start eForm Now (UWEC Login Required)

Reserving a node for your classroom

Are you planning to use one of the supercomputing clusters in your class for activities such as homework or exams? To ensure cluster resources are available, you can reserve one or more nodes ahead of time. To request a reservation, use the eForm link below and choose the "Reserve a Node" option.

Start eForm Now (UWEC Login Required)

Please note that reservation requests are subject to approval, and we cannot guarantee that we will be able to fully honor your request. We are also not taking reservations for research groups at this time.
