
Terminus QuickStart Guide

About this QuickStart Guide

Note: During the fall of 2012, Terminus is being reconfigured as Storm.  Terminus account requests are being redirected to Storm.  See the Storm QuickStart Guide for more information.

This QuickStart guide gives an overview of the Terminus cluster at the University of Calgary.

It was intended for new account holders getting started on Terminus, covering topics such as the Terminus hardware and performance characteristics, available software, usage policies, and how to log in and run jobs. It is now kept for historical purposes only, and the Terminus pages are no longer maintained.

For Terminus-related questions not answered here, please write to support@hpc.ucalgary.ca.

Introduction

A large equipment donation from HP Labs, along with additional contributions from project partners Alberta Innovation & Science (AI&S), Western Economic Diversification (WED) Canada, HP Canada, and the University of Calgary, led to the installation of a cluster with more than 1000 cores at the University of Calgary in the fall of 2008. The primary purpose of the acquisition was for research into grid and utility computing. However, a portion of the cluster is being made available for High Performance Computing (HPC) cycles to WestGrid and other approved researchers. Terminus is the name given to that part of the machine configured for HPC use.

The size of the Terminus cluster will vary, depending on the cycles required by researchers and on the awards made by the HP Labs Resource Allocation Committee (HPL-RAC). We expect at least 25% of the cycles on Terminus to be used by Alberta HPC researchers, and a further 25% by our colleagues throughout WestGrid.  Although it can be used for significant computations, keep in mind that Terminus is essentially an experimental environment that may be subject to reconfiguration from time to time.

Terminus, a Linux Opteron cluster similar to the WestGrid Matrix cluster, is intended for parallel jobs that can take advantage of its high-bandwidth, low-latency Infiniband interconnect.

Accounts

Note: During the fall of 2012, Terminus is being reconfigured as Storm.  Terminus account requests are being redirected to Storm.  See the Storm QuickStart Guide for more information.

Hardware

Processors

Terminus is a cluster composed of up to twenty HP C7000 chassis. Each chassis houses sixteen BL465c G1 CTO blades. Each blade (compute node) contains two dual-core 2.4 GHz AMD Opteron processors. So, if fully deployed for HPC workloads, Terminus could provide up to 1280 cores (20 chassis x 16 blades/chassis x 2 processors/blade x 2 cores/processor).

Approximately half the nodes have 4 GB of memory, about half have 8 GB, and one node has 16 GB.

Interconnect

The compute nodes communicate via Infiniband, a high-bandwidth, low-latency network.

Storage

There is about 9.5 TB of disk space allocated for home directories and global scratch space. Each user has a subdirectory in /scratch.
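If, as is typical, the per-user subdirectories are named after the account (an assumption; confirm with ls /scratch), you can change to your scratch space by typing:

cd /scratch/$USER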

The compute nodes are connected to an HP SFS Storage Array by Infiniband.

Software

Compilers

GNU, Portland Group and Intel compilers are available. The setup of the environment for using the compilers is handled through the module command. An overview of modules on WestGrid is largely applicable to Terminus.

To list available modules, type:

module avail

To see currently loaded modules, type:

module list

By default, modules are loaded on Terminus that set up the environment for the PGI compilers and for parallel programming with MPI (including determining which compilers are used by the wrapper scripts mpicc, mpif90, etc.).
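For example, with the default modules loaded, an MPI program could be compiled through the wrapper scripts (the source file name mpi_hello.c is only an illustration):

mpicc -o mpi_hello mpi_hello.c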

To set up the environment to use Intel compilers instead, use:

module load intel

MPI programmers using the Portland Group compilers must make sure that an Intel compiler module is not loaded; otherwise, mpif90 and similar commands will use the Intel compilers.
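To confirm which underlying compiler a wrapper will invoke, MPICH-style wrappers accept a -show option (Open MPI uses --showme instead); whether the MPI installation on Terminus supports this is an assumption worth verifying:

mpif90 -show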

Modules are unloaded with commands like:

module unload intel

Modules may also be needed at runtime. A typical batch script for an MPI program compiled with the Intel compiler includes the lines:

source /opt/Modules/default/init/modules.bash
module load intel
module load mpi

Application software

Look for installed software under /usr/apps. MATLAB and Gaussian are available only to University of Calgary researchers. FLUENT and VASP are available only to approved license holders. VMD access requires agreement to certain license conditions. Write to support@hpc.ucalgary.ca if you need access to any of these restricted packages or would like additional software installed.
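For example, to see what is installed, type:

ls /usr/apps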

GROMACS 4 performs well on Terminus and may be used by any WestGrid researchers.

Using Terminus

To log in to Terminus, connect to terminus.ucalgary.ca using an ssh (secure shell) client. For more information about connecting and setting up your environment, the WestGrid QuickStart Guide for New Users may be helpful.
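For example, from a Linux or Mac OS X terminal (replace username with your own account name):

ssh username@terminus.ucalgary.ca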

The Terminus login node may be used for short interactive runs during development. Production runs should be submitted as batch jobs. Batch jobs are submitted through SLURM (unlike WestGrid systems, which use TORQUE) and scheduled using Moab (as on WestGrid). Processors may also be reserved for interactive sessions, in a similar manner to batch jobs.
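As a rough sketch, a SLURM batch script for the Intel-compiled MPI program discussed above might look like the following. The directive values, the executable name my_mpi_program, and the use of srun as the launcher are illustrative assumptions rather than confirmed details for this system:

#!/bin/bash
#SBATCH --ntasks=8           # number of MPI processes (assumed value)
#SBATCH --time=24:00:00      # requested walltime; must not exceed the 21-day limit

# Set up the runtime environment, as described in the Software section
source /opt/Modules/default/init/modules.bash
module load intel
module load mpi

# Launch the MPI program (executable name is an illustration)
srun ./my_mpi_program

Such a script would then be submitted with a command like sbatch script_name.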

See Running Jobs for more information about submitting batch jobs and reserving processors for interactive work on Terminus.

There is a 21-day maximum walltime limit for jobs on Terminus.

A user may run a maximum of 256 jobs at one time, using a maximum of 256 processors.  It may be possible to accommodate large jobs by special request during maintenance periods.

Support

Send Terminus-specific questions to support@hpc.ucalgary.ca. If the issues being discussed overlap with work being done on WestGrid systems, you may write to support@westgrid.ca, as Terminus support personnel are also on the WestGrid list.


Updated 2012-10-25.

 


Please send corrections or suggestions about the hpc.ucalgary.ca site to support@hpc.ucalgary.ca.