Cluster Specifications

Software environment

All nodes on the cluster run CentOS 7, which is updated on a regular basis. The job scheduler is SGE 8.1.9 (Son of Grid Engine), which provides queues for both communal and lab-priority tasks.
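To see which queues are currently available and how loaded they are, the standard SGE client commands can be run from a login node; this is a minimal sketch and assumes the SGE tools are already on your PATH:

    # Summarize all cluster queues (available, used, and total slots)
    qstat -g c

    # List each queue instance per host together with its state
    qstat -f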

Hardware

Compute Nodes

Compute nodes     87
Physical cores    1832
CPU               2.6-3.4 GHz
RAM               48-768 GiB
Swap              4 GiB
Local /scratch    0.1-1.8 TiB
Local /tmp        4 GiB

Most compute nodes have Intel processors, while others have AMD processors. Each compute node has a local drive, which is either a hard disk drive (HDD), a solid-state drive (SSD), or a Non-Volatile Memory Express (NVMe) drive. For additional details on the compute nodes, see the Details section below.

The compute nodes can only be used by submitting jobs via the scheduler; it is not possible to log in to the compute nodes directly.
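As an illustration, a batch job could be submitted roughly as follows; this is a minimal sketch, where hello.sh is a hypothetical job script and the resource names (h_rt, mem_free) are common SGE conventions rather than confirmed Wynton settings:

    # Submit hello.sh from the current working directory, requesting
    # 10 minutes of runtime and 1 GiB of memory (resource names assumed)
    qsub -cwd -l h_rt=00:10:00 -l mem_free=1G hello.sh

    # Check the status of your own jobs
    qstat -u "$USER"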

Login Nodes

The cluster can be accessed via SSH to one of two login nodes:

  1. wynlog1: wynlog1.compbio.ucsf.edu
  2. wynlog2: wynlog2.cc.ucsf.edu
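For example, to open a session on the first login node (alice is a placeholder for your own Wynton username):

    ssh alice@wynlog1.compbio.ucsf.edu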

Data Transfer Nodes

For transferring large data files, it is recommended to use the dedicated data transfer node:

  1. wyndt1: wyndt1.compbio.ucsf.edu

which has a 10 Gbps connection, providing a theoretical file-transfer speed of up to 1.25 GB/s = 4.5 TB/h. Like the login nodes, the data transfer node can be accessed via SSH.

Comment: You can also transfer data via the login nodes, but since those only have 1 Gbps connections, you will see much lower transfer rates.
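As a sketch of how such a transfer might look, the example below copies a local directory to your home directory on the cluster through the data transfer node; the username alice and the directory my_dataset are purely illustrative:

    # Copy a local directory to the cluster via the data transfer node,
    # preserving timestamps and showing per-file progress
    rsync -av --progress ./my_dataset/ alice@wyndt1.compbio.ucsf.edu:~/my_dataset/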

Development Nodes

The cluster has development nodes for the purpose of validating scripts, prototyping pipelines, compiling software, and more. Development nodes can be accessed from the login nodes.

Node      # Physical Cores  CPU       RAM     Local /scratch
qb3-dev1  8                 2.66 GHz  16 GiB  0.125 TiB

The development nodes have Intel Xeon CPU E5430 @ 2.66 GHz processors and local solid state drives (SSDs).
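For instance, once logged in to a login node you could continue to the development node listed above via SSH; this assumes the short hostname resolves from the login nodes:

    # From a login node, hop onto the development node
    ssh qb3-dev1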

Scratch Storage

The Wynton cluster provides two types of scratch storage:

  1. local /scratch: a local scratch directory unique to each compute node (see the table above)
  2. global /wynton/scratch: a shared scratch space accessible from all nodes

There are no per-user quotas in these scratch spaces. Files not added or modified during the last two weeks are automatically deleted on a nightly basis. Note that files with old timestamps that were “added” to the scratch space during this period will not be deleted; this covers the use case where files with old timestamps are extracted from a tar.gz file. (Details: tmpwatch --ctime --dirmtime --all --force is used for the cleanup.)
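To get a rough idea of which of your own files are approaching the cleanup threshold, you can look for files whose status-change time (ctime, which is what the tmpwatch options above consider for files) is older than two weeks; the per-user path below is only an assumption about how your scratch files are organized:

    # List files under your local scratch directory with a ctime older than 14 days
    find /scratch/$USER -type f -ctime +14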

User and Lab Storage

Each user may use up to 200 GiB of disk space in their home directory. Research groups can add storage space by either mounting their existing storage or purchasing new storage. Important: the Wynton HPC storage is not backed up. Users and labs are responsible for backing up their own data outside of Wynton.
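To check how much of the 200 GiB you are currently using, one simple (if slow) approach is to sum up the home directory with standard tools; whether a dedicated quota command is available is not covered here:

    # Report the total size of your home directory
    du -sh "$HOME"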

Network

The compute nodes are connected using 10 Gbps Ethernet. The cluster connects to NSF’s Pacific Research Platform at a speed of 100 Gbps.

Details

All Compute Nodes

Source: host_table.tsv produced using wyntonquery.
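If you have a copy of that file, it can be inspected directly at the command line; the only assumption made here is that the file is tab-separated, as the .tsv extension suggests:

    # Pretty-print the tab-separated host table as aligned columns
    column -t -s $'\t' host_table.tsv | less -S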