The majority of the compute nodes have Intel processors, while a few have AMD processors. Each compute node has a local
/scratch drive (see above for size), which is either a hard disk drive (HDD), a solid-state drive (SSD), or a Non-Volatile Memory Express (NVMe) drive. Each node also has a small
/tmp drive (4-8 GiB).
The compute nodes can only be utilized by submitting jobs via the scheduler - it is not possible to log in to compute nodes directly.
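As a sketch of what such a job submission might look like - assuming an SGE-style scheduler with `qsub` and `#$` directives (the directive syntax, resource names, and paths below are assumptions; adapt them to the scheduler and policies actually deployed) - a minimal job script could be:

```shell
#!/bin/bash
#$ -S /bin/bash        # interpret the job with bash (assumed SGE-style directive)
#$ -cwd                # run the job from the submission directory
#$ -l mem_free=2G      # hypothetical memory request; adjust to local policy
#$ -l scratch=10G      # hypothetical request for node-local /scratch space

# Use the node-local /scratch drive for temporary I/O
WORKDIR=/scratch/$USER/$JOB_ID
mkdir -p "$WORKDIR"

echo "Running on $(hostname)"

# Clean up local scratch when done
rm -rf "$WORKDIR"
```

Such a script would typically be submitted with something like `qsub job.sh`, after which the scheduler dispatches it to an available compute node.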
The cluster can be accessed via SSH to one of two login nodes:
For transferring large data files, it is recommended to use one of the dedicated data transfer nodes:
which both have 10 Gbps connections - providing a theoretical file transfer speed of up to 1.25 GB/s = 4.5 TB/h. As with the login nodes, the transfer nodes can be accessed via SSH.
Comment: You can also transfer data via the login nodes, but since those only have 1 Gbps connections, you will see much lower transfer rates.
The cluster has development nodes for the purpose of validating scripts, prototyping pipelines, compiling software, and more. Development nodes can be accessed from the login nodes.
| Node | Cores | RAM | Local /scratch | Processor | GPU |
|------|-------|-----|----------------|-----------|-----|
| dev1 | 8 | 16 GiB | 0.11 TiB | Intel Xeon E5430 2.66GHz | |
| dev2 | 32 | 512 GiB | 1.1 TiB | Intel Xeon E5-2640 v3 2.60GHz | |
| dev3 | 32 | 512 GiB | 1.1 TiB | Intel Xeon E5-2640 v3 2.60GHz | |
| gpudev1 | 12 | 48 GiB | 0.37 TiB | Intel Xeon X5650 2.67GHz | GeForce GTX 980 Ti |
Comment: Please use the GPU development node only if you need to build or prototype GPU software.
The Wynton cluster provides two types of scratch storage:
- /scratch/ - 0.1-1.8 TiB/node of storage unique to each compute node (can only be accessed from that specific compute node).
- /wynton/scratch/ - 492 TiB of storage (BeeGFS) accessible from everywhere.
There are no per-user quotas in these scratch spaces. Files that have not been added or modified during the last two weeks are automatically deleted on a nightly basis. Note that files with old timestamps that were added to the scratch space during this period will not be deleted; this covers the use case where files with old timestamps are extracted from a tar.gz file. (Details:
tmpwatch --ctime --dirmtime --all --force is used for the cleanup.)
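Because the cleanup keys on the inode change time (`--ctime`), refreshing a file's timestamps resets its two-week clock. A minimal sketch of one way to do this for files you still need (the directory path below is a hypothetical example; check site policy before relying on this):

```shell
# Refresh the timestamps of all files under a scratch folder so the
# nightly tmpwatch sweep treats them as recently changed.
# The path below is a hypothetical example.
DIR=/wynton/scratch/$USER/my-project
find "$DIR" -type f -exec touch {} +
```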
/wynton/home: 383 TiB storage space
/wynton/group: 3800 TB (= 3.8 PB) storage space
Each user may use up to 500 GiB of disk space in their home directory. Research groups can add storage space under
/wynton/group by either mounting their existing storage or purchasing new storage.
The majority of the compute nodes are connected to the local network with 1 Gbps or 10 Gbps network cards, while a few have 40 Gbps cards.
The cluster itself connects to NSF's Pacific Research Platform at a speed of 100 Gbps - providing a theoretical file transfer speed of up to 12.5 GB/s = 45 TB/h.
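The bandwidth-to-throughput figures quoted above (1 Gbps login, 10 Gbps transfer, 100 Gbps uplink) all follow from the same conversion - divide by 8 bits per byte, then scale to hours. A small sketch of the arithmetic, using decimal GB/TB as in the text:

```python
def link_speed(gbps):
    """Convert a link speed in Gbps to (GB/s, TB/h), using decimal units."""
    gb_per_s = gbps / 8               # 8 bits per byte
    tb_per_h = gb_per_s * 3600 / 1000  # seconds per hour, GB per TB
    return gb_per_s, tb_per_h

for gbps in (1, 10, 100):
    gb_s, tb_h = link_speed(gbps)
    print(f"{gbps} Gbps = {gb_s} GB/s = {tb_h} TB/h")
# 10 Gbps = 1.25 GB/s = 4.5 TB/h; 100 Gbps = 12.5 GB/s = 45.0 TB/h
```

These are theoretical line rates; actual transfer speeds will be lower due to protocol overhead and disk throughput.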