HPC Hardware

For information about the previous cluster, see DLX 2010.

The Lipscomb HPC Cluster (dlx.uky.edu) was built for UK by Dell Inc. and is rated at just over 140 Teraflops, counting all CPUs and GPUs.

Basic Nodes

  • 256 Nodes (4096 cores), ~95 Teraflops
  • Dell C6220 Server, 4 nodes per 2U chassis
  • Dual Intel E5-2670 8 Core (Sandy Bridge) @ 2.6 GHz
  • 2 sockets/node x 8 cores/socket = 16 cores/node
  • 64 GB/node of 1600 MHz RAM (see the per-core sketch after this list)
  • 500 GB local (internal) SATA disk
  • Linux OS (RHEL)
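
The per-node figures above imply a simple memory budget per core. The short Python sketch below uses only the numbers in the list; nothing here is an additional specification.

    # Per-core resources on a DLX basic node, taken from the figures listed above.
    sockets_per_node = 2
    cores_per_socket = 8
    mem_gb_per_node = 64

    cores_per_node = sockets_per_node * cores_per_socket    # 2 x 8 = 16 cores/node
    mem_gb_per_core = mem_gb_per_node / cores_per_node      # 64 / 16 = 4.0 GB/core

    print(cores_per_node, "cores/node,", mem_gb_per_core, "GB RAM per core")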

Hi-Mem 'Fat' Nodes

  • 8 Nodes (256 cores), ~4.9 Teraflops (worked out in the sketch after this list)
  • Dell R820, one node per 2U
  • Quad Intel E5-4640 8 core (Sandy Bridge) @ 2.4 GHz
  • 4 sockets/node x 8 cores/socket = 32 cores/node
  • 512 GB/node of 1600 MHz RAM
  • 4 x 1 TB local (internal) NLSAS disk
  • Linux OS (RHEL)
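
The ~4.9 Teraflops rating above can be reproduced from the listed core count and clock, assuming the conventional 8 double-precision floating-point operations per cycle per core for Sandy Bridge with AVX. That per-cycle figure is an assumption about how the rating was computed, not something stated on this page; the Python sketch below is just the arithmetic.

    # Theoretical double-precision peak for the eight Hi-Mem nodes.
    nodes = 8
    cores_per_node = 4 * 8           # 4 sockets x 8 cores
    clock_hz = 2.4e9                 # E5-4640 base clock
    flops_per_cycle = 8              # assumed: Sandy Bridge AVX, double precision

    peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
    print(round(peak_tflops, 1), "Teraflops")   # 4.9 Teraflops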

GPU Nodes

  • 24 Nodes (384 CPU cores, 48 GPUs), 33.6 Teraflops (8.9 CPU and 24.7 GPU)
  • Dell C6220 Server, 4 nodes per 2U chassis
  • Dual Intel E5-2670 8 Core (Sandy Bridge) @ 2.6 GHz
  • 2 sockets/node x 8 cores/socket = 16 CPU cores/node
  • 64 GB/node of 1600 MHz RAM
  • 500 GB local (internal) SATA disk
  • Dell C410x PCIe Expansion System, 8 cards (16 max) per 3U chassis
  • 8 NVIDIA M2075 GPU Cards, configured 2 per node (see the check after this list)
  • Linux OS (RHEL)
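
A quick way to confirm the two-GPUs-per-node layout from one of these nodes is to ask the NVIDIA driver. The Python sketch below simply wraps that check; it assumes nvidia-smi is installed and on the PATH, which is typical for nodes running the NVIDIA driver.

    # List the GPUs visible on the current node; a DLX GPU node should show
    # two Tesla M2075 cards.
    import subprocess

    listing = subprocess.check_output(["nvidia-smi", "-L"])
    print(listing.decode())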

GPU 'Legacy' Nodes

  • Purchased in August 2011 for testing and experimentation
  • 4 Nodes (48 CPU cores, 16 GPUs)
  • Dell C6100 Server, 4 nodes per 2U chassis
  • Dual Intel Xeon X5650 6 Core (Westmere) @ 2.66 GHz
  • 2 sockets/node x 6 cores/socket = 12 CPU cores/node
  • 32 GB/node
  • 250 GB local (internal) SAS disk
  • Dell C410x PCIe Expansion System, 16 cards per 3U chassis
  • 16 NVIDIA M2070 GPU Cards, configured 4 per node (see the launch sketch after this list)
  • Linux OS (RHEL)
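
With four cards per node, a common pattern is to run one process per GPU and pin each process to its own card using the standard CUDA_VISIBLE_DEVICES environment variable, which CUDA applications honor. The Python sketch below shows the idea; ./my_gpu_app is a placeholder name, not a program provided on DLX.

    # One process per GPU on a 4-GPU legacy node: restrict each process to a
    # single card with CUDA_VISIBLE_DEVICES before launching it.
    import os
    import subprocess

    procs = []
    for gpu in range(4):                                         # legacy nodes hold 4 x M2070
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(["./my_gpu_app"], env=env))  # placeholder binary

    for p in procs:
        p.wait()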

Login Nodes

  • 2 Nodes (32 cores)
  • Dell R720, one node per 2U
  • Dual Intel E5-2670 8 Core (Sandy Bridge) @ 2.6 GHz
  • 2 sockets/node x 8 cores/socket = 16 cores/node
  • 128 GB/node of 1600 MHz RAM
  • 500 GB local (internal) SATA disk
  • Linux OS (RHEL)

Admin Nodes

  • 2 Nodes (32 cores)
  • Dell R720, one node per 2U
  • Dual Intel E5-2670 8 Core (Sandy Bridge) @ 2.6 GHz
  • 2 sockets/node x 8 cores/socket = 16 cores/node
  • 32 GB/node of 1600 MHz RAM
  • 500 GB local (internal) SATA disk
  • Linux OS (RHEL)

Interconnect Fabric

  • Mellanox Fourteen Data Rate (FDR) InfiniBand
  • 2:1 over-subscription, 14.0625 Gbit/s per lane (see the sketch after this list)
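
The 14.0625 Gbit/s figure above is the FDR signaling rate of a single lane; cluster links are normally 4x wide, so the per-port numbers work out as in the Python sketch below. The 4x width and the 64b/66b encoding factor are standard FDR parameters assumed here, not details stated on this page.

    # Per-port rates for a 4x FDR InfiniBand link.
    lane_gbit = 14.0625            # FDR signaling rate per lane (from the list above)
    lanes = 4                      # assumed standard 4x link width
    encoding = 64 / 66             # FDR uses 64b/66b encoding

    raw_gbit = lane_gbit * lanes             # 56.25 Gbit/s signaling per port
    data_gbit = raw_gbit * encoding          # ~54.5 Gbit/s of payload
    print(round(raw_gbit, 2), "Gbit/s raw,", round(data_gbit, 1), "Gbit/s data")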

Global cluster filesystem

  • DDN GridScaler SFA12K storage appliance running IBM GPFS (General Parallel File System)
  • 580 × 2 TB 7,200 RPM 6 Gb/s SAS drives (data)
  • 20 × 600 GB 15K RPM 6 Gb/s SAS drives (metadata)
  • 1160 TB raw with about 928 TB usable (see the arithmetic after this list)
  • Read: 25 GB/s throughput and 780,000 IOPS
  • Write: 22 GB/s throughput and 690,000 IOPS
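
The capacity numbers above are consistent with each other: 580 data drives at 2 TB each give 1160 TB raw, and 928 TB usable is exactly 80% of that. An 8+2 parity layout would produce that ratio, though the actual RAID scheme is not documented here; the Python check below is just the arithmetic.

    # Sanity check of the GridScaler capacity figures listed above.
    data_drives = 580
    drive_tb = 2
    usable_tb = 928

    raw_tb = data_drives * drive_tb          # 580 x 2 = 1160 TB raw
    efficiency = usable_tb / raw_tb          # 0.80 usable-to-raw ratio
    print(raw_tb, "TB raw,", format(efficiency, ".0%"), "usable")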

Other Information

  • Fills most of 11 equipment racks
  • Draws about 140 kW under load
  • Dedicated TSM (Tivoli Storage Manager) node for fast backups and access to near-line storage

859-218-HELP (859-218-4357) or 218help@uky.edu