HPC - HP Cluster
Description of the HP Superdome complex
The new system is a cluster of four HP Superdome servers. Three nodes are 64-way and one is 32-way for a total of 224 processors.
The nodes utilize the PA-8700 running at 750 MHz for an aggregate (peak) rating of approximately 672 GFLOPS. Each machine has two gigabytes of main memory per processor for a total of 448 gigabytes.
Overall, there are 5 terabytes of total disk space: 734 GB of mirrored user space and a minimum of 500 GB of scratch space per node.
The four nodes are interconnected by a high-speed, low-latency HP/Myrinet HyperFabric network, and each node also has a Gigabit connection to the campus network. Although each Superdome operates as a fully independent machine running HP-UX, the message-passing architecture allows jobs to use processors across more than one host machine.
HP Technical Documentation
For detailed technical documentation on the HP-UX operating system as well as manuals for HP hardware and software see HP's Web site at docs.hp.com.
Transition from the HP N-4000 Complex to the Superdome Complex
Transition from the HP N-4000 complex to the new Superdome complex involves a significant number of changes. The purpose of this document is to communicate these changes so that users may take full advantage of UKy's new supercomputer.
The single most important change is that the new Superdome complex is a cluster of individual nodes with a shared filesystem for home directories rather than a single machine. The new machines use a processor that is approximately twice as fast as the processors on the previous system.
NOTE: The new systems support 64-bit computation; to use it, compile your programs with the +DA2.0W option. To make effective use of 64-bit mode, you must declare your variables with the appropriate widths.
The cluster runs HP-UX 11.0. Consequently, several software packages that could not be run on the HP Exemplar have been updated to run on the new HP cluster. For details on the availability of software on the new cluster, see the Software Documentation.
For complete details on specific changes for users transitioning to the new cluster see the Transition Documentation.