HPC FAQ - GPUs
Supercomputer documentation is always a work in progress! Please email questions, corrections, or suggestions to the HPC support team at firstname.lastname@example.org as usual. Thanks!
Please don't run non-GPU code on the GPU nodes!
There are four GPU-enabled nodes on the DLX supercomputing cluster. The nodes are identical to the basic compute nodes (12 cores with 36 GB of RAM), except that each node has four Nvidia M2070 GPUs attached. GPU-enabled code often runs many times faster than the same code on a CPU alone.
The limit on the GPU queue is one day (24 hours).
If you do not put a time limit on jobs submitted to the GPU queue, they will wait in the queue forever!
Add `#SBATCH -t 24:00:00` to your batch job script before submitting it.
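For example, a minimal batch script with the required time limit might look like this (the job name and program are placeholders, not part of the CCS documentation):

```shell
#!/bin/bash
#SBATCH -t 24:00:00    # wall-clock limit: at most one day (24 hours) on the GPU queue
#SBATCH -J gpu_test    # job name (placeholder)

./my_gpu_program       # placeholder for your GPU-enabled executable
```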
Frequently Asked Questions
- 1. How do I run a job in the GPU queue?
- 2. How do I use Amber with GPUs?
- 3. How do I use Abacus with GPUs?
- 4. How do I use NAMD with GPUs?
- 5. Can I write my own GPU code?
To run a job with GPU-enabled code, put the GPU partition option into your job script, or add the partition flag to the sbatch command line. One or the other is enough; you do not need to do both.
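A sketch of both approaches, assuming the GPU partition is named `GPU` (run `sinfo` to confirm the actual partition name on DLX):

```shell
# Option 1: in the batch script itself
#SBATCH -p GPU

# Option 2: on the command line when submitting (equivalent; use one or the other)
sbatch -p GPU myjob.sh
```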
Only the PMEMD module in Amber 11 is GPU-enabled. The Amber sample jobs that CCS tested ran much faster when using GPUs.
See the page Amber on GPUs for information on running Amber on the GPU nodes.
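As a sketch, a GPU batch job for PMEMD might look like the following (the module name, partition name, input file names, and the `pmemd.cuda` executable name are assumptions; see the Amber on GPUs page for the tested recipe):

```shell
#!/bin/bash
#SBATCH -t 24:00:00    # required: the GPU queue has a 24-hour limit
#SBATCH -p GPU         # GPU partition (name is an assumption; confirm with sinfo)

module load amber      # module name is an assumption

# pmemd.cuda is Amber's GPU-enabled PMEMD executable; input/output
# file names below are the conventional Amber defaults, shown as placeholders
pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd
```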
This information will be coming soon.
If you are interested in GPU-enabling your own code, see the extensive Nvidia GPU developer information on the Nvidia web page: http://developer.nvidia.com/gpu-computing-sdk.
Note that the "SDK" is a misnomer; it is mostly sample code. The Toolkit is the actual development environment, which you set up by loading the CUDA module (`module load cuda`).
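As a starting point, a trivial CUDA program can be built and run on a GPU node roughly as follows (the file name and kernel are illustrative, not from the CCS documentation):

```shell
# Write a minimal CUDA source file (illustrative only)
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch one block of 4 GPU threads
    cudaDeviceSynchronize();  // wait for the kernel to finish
    return 0;
}
EOF

module load cuda          # makes nvcc, the CUDA compiler, available
nvcc -o hello hello.cu    # compile with the Toolkit's compiler
./hello                   # run on a GPU node (e.g. inside a GPU-queue job)
```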