HPC Amber on GPUs

Supercomputer documentation is always a work in progress! Please email questions, corrections, or suggestions to the HPC support team at help-hpc@uky.edu as usual. Thanks!

Running Amber on GPUs

The Amber PMEMD module can be run on the GPU Nodes, and in many cases it will run much faster than on the Basic Nodes. (Only the PMEMD module is GPU-enabled in Amber 11.) Write a batch script to run your job on the GPU nodes, using one of these two samples as a starting point:

Use one GPU on one node

#!/bin/bash
#SBATCH --tasks=1
#SBATCH --tasks-per-node=1
#SBATCH --partition=GPU
mpiexec pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r rstcrd -ref refcrd -x mdcrd

Use several GPUs on several nodes

#!/bin/bash
#SBATCH --tasks=nn
#SBATCH --tasks-per-node=nn
#SBATCH --partition=GPU
export GPU_DIRECT=1
mpiexec pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r rstcrd -ref refcrd -x mdcrd

Notes on using multiple GPUs

export GPU_DIRECT=1     Must be included in your script.
--tasks=nn              Total number of GPUs your job will use.
--tasks-per-node=nn     Number of GPUs to use on each node.
The number of nodes used is calculated from these two values, as shown in the table and the example below.
Tasks    Tasks/Node    Nodes Used
4        4             1
4        2             2
4        1             4
8        2             4
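
For illustration only, a job that requests 4 GPUs with 2 GPUs per node (and therefore 2 nodes, per the table above) could use a script like the following; it simply fills in the multi-GPU sample with that row of the table.

#!/bin/bash
# Example values: 4 GPUs total, 2 per node, so the scheduler allocates 2 nodes
#SBATCH --tasks=4
#SBATCH --tasks-per-node=2
#SBATCH --partition=GPU
export GPU_DIRECT=1
mpiexec pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r rstcrd -ref refcrd -x mdcrd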

Submitting a job

Make sure the Amber module is loaded before submitting a job. This only needs to be done once per login.

module load amber/openmpi/11p20
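
As an optional sanity check, you can confirm the module is active and that the GPU-enabled PMEMD binary is on your path:

module list
which pmemd.cuda.MPI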

Submit the script to the batch scheduler as usual.

sbatch scriptname

Experiment with your code to find the most efficient number of GPUs and nodes to use. Start with one GPU on one node, then check whether you get a significant speedup from using more GPUs per node or more nodes.
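
One way to run that experiment is to submit the same script several times with different GPU counts. The sketch below is illustrative only: it assumes your batch script is named amber.sh, keeps each run in its own directory so the output files do not overwrite one another, and relies on sbatch command-line options overriding the matching #SBATCH lines in the script.

#!/bin/bash
# Hypothetical scaling test: run the same job with 1, 2, 4, and 8 GPUs (2 per node)
for n in 1 2 4 8; do
    mkdir -p gpus_$n
    cp mdin prmtop inpcrd refcrd amber.sh gpus_$n/
    (cd gpus_$n && sbatch --tasks=$n --tasks-per-node=2 amber.sh)
done
# When the jobs finish, compare the timing summary (ns/day) near the end of each mdout file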

For additional help, call 859-218-HELP (859-218-4357) or email 218help@uky.edu.