
amber

About

"Amber" refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos.

Amber is distributed in two parts: AmberTools and Amber. You can use AmberTools without Amber, but not vice versa.

Amber 16 is compiled together with AmberTools 17.

When citing Amber 16 or AmberTools 17, please use the following: D.A. Case, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, D. Greene, N. Homeyer, S. Izadi, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, D. Mermelstein, K.M. Merz, G. Monard, H. Nguyen, I. Omelyan, A. Onufriev, F. Pan, R. Qi, D.R. Roe, A. Roitberg, C. Sagui, C.L. Simmerling, W.M. Botello-Smith, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, L. Xiao, D.M. York and P.A. Kollman (2017), AMBER 2017, University of California, San Francisco.

Versions and Availability

Module Names for amber on qb
Machine   Version   Module Name
qb2       14        amber/14/CUDA-65-INTEL-140-MVAPICH2-2.0
qb2       14        amber/14/INTEL-140-MVAPICH2-2.0
qb2       16        amber/16/INTEL-140-MVAPICH2-2.0
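To use one of these builds, load the module by its full name. A quick check like the one below confirms the executables are on your path (this assumes the module sets $AMBERHOME, as the job scripts further down rely on):

$ module load amber/16/INTEL-140-MVAPICH2-2.0
$ ls $AMBERHOME/bin/pmemd*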
Module FAQ

The information here is applicable to LSU HPC and LONI systems.

Shells

A user may choose between using /bin/bash and /bin/tcsh. Details about each shell follow.

/bin/bash

System resource file: /etc/profile

When one accesses the shell, the following user files are read in if they exist (in order):

  1. ~/.bash_profile (anything sent to STDOUT or STDERR will cause things like rsync to break)
  2. ~/.bashrc (interactive login only)
  3. ~/.profile

When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.

The default value of the environment variable PATH is set automatically using SoftEnv. See below for more information.
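For example, a minimal ~/.bash_profile consistent with the notes above might look like the following sketch, which stays silent (no output to STDOUT or STDERR) and defers interactive settings to ~/.bashrc:

	# ~/.bash_profile -- read at login; keep it silent so tools like rsync are not broken
	if [ -f ~/.bashrc ]; then
		. ~/.bashrc
	fi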

/bin/tcsh

The file ~/.cshrc is used to customize the user's environment if their login shell is /bin/tcsh.
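A minimal ~/.cshrc sketch (the amber module name is taken from the table above; adjust for your machine):

	# ~/.cshrc -- read by every tcsh
	if ( $?prompt ) then
		# interactive shells only
		module load amber/16/INTEL-140-MVAPICH2-2.0
	endif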

Modules

Modules is a utility which helps users manage the complex business of setting up their shell environment in the face of potentially conflicting application versions and libraries.

Default Setup

When a user logs in, the system looks for a file named .modules in their home directory. This file contains module commands to set up the initial shell environment.
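For example, a ~/.modules file that loads the Amber 16 build listed above at every login could contain just:

	# ~/.modules -- module commands executed at login
	module load amber/16/INTEL-140-MVAPICH2-2.0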

Viewing Available Modules

The command

$ module avail

displays a list of all the modules available. The list will look something like:

--- some stuff deleted ---
velvet/1.2.10/INTEL-14.0.2
vmatch/2.2.2

---------------- /usr/local/packages/Modules/modulefiles/admin -----------------
EasyBuild/1.11.1       GCC/4.9.0              INTEL-140-MPICH/3.1.1
EasyBuild/1.13.0       INTEL/14.0.2           INTEL-140-MVAPICH2/2.0
--- some stuff deleted ---

The module names take the form appname/version/compiler, providing the application name, the version, and information about how it was compiled (if needed).

Managing Modules

Besides avail, there are other basic module commands to use for manipulating the environment. These include:

add/load mod1 mod2 ... modn . . . Add (load) modules
rm/unload mod1 mod2 ... modn  . . Remove (unload) modules
switch/swap mod1 mod2 . . . . . . Switch or swap one module for another
display/show mod1 ... modn  . . . Show what a module does to the environment
list  . . . . . . . . . . . . . . List modules currently loaded
avail . . . . . . . . . . . . . . List available module names
whatis mod1 mod2 ... modn . . . . Describe listed modules

The -h option to module will list all available commands.
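A typical interactive session combining these commands might look like this (module names taken from the listing above):

$ module load amber/16/INTEL-140-MVAPICH2-2.0
$ module list
$ module whatis amber/16/INTEL-140-MVAPICH2-2.0
$ module swap amber/16/INTEL-140-MVAPICH2-2.0 amber/14/INTEL-140-MVAPICH2-2.0
$ module unload amber/14/INTEL-140-MVAPICH2-2.0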

Modules is currently available on SuperMIC and QB2.

Usage

SuperMike uses SoftEnv rather than Modules; make sure the SoftEnv keys are matched with the corresponding versions of the compiler and MPI library. For instance, on SuperMike:

+amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0
+openmpi-1.6.2-Intel-13.0.0
+cuda-5.0
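On SoftEnv-managed machines these keys normally go into the ~/.soft file in your home directory, after which resoft applies the change; a sketch:

	# ~/.soft
	+amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0
	+openmpi-1.6.2-Intel-13.0.0
	+cuda-5.0
	@default

$ resoft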

Amber is normally run via a PBS job script. To run Amber 16 in batch, include either #PBS -V (when the Amber 16 module has already been loaded in your login environment) or a module load amber/16/INTEL-140-MVAPICH2-2.0 line in the PBS script.
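If you choose the module load approach, the relevant lines inside the PBS script would simply be:

	# inside the PBS script, instead of relying on #PBS -V:
	module load amber/16/INTEL-140-MVAPICH2-2.0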

MPI

Note: the usual executable is pmemd (serial; not recommended for production runs) or pmemd.MPI (parallel).

On SuperMIC and QB2, use "pmemd.MPI" to run Amber. Below is a sample script which runs Amber with 2 nodes (40 CPU cores):

	#!/bin/bash
	#PBS -A my_allocation
	#PBS -q checkpt
	#PBS -l nodes=2:ppn=20
	#PBS -l walltime=HH:MM:SS
	#PBS -j oe
	#PBS -N JOB_NAME
	#PBS -V

	cd $PBS_O_WORKDIR
	mpirun -np 40 $AMBERHOME/bin/pmemd.MPI -O -i mdin.CPU -o mdout -p prmtop -c inpcrd
    
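Save the script (for example as amber_mpi.pbs, a file name chosen here only for illustration) and submit it with qsub:

$ qsub amber_mpi.pbs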

GPU acceleration

Note: the usual executable name used for Amber 16 GPU acceleration is pmemd.cuda (serial) or pmemd.cuda.MPI (parallel).

pmemd.cuda and pmemd.cuda.MPI in Amber 16 were built with the Intel 15.0.0 compiler and CUDA 7.5, both of which are required at run time. Please load the Intel 15.0.0 compiler and CUDA 7.5 into your user environment before running pmemd.cuda.
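On a modules-managed system this could look like the following; the exact Intel and CUDA module names are assumptions, so verify them with module avail on your machine:

$ module load intel/15.0.0   # name assumed; check module avail
$ module load cuda/7.5       # name assumed; check module avail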

Only pmemd.cuda is recommended for GPU acceleration on SuperMIC, as each compute node on SuperMIC has only one GPU. Do not attempt to run regular GPU MD runs across multiple nodes: the InfiniBand interconnect cannot keep up with the computation speed of modern GPUs, so multi-node GPU runs scale poorly.

On SuperMIC, GPU jobs must be submitted to the hybrid queue.

On SuperMIC and QB2, use "pmemd.cuda" to run Amber 16 with GPU acceleration in serial. Below is a sample script which runs Amber 16 on 1 node:

		#!/bin/bash
		#PBS -A my_allocation
		#PBS -q hybrid
		#PBS -l nodes=1:ppn=20
		#PBS -l walltime=HH:MM:SS
		#PBS -j oe
		#PBS -N JOB_NAME
		#PBS -V

		cd $PBS_O_WORKDIR
		$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu -p prmtop -c inpcrd
    

Note that pmemd.cuda is a serial program, so no parallel launcher such as mpirun is required; if you do use mpirun, set -np 1.

On QB2, where each compute node has two GPUs, "pmemd.cuda.MPI" can be used to run Amber 16 with GPU acceleration in parallel. Below is a sample script which runs Amber 16 on 1 node (2 GPUs) on QB2:

	#!/bin/bash
	#PBS -A my_allocation
	#PBS -q hybrid
	#PBS -l nodes=1:ppn=20
	#PBS -l walltime=HH:MM:SS
	#PBS -j oe
	#PBS -N JOB_NAME
	#PBS -V

	cd $PBS_O_WORKDIR
	mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin.GPU -o mdout_2gpu -p prmtop -c inpcrd -ref inpcrd
      

Use -np # where # is the number of GPUs you are requesting, NOT the number of CPU cores. Note that pmemd.cuda.MPI is significantly faster than pmemd.cuda only for production runs of large systems.
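To confirm that a job is actually using the GPU(s), find the node(s) it was assigned and check them with nvidia-smi while the job runs; you should see pmemd.cuda (or pmemd.cuda.MPI) in the process list:

$ qstat -n JOB_ID
$ ssh NODE_NAME nvidia-smi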

Resources

  • The Amber Home Page has a variety of on-line resources available, including manuals and tutorials.

Last modified: October 31 2017 10:16:26.
