autodock

About

AutoDock is a suite of automated docking tools consisting of AutoGrid and AutoDock. It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.

Versions and Availability

Softenv Keys for autodock on all clusters

Machine      Version   Softenv Key
supermike2   4.2.3     +autodock-4.2.3-Intel-13.0.0
Softenv FAQ

The information here is applicable to LSU HPC and LONI systems.

Shells

A user may choose between using /bin/bash and /bin/tcsh. Details about each shell follow.

/bin/bash

System resource file: /etc/profile

When one accesses the shell, the following user files are read in if they exist (in order):

  1. ~/.bash_profile (anything sent to STDOUT or STDERR will cause things like rsync to break)
  2. ~/.bashrc (interactive login only)
  3. ~/.profile

When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.

The default value of the environment variable PATH is set automatically using SoftEnv. See below for more information.
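
Because ~/.bash_profile must stay silent (see the note above), a common pattern is to keep it minimal and delegate interactive settings to ~/.bashrc. A minimal sketch (the contents are an assumption, not a site requirement):

 # ~/.bash_profile -- keep quiet: anything written to STDOUT/STDERR
 # here breaks non-interactive tools such as rsync.
 if [ -f ~/.bashrc ]; then
     . ~/.bashrc    # pull in interactive settings
 fi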

/bin/tcsh

The file ~/.cshrc is used to customize the user's environment if their login shell is /bin/tcsh.

Softenv

SoftEnv is a utility that helps users manage complex user environments with potentially conflicting application versions and libraries.

System Default Path

When a user logs in, the system /etc/profile or /etc/csh.cshrc (depending on login shell, and mirrored from csm:/cfmroot/etc/profile) calls /usr/local/packages/softenv-1.6.2/bin/use.softenv.sh to set up the default path via the SoftEnv database.

SoftEnv looks for a user's ~/.soft file and updates the variables and paths accordingly.

Viewing Available Packages

The command softenv will provide a list of available packages. The listing will look something like:

$ softenv
These are the macros available:
*   @default
These are the keywords explicitly available:
+amber-8                       Applications: 'Amber', version: 8 Amber is a
+apache-ant-1.6.5              Ant, Java based XML make system version: 1.6.
+charm-5.9                     Applications: 'Charm++', version: 5.9 Charm++
+default                       this is the default environment...nukes /etc/
+essl-4.2                      Libraries: 'ESSL', version: 4.2 ESSL is a sta
+gaussian-03                   Applications: 'Gaussian', version: 03 Gaussia
... some stuff deleted ...

Managing SoftEnv

The file ~/.soft in the user's home directory is where the different packages are managed. Add the +keyword to your .soft file. For instance, if one wants to add the Amber Molecular Dynamics package to their environment, the end of the .soft file should look like this:

+amber-8

@default

To update the environment after modifying this file, one simply uses the resoft command:

% resoft

The command soft can be used to manipulate the environment from the command line. It takes the form:

$ soft add/delete +keyword

Using this method of adding or removing keywords requires the user to pay attention to possible order dependencies; that is, best results require removing keywords in the reverse order in which they were added. It is handy for testing individual keys, but can lead to trouble when changing multiple keys. Changing the .soft file and issuing resoft is the recommended way of dealing with multiple changes.
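
For example, to try the autodock key from the table above for the current session only (a sketch using the supermike2 key; remove it when done):

 $ soft add +autodock-4.2.3-Intel-13.0.0      # add the key for this session
 $ soft delete +autodock-4.2.3-Intel-13.0.0   # remove it again when finished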

Module Names for autodock on all clusters

Machine          Version   Module
None Available   N/A       N/A
Module FAQ

Modules

Modules is a utility which helps users manage the complex business of setting up their shell environment in the face of potentially conflicting application versions and libraries.

Default Setup

When a user logs in, the system looks for a file named .modules in their home directory. This file contains module commands to set up the initial shell environment.
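
For instance, a hypothetical ~/.modules file that loads a compiler at every login might contain (module name taken from the listing below):

 # ~/.modules -- module commands executed at login
 module load INTEL/14.0.2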

Viewing Available Modules

The command

$ module avail

displays a list of all the modules available. The list will look something like:

--- some stuff deleted ---
velvet/1.2.10/INTEL-14.0.2
vmatch/2.2.2

---------------- /usr/local/packages/Modules/modulefiles/admin -----------------
EasyBuild/1.11.1       GCC/4.9.0              INTEL-140-MPICH/3.1.1
EasyBuild/1.13.0       INTEL/14.0.2           INTEL-140-MVAPICH2/2.0
--- some stuff deleted ---

The module names take the form appname/version/compiler, providing the application name, the version, and information about how it was compiled (if needed).

Managing Modules

Besides avail, there are other basic module commands to use for manipulating the environment. These include:

add/load mod1 mod2 ... modn . . . Add modules
rm/unload mod1 mod2 ... modn  . . Remove modules
switch/swap mod . . . . . . . . . Switch or swap one module for another
display/show mod  . . . . . . . . Show the changes a module makes
list  . . . . . . . . . . . . . . List modules loaded in the environment
avail . . . . . . . . . . . . . . List available module names
whatis mod1 mod2 ... modn . . . . Describe listed modules

The -h option to module will list all available commands.
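
A short session illustrating these commands, using module names from the listing above (a sketch, not required setup):

 $ module add GCC/4.9.0                 # add the GCC 4.9.0 module
 $ module whatis GCC/4.9.0              # show its one-line description
 $ module swap GCC/4.9.0 INTEL/14.0.2   # swap GCC for the Intel compiler
 $ module rm INTEL/14.0.2               # remove it when finished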

Modules is currently available only on SuperMIC.

Usage

Please be aware that AutoDock and AutoGrid are serial (non-parallel) codes. They should be run in the single queue, which uses one processor core. Running in any other queue will leave cores idle, but the job will still be charged for all cores.

AutoGrid must be executed prior to AutoDock:

usage: autogrid4 -p parameter_filename
                 -l log_filename
                 -d (increment debug level)
                 -h (display this message)
                 --version (print autogrid version)

usage: autodock4 -p parameter_filename
                 -l log_filename
                 -k (keep original residue numbers)
                 -i (ignore header-checking)
                 -t (parse PDBQT file for torsions, then stop)
                 -d (increment debug level)
                 -C (print copyright notice)
                 --version (print autodock version)
                 --help (display this message)

To run an AutoDock simulation successfully, first run an AutoGrid calculation to generate the grid maps around the receptor atoms at which interaction potentials are precomputed.
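
For example, the two steps can be run with the input filenames used in the example scripts below (a sketch; production runs should be submitted as batch jobs rather than run on a login node):

 $ autogrid4 -p hsg1.gpf -l hsg1.glg   # precompute the grid maps
 $ autodock4 -p ind.dpf -l ind.dlg     # then run the docking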

Example AutoGrid Submit Script
 #!/bin/bash
 
#PBS -A your_allocation
#PBS -q single
#    Note: a single queue is not present on Queen Bee,
#    use workq or checkpt, but you will be charged for all cores.
#PBS -M your_email
# Change ppn to match cluster (4, 8 or 16) if no single queue.
#PBS -l nodes=1:ppn=1
#PBS -l walltime=06:00:00
#PBS -V
#PBS -o AutoGrid_test.out
#PBS -e AutoGrid_test.err
#PBS -N autogridtest
 
export EXEC=autogrid4 
export INPUT=hsg1.gpf
export OUTPUT=hsg1.glg
export WORK_DIR=$PBS_O_WORKDIR
 
cd $WORK_DIR
 
$EXEC -p $INPUT -l $OUTPUT

Submit your script using qsub.

QSub FAQ

Portable Batch System: qsub

All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is basically a shell script which also contains directives for PBS.

Usage
$ qsub job_script

Where job_script is the name of the file containing the script.
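
For example, to submit the AutoGrid script from this page and check on it afterwards (the filename autogrid_test.pbs is an assumption; qsub prints the new job's id, which qstat can then track):

 $ qsub autogrid_test.pbs   # submit; prints the job id
 $ qstat -u $USER           # list your jobs and their states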

PBS Directives

PBS directives take the form:

#PBS -X value

Where X is one of many single letter options, and value is the desired setting. All PBS directives must appear before any active shell statement.

Example Job Script
 #!/bin/bash
 #
 # Use "workq" as the job queue, and specify the allocation code.
 #
 #PBS -q workq
 #PBS -A your_allocation_code
 # 
 # Assuming you want to run 16 processes, and each node supports 4 processes, 
 # you need to ask for a total of 4 nodes. The number of processes per node 
 # will vary from machine to machine, so double-check that you have the right 
 # values before submitting the job.
 #
 #PBS -l nodes=4:ppn=4
 # 
 # Set the maximum wall-clock time. In this case, 10 minutes.
 #
 #PBS -l walltime=00:10:00
 # 
 # Specify the name of a file which will receive all standard output,
 # and merge standard error with standard output.
 #
 #PBS -o /scratch/myName/parallel/output
 #PBS -j oe
 # 
 # Give the job a name so it can be easily tracked with qstat.
 #
 #PBS -N MyParJob
 #
 # That is it for PBS instructions. The rest of the file is a shell script.
 # 
 # PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
 #
 #   1. Copy the necessary files from your home directory to your scratch directory.
 #   2. Execute in your scratch directory.
 #   3. Copy any necessary files back to your home directory.

 # Let's mark the time things get started.

 date

 # Set some handy environment variables.

 export HOME_DIR=/home/$USER/parallel
 export WORK_DIR=/scratch/myName/parallel
 
 # Set a variable that will be used to tell MPI how many processes will be run.
 # This makes sure MPI gets the same information provided to PBS above.

 export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`

 # Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".

 cp $HOME_DIR/hydro $WORK_DIR
 cd $WORK_DIR
 mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

 # Mark the time processing ends.

 date
 
 # And we're out'a here!

 exit 0

If you have successfully run the AutoGrid simulation to completion, you can run the AutoDock simulation:

Example AutoDock Submit Script
 #!/bin/bash
 
#PBS -A your_allocation
#PBS -q single
#    Note: a single queue is not present on Queen Bee,
#    use workq or checkpt, but you will be charged for all cores.
#PBS -M your_email
# Change ppn to match cluster (4, 8 or 16) if no single queue.
#PBS -l nodes=1:ppn=1
#PBS -l walltime=06:00:00
#PBS -V
#PBS -o AutoDock_test.out
#PBS -e AutoDock_test.err
#PBS -N autodocktest
 
export EXEC=autodock4 
export INPUT=ind.dpf
export OUTPUT=ind.dlg
export WORK_DIR=$PBS_O_WORKDIR
 
cd $WORK_DIR
 
$EXEC -p $INPUT -l $OUTPUT

Submit your script using qsub.

Note: To run an AutoDock calculation successfully, your AutoGrid job must complete without errors. If you are comfortable with submit scripts, you can submit both the AutoGrid and AutoDock jobs together using PBS job chains and dependencies.

PBS Job Chains and Dependencies FAQ

PBS Job Chains

Quite often, a single simulation requires multiple long runs which must be processed in sequence. One method for creating such a sequence of batch jobs is to have each job execute qsub to submit its successor. We strongly discourage these recursive, or "self-submitting," scripts: when a job hits its time limit, the batch system kills it, and the command to submit the subsequent job is never processed, silently breaking the chain.

PBS allows users to move the logic for chaining from the script and into the scheduler. This is done with a command line option:

$ qsub -W depend=afterok:<jobid> <job_script>

This tells the job scheduler that the script being submitted should not start until jobid completes successfully. The following conditions are supported:

afterok:<jobid>
Job is scheduled if the job <jobid> exits without errors or is successfully completed.
afternotok:<jobid>
Job is scheduled if job <jobid> exited with errors.
afterany:<jobid>
Job is scheduled if the job <jobid> exits with or without errors.

One method to simplify this process is to write multiple batch scripts, job1.pbs, job2.pbs, job3.pbs, etc., and submit them using the following script:

#!/bin/bash
 
FIRST=$(qsub job1.pbs)
echo $FIRST
SECOND=$(qsub -W depend=afterany:$FIRST job2.pbs)
echo $SECOND
THIRD=$(qsub -W depend=afterany:$SECOND job3.pbs)
echo $THIRD

Modify the script according to the number of chained jobs required. The job <$FIRST> will be placed in the queue, while the jobs <$SECOND> and <$THIRD> will be held with the "Not Queued" (NQ) flag in batch hold. When <$FIRST> completes, the NQ flag will be replaced with the "Queued" (Q) flag, and the job will be moved to the active queue.

A few words of caution: if you list the dependency as afterok and the preceding job exits with errors (or afternotok and it exits without errors), your subsequent jobs will be killed due to "dependency not met".
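
Applied to this page's workflow, a minimal sketch that chains the two example scripts (the filenames autogrid_test.pbs and autodock_test.pbs are assumptions; substitute your own):

 #!/bin/bash
 # Submit the AutoGrid job, then queue the AutoDock job to start only
 # if the AutoGrid job exits without errors (afterok).
 GRID=$(qsub autogrid_test.pbs)
 echo $GRID
 qsub -W depend=afterok:$GRID autodock_test.pbs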

Last modified: August 21 2017 10:47:37.