MP cluster!

monika, or monikaMP from outside the campus (jobs)

DOCS ganglia nagios compilers (AMD Open64, Intel) OpenMPI IntelMPI torque PBS maui scheduler

DB APPS alpine ebt ebttool emacs gnuplot-4.2 gromacs-4.5.3 mathematica-7.0 nautilus octave R-2.10.1 xmgrace xppaut

ACCESS Xming (X server for GUI on Windows), PuTTY & PuTTYgen (terminal connection with keys)

MAIN CONTACT Milan Predota (predota AT prf.jcu.cz)
ADMINISTRATOR Pavel Fibich (pavel.fibich AT prf.jcu.cz)

MACHINE SPEEDS

The benchmark system is quartz surfaces with an aqueous solution in between (16286 atoms, 3584 of them frozen):
mp2-mp6 ~5 ns/day
mp8 (only CPU) ~8 ns/day
mp9 (only CPU) ~11 ns/day
mp10-mp12 (only CPU) ~17 ns/day
mp8 (both CPU and GPU) ~23 ns/day
mp9 (both CPU and GPU) ~52 ns/day
mp10-mp12 (both CPU and GPU) ~97 ns/day

RULES

HOWTO

show jobs in the terminal, or via GUI (db1 only)

qstat
xpbs
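
To use the xpbs GUI from Windows, start the X server first (Xming, see ACCESS above) and enable X11 forwarding in PuTTY; from a Linux box a plain ssh session with X forwarding is enough (a sketch, host name taken from the access note above):

ssh -X MYLOGIN@monika
xpbs &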

show job state

qstat JOB_NO
tracejob JOB_NO
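
For more detail, the standard TORQUE/PBS qstat options can be used (a sketch):

qstat -f JOB_NO    # full information about a single job
qstat -u MYLOGIN   # list only your own jobs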

kill my job, kill all my jobs

qdel JOB_NO
qdel all
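
If 'qdel all' is not accepted by your TORQUE version, the same effect can be obtained with qselect (a sketch):

qdel $(qselect -u MYLOGIN)   # delete every job owned by MYLOGIN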

submit job

qsub scriptName

interactive job on 4 processors on DB (you'll get a shell)

qsub -I -l nodes=1:ppn=4:batch
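
The same request with an explicit time limit, using the standard walltime resource (the 2-hour value is just an example):

qsub -I -l nodes=1:ppn=4:batch,walltime=2:00:00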

send email at job end

#PBS -m e
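
To also be notified when the job begins or aborts, and to set the destination address explicitly (standard PBS options; the address is only an example):

#PBS -m abe
#PBS -M MYLOGIN@prf.jcu.cz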

script for 1 processor job on DB

#PBS -N jobname # choose an alphanumeric job name starting with a letter
#PBS -m e # send mail at the end of the job
#PBS -l nodes=1:ppn=1:batch # 1 machine (nodes), 1 processor (ppn) with the batch property is requested
#PBS -j oe # merge standard output and error output into one file
cp -r /home/MYLOGIN/computeFolder /scratch/MYLOGIN/   # copy input data to local scratch
cd /scratch/MYLOGIN/computeFolder
./binary parameters   # run the job
cp -r /scratch/MYLOGIN/computeFolder/results /home/MYLOGIN/computeFolder/ || { echo "copying failed"; exit 1; }   # copy results back to /home
rm -r /scratch/MYLOGIN/computeFolder   # clean up local scratch
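
A typical submit-and-watch sequence for a script like the one above (the file name job_db.sh is only an example):

qsub job_db.sh     # prints the job ID
qstat -u MYLOGIN   # watch the job state
# with '#PBS -j oe', the combined output appears as jobname.oJOBID in the submission directory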

script for 4 processors IntelMPI job on MP

#PBS -N jobname
#PBS -m e
#PBS -l nodes=1:ppn=4:mbatch
#PBS -j oe
cat $PBS_NODEFILE   # record the assigned node(s) in the job output
rm -r /scratch/MYLOGIN/computeFolder   # remove leftovers from a previous run
mkdir -p /scratch/MYLOGIN/computeFolder
cp -r /home/MYLOGIN/computeFolder/* /scratch/MYLOGIN/computeFolder
cd /scratch/MYLOGIN/computeFolder
mpirun -r ssh -machinefile $PBS_NODEFILE -np 4 ./binary parameters
cp -r /scratch/MYLOGIN/computeFolder/* /home/MYLOGIN/computeFolder || { echo "copying failed"; exit 1; }
rm -r /scratch/MYLOGIN/computeFolder

script for plumed MPI GPU gromacs 5.1.4

source /opt/gromacs-5.1.4p/bin/bin/GMXRC
gmx_mpi mdrun -deffnm ...

script for MPI GPU gromacs 5.1.4 (default on mp8-12)

gmx_mpi mdrun -deffnm ...

script for MPI GPU gromacs 5.1.4 (default on mp8-12) limiting number of threads to 4

OMP_NUM_THREADS=4 gmx_mpi mdrun -deffnm ...
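
Putting it together, a PBS wrapper for a GPU gromacs 5.1.4 run could look as follows; this is only a sketch that reuses the scratch staging of the MP scripts above, and the node property and the file prefix 'topol' are examples that must be adjusted to your case:

#PBS -N jobname
#PBS -m e
#PBS -l nodes=1:ppn=4:mbatch   # pick a property matching one of mp8-mp12
#PBS -j oe
cat $PBS_NODEFILE
mkdir -p /scratch/MYLOGIN/computeFolder
cp -r /home/MYLOGIN/computeFolder/* /scratch/MYLOGIN/computeFolder
cd /scratch/MYLOGIN/computeFolder
OMP_NUM_THREADS=4 gmx_mpi mdrun -deffnm topol   # 'topol' stands for your .tpr file prefix
cp -r /scratch/MYLOGIN/computeFolder/* /home/MYLOGIN/computeFolder || { echo "copying failed"; exit 1; }
rm -r /scratch/MYLOGIN/computeFolder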

Usage:
replace jobname with an alphanumeric job name starting with a letter
replace MYLOGIN with your own login
replace computeFolder with the directory of your job, relative to your home directory
Example: to submit a job with data in /home/predota/fedl/pok, replace computeFolder with fedl/pok (a filled-in script is shown below)
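
With those substitutions (MYLOGIN=predota, computeFolder=fedl/pok; the job name and the binary name a.out are only examples), the single-processor DB script above becomes:

#PBS -N pok
#PBS -m e
#PBS -l nodes=1:ppn=1:batch
#PBS -j oe
mkdir -p /scratch/predota/fedl   # parent directory is needed for a nested compute folder
cp -r /home/predota/fedl/pok /scratch/predota/fedl/
cd /scratch/predota/fedl/pok
./a.out
cp -r /scratch/predota/fedl/pok/results /home/predota/fedl/pok/ || { echo "copying failed"; exit 1; }
rm -r /scratch/predota/fedl/pok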

The script will create the working directory /scratch/MYLOGIN/computeFolder (including parent directories if needed), copy all files from /home/MYLOGIN/computeFolder there, and run the job there.
The complete path to the binary (exe_file), or a path relative to /scratch/MYLOGIN/computeFolder, must be specified. Most likely you will either:
a) have the binary in your /home/MYLOGIN/computeFolder directory, so it gets copied to /scratch/MYLOGIN/computeFolder and ./a.out or similar will work, or
b) specify the binary by its absolute path in your home directory. This is completely O.K., as the binary is only read when it starts.

After the job completes, the data will be copied back to /home/MYLOGIN/computeFolder. Uncomment the last line if you want to delete the local working directory of the job.

If you keep the last line commented out, the content of the working directory will remain in /scratch/MYLOGIN/computeFolder on the machine that executed the job; you will find its name in the job output thanks to the 'cat $PBS_NODEFILE' command in the script. The line 'rm -r /scratch/MYLOGIN/computeFolder' just after 'cat $PBS_NODEFILE' avoids a possible conflict with older local files in this case; otherwise it is redundant.
The /scratch directory of each mp2-mp7 machine is 420 GB, so there is plenty of space.
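
To locate the executing node and check the free /scratch space there (a sketch; mp3 is an example node name, and passwordless ssh between the machines is assumed):

head -2 jobname.oJOBID     # the node name printed by 'cat $PBS_NODEFILE'
ssh mp3 df -h /scratch     # free space in /scratch on that node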

Running on mp1 is generally discouraged for these reasons:
1) as the front end and the repository of /home, it is the preferred machine for interactive work and (if necessary) short, moderately extensive file operations in /home.
2) the /scratch directory on mp1 exists, but its space is limited (<30 GB). Extensive file operations should be carried out in /scratch of the other machines, so that mp1 is not prevented from serving /home to all machines.
3) if all the other machines are busy and you want to load mp1, there is no need to use /scratch, as both /scratch and /home are physically on mp1. In that case, use a script like this:

#PBS -N jobname
#PBS -m e
#PBS -l nodes=1:ppn=4:mp1
#PBS -j oe
cat $PBS_NODEFILE
cd /home/MYLOGIN/computeFolder
mpirun -machinefile $PBS_NODEFILE -np 4 binary

script for 1 thread serial job on MP

#PBS -N jobname
#PBS -m e
#PBS -l nodes=1:ppn=1:w2
#PBS -j oe
cat $PBS_NODEFILE
rm -r /scratch/MYLOGIN/computeFolder
mkdir -p /scratch/MYLOGIN/computeFolder
cp -r /home/MYLOGIN/computeFolder/* /scratch/MYLOGIN/computeFolder
cd /scratch/MYLOGIN/computeFolder
./binary
cp -r /scratch/MYLOGIN/computeFolder/* /home/MYLOGIN/computeFolder
#rm -r /scratch/MYLOGIN/computeFolder

gromacs
The default version is 5.0.5; /etc/bash.bashrc contains

source /opt/gromacs-5.0.5/bin/bin/GMXRC

To use the double-precision or GPU version, use one of:
source /opt/gromacs-5.0.5/bin_double/bin/GMXRC
source /opt/gromacs-5.0.5/bin_gpu/bin/GMXRC
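
To confirm which build is active after sourcing a GMXRC (standard gromacs 5.x commands; the double-precision binary is typically named gmx_d):

which gmx       # the path shows which build directory is in use
gmx --version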

To use the MPI version, add the following before running gromacs:
export PATH=/opt/openmpi/1.8.5/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/1.8.5/lib:$LD_LIBRARY_PATH
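
A quick check that the intended OpenMPI is picked up first (a sketch):

which mpirun
mpirun --version    # should report Open MPI 1.8.5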

For the older 4.5.3 version, use one of
source /opt/gromacs-4.5.3/bin/bin/GMXRC.bash
source /opt/gromacs-4.5.3/bin/bin/GMXRC.csh