DOCS ganglia, nagios, compilers (AMD Open64, Intel), OpenMPI, IntelMPI, Torque PBS, Maui scheduler
DB APPS alpine ebt ebttool emacs gnuplot-4.2 gromacs-4.5.3 mathematica-7.0 nautilus octave R-2.10.1 xmgrace xppaut
ACCESS Xming (X server for GUI on Windows), PuTTY & PuTTYgen (terminal connection with keys)
MAIN CONTACT Milan Predota (predota AT prf.jcu.cz)
ADMINISTRATOR Pavel Fibich (pavel.fibich AT prf.jcu.cz)
The tested system is quartz surfaces with aqueous solution in between (16286 atoms, 3584 of them frozen).

| Machine | Throughput |
| --- | --- |
| mp2-mp6 | ~5 ns/day |
| mp8 (CPU only) | ~8 ns/day |
| mp9 (CPU only) | ~11 ns/day |
| mp10-mp12 (CPU only) | ~17 ns/day |
| mp8 (CPU and GPU) | ~23 ns/day |
| mp9 (CPU and GPU) | ~52 ns/day |
| mp10-mp12 (CPU and GPU) | ~97 ns/day |
show jobs in terminal, by GUI (db1 only)
show job state
kill my job, kill all my jobs
submit job
interactive job on 4 processors on DB (you'll get shell)
send email at job end
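For reference, the items above correspond to standard Torque/PBS commands. The following is a sketch; the job id 1234 and the email address are placeholders, and your site's queue names may differ:

```shell
qstat                                  # show jobs in terminal
qstat -u MYLOGIN                       # show only your jobs
qstat -f 1234                          # show full state of job 1234
qdel 1234                              # kill job 1234
qselect -u MYLOGIN | xargs qdel        # kill all your jobs
qsub job.sh                            # submit a job script
qsub -I -l nodes=1:ppn=4               # interactive job on 4 processors (you get a shell)
qsub -m ae -M you@example.com job.sh   # send email when the job aborts or ends
```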
script for 1 processor job on DB
script for 4 processors IntelMPI job on MP
script for plumed MPI GPU gromacs 5.1.4
script for MPI GPU gromacs 5.1.4 (default on mp8-12)
script for MPI GPU gromacs 5.1.4 (default on mp8-12) limiting number of threads to 4
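As a sketch of the thread-limited variant: with MPI GPU gromacs, the per-rank OpenMP thread count is typically capped via mdrun's -ntomp option (the binary name gmx_mpi, the rank count, and the input name here are assumptions; adapt them to the real script):

```shell
# 4 MPI ranks, each limited to 4 OpenMP threads (illustrative values)
export OMP_NUM_THREADS=4
mpirun -np 4 gmx_mpi mdrun -ntomp 4 -deffnm topol
```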
Usage:
replace jobname with an alphanumeric job name starting with a letter
replace MYLOGIN with your own login
replace computefolder with the directory of your job relative to your home directory
Example: to submit a job with data in /home/predota/fedl/pok, replace computefolder with fedl/pok
The script will create working directory /scratch/MYLOGIN/computefolder (including parent directories if needed), copy all files from /home/MYLOGIN/computefolder there, and run the job there.
The complete path to the binary (exe_file), or a path relative to /scratch/MYLOGIN/computefolder, must be specified. Most likely, you will either:
a) have the binary in your /home/MYLOGIN/computefolder directory, from where it will be copied to /scratch/MYLOGIN/computefolder, so a.out or the like will work
b) specify the binary by its absolute path in your home directory. This is completely O.K., as the binary is read only once, at start-up.
After the job completes, the data will be copied back to /home/MYLOGIN/computefolder. Uncomment the last line if you want to delete the job's local working directory.
If you keep the last line commented out, the contents of the working directory will remain in /scratch/MYLOGIN/computefolder on the machine that executed the job; you will find its name in the job output, thanks to the command 'cat $PBS_NODEFILE' in the script.
The line 'rm -r /scratch/MYLOGIN/computefolder' just after 'cat $PBS_NODEFILE' avoids possible conflicts with older local files in this case; otherwise it is redundant.
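Putting the steps above together, a serial job script following this workflow might look like the following sketch (exe_file is a placeholder, and the PBS header lines are illustrative; consult the actual scripts linked above):

```shell
#!/bin/bash
#PBS -N jobname
#PBS -l nodes=1:ppn=1
cat $PBS_NODEFILE                          # log which machine runs the job
rm -r /scratch/MYLOGIN/computefolder       # avoid conflicts with older local files
mkdir -p /scratch/MYLOGIN/computefolder    # create working dir, incl. parents
cp -r /home/MYLOGIN/computefolder/* /scratch/MYLOGIN/computefolder
cd /scratch/MYLOGIN/computefolder
./exe_file                                 # run the job in scratch
cp -r /scratch/MYLOGIN/computefolder/* /home/MYLOGIN/computefolder   # copy results back
# rm -r /scratch/MYLOGIN/computefolder     # uncomment to delete the local working dir
```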
The /scratch directory of each mp2-mp7 machine is 420 GB, so there is plenty of space.
Running on mp1 is partially discouraged for these reasons:
1) As the front end and the repository of /home, it is the preferred machine for interactive work and (if necessary) short, moderately extensive file operations in /home.
2) The /scratch directory on mp1 exists, but its space is limited (<30 GB). Extensive file operations should be carried out in /scratch of the other machines, so that mp1 can keep serving /home to all machines.
3) If all other machines are busy and you want to load mp1, there is no need to use /scratch, since both /scratch and /home are physically on mp1. In that case, use a
script for 1 thread serial job on MP
gromacs
Default version is 5.0.5, /etc/bash.bashrc contains