Dear all,
I suspect my PBS job script is not set up correctly. How can I optimise my equilibration rate when running across two nodes?
The specs of the cluster I am using are as follows:
Linux master.hpc 2.6.18-274.7.1.el5 #1 SMP Thu Oct 20 16:21:01 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
The CPU information of my cluster is:
Intel(R) Xeon(R) CPU X3220
===== Processor composition =====
Processors (CPUs)  : 4
Packages (sockets) : 1
Cores per package  : 4
Threads per core   : 1

===== Processor identification =====
Processor  Thread Id.  Core Id.  Package Id.
0          0           0         0
1          0           2         0
2          0           1         0
3          0           3         0

===== Placement on packages =====
Package Id.  Core Id.  Processors
0            0,2,1,3   0,1,2,3

===== Cache sharing =====
Cache  Size   Processors
L1     32 KB  no sharing
L2     4 MB   (0,2)(1,3)
I am concerned about the load: pestat shows 0.00* for my job when it runs on two nodes. When running on one node, the load shows 0.99*, which I think means even the single-node run is not using the node at full capacity?
I am aware that the current installation was compiled with SSE2 acceleration, while the log file generated by the NVT run suggested using SSE4.1.
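If recompiling is an option, the SIMD level is chosen at configure time. A minimal sketch, assuming a CMake-based GROMACS build (the flag is GMX_CPU_ACCELERATION in the 4.6 series and GMX_SIMD in 5.0 and later; the build directory is a placeholder):

```shell
# from inside the GROMACS build directory (placeholder path)
cd gromacs-build

# GROMACS 5.0 and later:
cmake .. -DGMX_SIMD=SSE4.1

# GROMACS 4.6.x used a different flag name:
# cmake .. -DGMX_CPU_ACCELERATION=SSE4.1

make -j4 && make install
```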
Also, with regard to the domain decomposition flags in the PBS run script: should the performance be the same
with or without domain decomposition being specified explicitly, given that mdrun guesses a decomposition for the job otherwise?
This is the job script that I use:
#!/bin/bash
#PBS -l nodes=2:west:ppn=12
#PBS -l ncpus=24
#PBS -l walltime=10:00:00
#PBS -N NVT
NPROCS=`wc -l < $PBS_NODEFILE`
hostname
date
# path to the MPI-enabled GROMACS binary (adjust to your installation)
gromacs="/usr/local/gromacs/bin/gmx_mpi"
cd $PBS_O_WORKDIR
cp $PBS_NODEFILE nodefile
mpirun -np $NPROCS -machinefile $PBS_NODEFILE $gromacs mdrun -s topol.tpr -cpi state.cpt
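One quick way to check whether the 0.00 load means the ranks never reached the second node is to launch a trivial command through the same MPI machinery; a sketch, assuming an OpenMPI-style mpirun inside the same PBS environment:

```shell
# Each requested rank prints its hostname; if only one node's name
# appears in the output, MPI is not spanning both allocated nodes.
mpirun -np $NPROCS -machinefile $PBS_NODEFILE hostname | sort | uniq -c
```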
-- Gromacs Users mailing list * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!