Dear all, 


I think my PBS job script is not set up correctly.

I would like to know how to improve the equilibration rate when running across two nodes.



The specs of the cluster I am using are as follows:

Linux master.hpc 2.6.18-274.7.1.el5 #1 SMP Thu Oct 20 16:21:01 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux



The CPU information of my cluster is:


Intel(R) Xeon(R) CPU X3220

=====  Processor composition  =====
Processors(CPUs)  : 4
Packages(sockets) : 1
Cores per package : 4
Threads per core  : 1
=====  Processor identification  =====
Processor       Thread Id.      Core Id.        Package Id.
0               0               0               0
1               0               2               0
2               0               1               0
3               0               3               0
=====  Placement on packages  =====
Package Id.     Core Id.        Processors
0               0,2,1,3         0,1,2,3
=====  Cache sharing  =====
Cache   Size            Processors
L1      32  KB          no sharing
L2      4   MB          (0,2)(1,3)



My pestat output for job IDs 29515 (38400 atoms) and 29513 (102576 atoms):

  node    state  load    pmem ncpu   mem   resi usrs tasks  jobids/users
  x001      excl   11.99   32114  12  35934    833  16/2   12    29503 DBS
  x002      excl   12.03   32114  12  35934   2303  3/2   12    29505 HYV
  x003      excl   0.00*  32114  12  35934    219  0/0   12    29515 Kester
  x004      excl   11.87   32114  12  35934    727  16/1   12    29506 NJP
  x005      excl   0.00*  32114  12  35934    188  0/0   12    29513 Kester
  x006      excl   11.99   64386  12  68206   2309  6/2   12    29508 HYV
  x007      excl   12.00   64383  12  68203   1125  8/2   12    29418 DBS
  x008      excl   0.99*  32111  12  35931    479  7/2   12    29513 Kester
  x009      excl   0.99*  32111  12  35931    286  1/1   12    29515 Kester

I am concerned about the "load" column, as pestat shows 0.00* for my jobs running across two nodes. When running on one node, the load showed 0.99*, which I take to mean the job is not running at full capacity?


I am aware that the current installation was compiled for SSE2, while the log file generated by the NVT run suggested using SSE4.1.
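(If a rebuild is the fix, I assume it would be something along these lines, though I have not tried it and am not certain the option names apply to my GROMACS version:

  cmake .. -DGMX_MPI=ON -DGMX_SIMD=SSE4.1
  make && make install
)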

Also, with regard to the domain decomposition flag in the mdrun command: should the performance be the same with or without the decomposition being specified explicitly, given that mdrun will estimate one for the job on its own? An example of what I mean is below.
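My (possibly wrong) understanding is that an explicit decomposition over 24 ranks could be requested with something like

  gmx_mpi mdrun -s topol.tpr -cpi state.cpt -dd 4 3 2

where the three numbers give the domain grid, as opposed to omitting -dd and letting mdrun choose. Please correct me if that is not the right flag here.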



This is the job script that I use:


#!/bin/bash

#PBS -l nodes=2:west:ppn=12

#PBS -l ncpus=24

#PBS -l walltime=10:00:00

#PBS -N NVT


NPROCS=`wc -l < $PBS_NODEFILE`


hostname

date


gromacs="/usr/local/gromacs/share/gromacs/top/gmx_mpi mdrun -s topol.tpr -cpi state.cpt"


cd $PBS_O_WORKDIR

cp $PBS_NODEFILE nodefile


gmx_mpi mdrun -s topol.tpr -cpi state.cpt
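Should I instead be launching mdrun through the MPI launcher? My guess, assuming OpenMPI and 12 cores per node (24 MPI ranks in total), would be something like

  mpirun -np $NPROCS -machinefile $PBS_NODEFILE gmx_mpi mdrun -s topol.tpr -cpi state.cpt

but I have not been able to confirm whether that is the right way to do it on this cluster.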



I have been unable to get help from my research group, as I am the only one who uses GROMACS, and I am new to this too.
Searching Google and the GROMACS mailing list brought little help, as there are so many different types of machines.
Any advice is greatly appreciated; thanks in advance!


Regards,
Kester

