Re: [gmx-users] Performance of 4.6.1 vs. 4.5.5

2013-03-09 Thread Mark Abraham
On Sat, Mar 9, 2013 at 6:53 AM, Christopher Neale 
chris.ne...@mail.utoronto.ca wrote:

 Dear users:

 I am seeing a 140% performance boost when moving from gromacs 4.5.5 to
 4.6.1 when I run a simulation on a single node. However, I am only seeing
 a 110% performance boost when running on multiple nodes. Does anyone else
 see this? Note that I am not using the verlet cutoff scheme.


What's the processor and network for those runs?

I'm not sure that this is a problem, but I was surprised to see how big the
 difference was between 1 and 2 nodes, while for 2-10 nodes I saw a reliable
 10% performance boost.


Not sure what you mean by a "reliable 10% performance boost". Reporting
actual ns/day rates would be clearer. Is a "140% performance boost" a
factor of 1.4 more ns/day or a factor of 2.4 more ns/day?

Please note that, while I compiled the fftw (with sse2) and gromacs 4.6.1,
 I did not compile the 4.5.5 version that I am comparing to (or its fftw) so
 the difference might be in compilation options.


Indeed.


 Still, I wonder why the benefits of 4.6.1 are so fantastic on 1 node but
 fall off to good-but-not-amazing on > 1 node.


Finding the answer would start by examining the changes in the timing
breakdowns in your .log files. Switching from using in-memory MPI to
network MPI is a significant cost on busy/weak networks.
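
For example, something like this pulls the cycle/time accounting tables out
for a side-by-side comparison (a sketch; the heading text and file names are
from memory, so adjust them to what your logs actually contain):

  # print the ~25 lines following the accounting header in each log
  grep -A 25 "C Y C L E" md_455_2nodes.log
  grep -A 25 "C Y C L E" md_461_2nodes.log

Lining those two tables up next to each other usually makes it obvious which
part of the calculation stopped scaling.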

The system is about 43K atoms. I have not tested this with other systems or
 cutoffs.

 My mdp file follows. Thank you for any advice.


Your system is probably not calculating energies very often. 4.6 uses
force-only kernels if that's all you need from it.
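
If you do need energies written frequently for analysis, you can still control
how often they are actually evaluated. A minimal sketch of the relevant 4.6
.mdp settings (values here are only illustrative; check the manual for the
defaults and for how grompp reconciles them):

  nstcalcenergy  = 100    ; how often energies are computed
  nstenergy      = 1000   ; how often they are written to the .edr file

Keeping nstcalcenergy large lets mdrun stay in the cheaper force-only kernels
for most steps.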

Mark

Chris.

 constraints = all-bonds
 lincs-iter =  1
 lincs-order =  6
 constraint_algorithm =  lincs
 integrator = sd
 dt = 0.002
 tinit = 0
 nsteps = 25
 nstcomm = 1
 nstxout = 25
 nstvout = 25
 nstfout = 25
 nstxtcout = 5
 nstenergy = 5
 nstlist = 10
 nstlog=0
 ns_type = grid
 vdwtype = switch
 rlist = 1.0
 rlistlong = 1.6
 rvdw = 1.5
 rvdw-switch = 1.4
 rcoulomb = 1.0
 coulombtype = PME
 ewald-rtol = 1e-5
 optimize_fft = yes
 fourierspacing = 0.12
 fourier_nx = 0
 fourier_ny = 0
 fourier_nz = 0
 pme_order = 4
 tc_grps =  System
 tau_t   =  1.0
 ld_seed =  -1
 ref_t = 310
 gen_temp = 310
 gen_vel = yes
 unconstrained_start = no
 gen_seed = -1
 Pcoupl = berendsen
 pcoupltype = semiisotropic
 tau_p = 4 4
 compressibility = 4.5e-5 4.5e-5
 ref_p = 1.0 1.0
 dispcorr = EnerPres



Re: [gmx-users] Gromacs with Intel Xeon Phi coprocessors ?

2013-03-09 Thread Mark Abraham
No idea. It was not a development target for 4.6, and there are no explicit
plans for considering Xeon Phi at this time. It would be interesting to
hear whether people can get benefit from them from our existing OpenMP
parallelism support for (particularly) PME + Verlet kernels.
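
For anyone who wants to try, the starting point would be the normal OpenMP
controls in 4.6; a rough sketch (whether this builds and runs efficiently on
a Phi, natively or offloaded, is exactly the open question):

  ; in the .mdp, the Verlet scheme is required for OpenMP-parallel non-bondeds
  cutoff-scheme = Verlet

  # then run few ranks with many OpenMP threads per rank, e.g. with thread-MPI
  mdrun -ntmpi 1 -ntomp 16 -deffnm topol

With a real MPI build you would set the thread count via -ntomp or
OMP_NUM_THREADS instead of -ntmpi.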

Roughly speaking, 4.6 delivered a lot of performance improvements within
(and in spite of) an ageing C code base. The plans for 5.0 target a very
limited number of new features, chief of which is a transition to using
C++. We are targeting the kind of massive utilisation of threads that seems
to us likely to feature in the future of HPC. At the moment we are
considering Intel's TBB as a test bed, though we may end up rolling our own
threading code instead. Either way, a straightforward port of GROMACS that
works well on Xeon Phi seems fairly likely to me. Just not this year ;-)
(Unless anyone has hardware and developer time to donate!)

Mark

On Sat, Mar 9, 2013 at 2:43 AM, Christopher Neale 
chris.ne...@mail.utoronto.ca wrote:

 Dear users:

 Does anybody have any experience with GROMACS on a cluster in which each
 node is composed of 1 or 2 x86 processors plus an Intel Xeon Phi
 coprocessor? Can GROMACS make use of the Xeon Phi coprocessor? If not, does
 anybody know whether that is in the pipeline?

 Thank you,
 Chris.


Re: [gmx-users] Performance of 4.6.1 vs. 4.5.5

2013-03-09 Thread Szilárd Páll
As Mark said, we need concrete details to answer the question:
- log files (all four of them: 1/2 nodes, 4.5/4.6)
- hardware (CPUs, network)
- compilers
The 4.6 log files already contain most of the second and third items, except
for the network.

Note that you can compare the performance summary table's entries one by
one and see what has changed.

I suspect that the answer is simply load imbalance, but we'll have to see
the numbers to know.
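
In the meantime, a quick thing to check yourself: the domain decomposition
statistics near the end of the 4.6 log report the imbalance directly.
Something along these lines (the exact wording is from memory, so adjust the
patterns if they don't match):

  grep "Average load imbalance" md_2nodes.log
  grep "Average PME mesh/force load" md_2nodes.log

An imbalance well above ~10%, or a PME/PP load ratio far from 1, would point
at where the time is going.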


--
Szilárd


On Sat, Mar 9, 2013 at 3:00 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 [full quote of Mark's reply and the original question trimmed; see the
 earlier messages in this thread]


[gmx-users] query regarding mk_angndx

2013-03-09 Thread Kavyashree M
-- Forwarded message --
From: Kavyashree M hmkv...@gmail.com
Date: Fri, Mar 8, 2013 at 10:45 PM
Subject: query regarding mk_angndx
To: Discussion list for GROMACS users gmx-users@gromacs.org


Dear users,

I used mk_angndx to create an index file with dihedral angles.
Input was:
mk_angndx  -s  a.tpr  -n  angle.ndx   -type   dihedral
The output angle.ndx reads like this:
[ Phi=180.0_2_43.93 ]
  5 20 18 19   22 37 35 36   27 32 30 31
 39 59 57 58   61 76 74 75   66 71 69 70
...

According to my understanding, each set of four numbers indicates the four
atoms defining a particular dihedral angle.
But when I checked the PDB file for these atoms:
ATOM  5  CA  MET A   1 111.430  40.170 113.130  1.00  0.00
ATOM 18  C   MET A   1 112.060  41.020 112.030  1.00  0.00
ATOM 19  O   MET A   1 111.910  42.240 112.010  1.00  0.00
ATOM 20  N   GLN A   2 112.940  40.430 111.220  1.00  0.00

I could not make out how this defines phi.
Could you kindly clarify my confusion?

Thank you
kavya


[gmx-users] Re: Calculation of coordination number for lithium ion

2013-03-09 Thread Justin Lemkul


Please keep all GROMACS-related questions on the gmx-users mailing list.  I am
not a private tutor.


On 3/8/13 7:20 PM, qzg00...@nifty.com wrote:

Dear Dr. Justin

Using g_rdf, I want to calculate the coordination number of water molecules
around Li+ in a lithium salt aqueous solution. In order to calculate the
radial distribution function, two groups should be selected as 'Reference'
and '1 group'. For the Li+, which groups should I select among 0 (System) to
9 (SideChain-H)? I think that I should select System for Reference and again
System for the second group. Is this correct?



A system-system RDF is likely useless.  If you want the RDF of water around 
lithium, it should be easy to select lithium as the reference group and water 
(or even better, OW atoms in an index group of your creation) as the group 
around that central group.  Selecting protein-related terms (i.e. within the 
first 10 default groups) makes no sense in light of what you're trying to measure.
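
A minimal sketch of what that would look like on the command line (the group
and file names here are only placeholders; your lithium group may be called
something else, e.g. LI, depending on the force field):

  # build an index group containing only the water oxygens
  echo -e "a OW\nq" | make_ndx -f topol.tpr -o index.ndx

  # RDF of water oxygens around Li+: pick the lithium group as 'Reference'
  # and the OW group as the second group when prompted
  g_rdf -f traj.xtc -s topol.tpr -n index.ndx -o rdf_Li_OW.xvg -cn rdf_cn.xvg

The -cn output gives the cumulative number of OW atoms as a function of
distance; reading it off at the first minimum of the RDF gives the
coordination number.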


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Re: RDF for INVACCUO simulation

2013-03-09 Thread Justin Lemkul



On 3/9/13 12:34 AM, Keerthana S.P Periasamy wrote:


Dear all,

 For the RDF calculation in the in vacuo simulation I have given the
command as follows:
   ./g_rdf_d -f outputdsvaccum_md.trr -s outputdsvaccum_md.tpr -n index.ndx
   -nopbc -o rdfss.xvg

  I am getting the graph, which I have attached to this mail. Can you suggest
whether my command and the graph are correct, and whether I am proceeding in
the right way?



Attachments are not allowed to the list, but you can provide links to where 
files can be downloaded.  That said, no one is going to be able to tell you 
anything about the sensibility of your data from such a (nonexistent) 
description.  Nor is anyone likely to invest time in doing your homework for you ;)


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] error in gromacs 4.0.7-Source code file: domdec.c, line: 5888

2013-03-09 Thread Hamid Mosaddeghi
Dear users

I am using GROMACS for a system containing CNT, water, ions and protein
(400,000 atoms). grompp ran without error.

After running mdrun with 16 nodes on a cluster, I get this error:

Reading file nvt.tpr, VERSION 4.0.7 (single precision)
Loaded with Money


NOTE: Periodic molecules: can not easily determine the required minimum
bonded cut-off, using half the non-bonded cut-off


Will use 15 particle-particle and 1 PME only nodes
This is a guess, check the performance at the end of the log file

---
Program mdrun, VERSION 4.0.7
Source code file: domdec.c, line: 5888

Fatal error:
There is no domain decomposition for 15 nodes that is compatible with the
given box and a minimum cell size of 3.75 nm
Change the number of nodes or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition
---

Good Music Saves your Soul (Lemmy)

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 16

---

My box size is 14*14*18.

When I use mdrun -rdd 1, GROMACS runs without error. Is that correct or not?

I get this error in the NVT run (the EM run finished without error).

Best Regards.

Hamid Mosaddeghi



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/error-in-gromacs-4-0-7-Source-code-file-domdec-c-line-5888-tp5006245.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


[gmx-users] Re: gromacs VERSION 4.0.7-There is no domain decomposition.....

2013-03-09 Thread Christoph Junghans
Dear Hamid,

please post this kind of question on the users list, not the developers list.

2013/3/9 Hamid Mosaddeghi hamid5920...@yahoo.com:
 Dear users

 I am using GROMACS for a system containing CNT, water, ions and protein
 (400,000 atoms). grompp ran without error.

 After running mdrun with 16 nodes on a cluster, I get this error:

 Reading file nvt.tpr, VERSION 4.0.7 (single precision)
 Loaded with Money


 NOTE: Periodic molecules: can not easily determine the required minimum
 bonded cut-off, using half the non-bonded cut-off


 Will use 15 particle-particle and 1 PME only nodes
 This is a guess, check the performance at the end of the log file

 ---
 Program mdrun, VERSION 4.0.7
 Source code file: domdec.c, line: 5888

 Fatal error:
 There is no domain decomposition for 15 nodes that is compatible with the
 given box and a minimum cell size of 3.75 nm
 Change the number of nodes or mdrun option -rdd or -dds
 Look in the log file for details on the domain decomposition
 ---

 Good Music Saves your Soul (Lemmy)

 Error on node 0, will try to stop all the nodes
 Halting parallel program mdrun on CPU 0 out of 16

 ---

 my box size is 14*14*18.

 When I use mdrun -rdd 1, GROMACS runs without error. Is that correct or not?
If you are sure that the maximum distance of your bonded interactions
is 1 nm, you can do that.

It seems like mdrun was not able to find a decomposition
automatically; try giving one by hand:
$ mdrun -dd 2 2 4

By the way, your GROMACS is two major versions behind; it might be a good idea to update.

Christoph

 I get this error in the NVT run (the EM run finished without error).

 Best Regards.

 Hamid Mosaddeghi



 --
 View this message in context: 
 http://gromacs.5086.n6.nabble.com/gromacs-VERSION-4-0-7-There-is-no-domain-decomposition-tp5006246.html
 Sent from the GROMACS Developers Forum mailing list archive at Nabble.com.



--
Christoph Junghans
Web: http://www.compphys.de