[gmx-users] Question about scaling

2012-11-12 Thread Thomas Schlesier

Dear all,
I did some scaling tests for a cluster and I am a little bit clueless
about the results.

So first the setup:

Cluster:
Saxonid 6100, Opteron 6272 16C 2.100GHz, Infiniband QDR
GROMACS version: 4.0.7 and 4.5.5
Compiler:   GCC 4.7.0
MPI: Intel MPI 4.0.3.008
FFT-library: ACML 5.1.0 fma4

System:
895 spce water molecules
Simulation time: 750 ps (0.002 ps timestep)
Cut-off: 1.0 nm
but with long-range corrections (DispCorr = EnerPres; PME with standard
settings, but in each case no extra CPU dedicated solely to PME)

V-rescale thermostat and Parrinello-Rahman barostat
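
For concreteness, the run settings above roughly correspond to an .mdp
fragment like the following (only a sketch; nsteps follows from
750 ps / 0.002 ps, and everything not stated above, e.g. coupling groups,
reference temperature/pressure and output frequencies, is omitted):

    integrator  = md
    dt          = 0.002              ; ps, i.e. a 2 fs timestep
    nsteps      = 375000             ; 750 ps in total
    rlist       = 1.0                ; nm
    rcoulomb    = 1.0
    rvdw        = 1.0
    coulombtype = PME                ; otherwise standard PME settings
    DispCorr    = EnerPres           ; long-range dispersion correction
    tcoupl      = V-rescale
    pcoupl      = Parrinello-Rahman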

I get the following timings (in seconds), where each value is normalized
to the time that would be needed on 1 CPU (so if a job on 2 CPUs took X s,
the reported time is 2 * X s).

These timings were taken from the *.log file, at the end of the
'Real cycle and time accounting' section.

Timings:
gmx-version    1 CPU    2 CPUs    4 CPUs
4.0.7           4223      3384      3540
4.5.5           3780      3255      2878
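
To make the normalization explicit: assuming each table entry is the
wall-clock time multiplied by the number of CPUs, the parallel efficiency
E(n) = T(1) / (n * T(n)) can be back-calculated with a few lines of
Python (using the 4.0.7 row as an example):

    # Back-of-the-envelope check of the normalized timings above.
    # Assumption: each table entry equals wall_time * n_cpus.

    def efficiency(t1_wall, tn_wall, n):
        """Parallel efficiency E(n) = T(1) / (n * T(n))."""
        return t1_wall / (n * tn_wall)

    normalized = {1: 4223, 2: 3384, 4: 3540}          # 4.0.7 row, seconds
    wall = {n: t / n for n, t in normalized.items()}  # wall-clock seconds

    for n in (2, 4):
        print(n, "CPUs: wall %.0f s, efficiency %.2f"
              % (wall[n], efficiency(wall[1], wall[n], n)))
    # Both efficiencies come out above 1, i.e. the 1-CPU run is
    # anomalously slow - which is exactly the oddity described below.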

I'm a little bit clueless about the results. I always thought that if I
have a non-interacting system and double the number of CPUs, I would get
a simulation which takes only half the time (so the normalized times as
defined above would be equal). If the system does have interactions, I
would lose some performance due to communication. Due to node imbalance
there could be a further loss of performance.


Keeping this in mind, I can only explain the timings for version 4.0.7
going from 2 CPUs to 4 CPUs (2 CPUs a little bit faster, since going to
4 CPUs leads to more communication and hence a loss of performance).


All the other timings, especially that the 1-CPU run takes longer in
each case than the parallel runs, I do not understand.
Probably the system is too small and/or the simulation time is too short
for a scaling test. But I would assume that the amount of time needed to
set up the simulation would be equal for all three cases of one
GROMACS version.
The only other explanation that comes to my mind is that something went
wrong during the installation of the programs...


Please, can somebody enlighten me?

Greetings
Thomas

Re: [gmx-users] Question about scaling

2012-11-12 Thread Carsten Kutzner
Hi Thomas,

On Nov 12, 2012, at 5:18 PM, Thomas Schlesier schl...@uni-mainz.de wrote:

> Dear all,
> I did some scaling tests for a cluster and I am a little bit clueless
> about the results.
> So first the setup:
>
> Cluster:
> Saxonid 6100, Opteron 6272 16C 2.100GHz, Infiniband QDR
> GROMACS version: 4.0.7 and 4.5.5
> Compiler: GCC 4.7.0
> MPI: Intel MPI 4.0.3.008
> FFT-library: ACML 5.1.0 fma4
>
> System:
> 895 spce water molecules
This is a somewhat small system, I would say.

> Simulation time: 750 ps (0.002 ps timestep)
> Cut-off: 1.0 nm
> but with long-range corrections (DispCorr = EnerPres; PME with standard
> settings, but in each case no extra CPU dedicated solely to PME)
> V-rescale thermostat and Parrinello-Rahman barostat
 
> I get the following timings (in seconds), where each value is normalized
> to the time that would be needed on 1 CPU (so if a job on 2 CPUs took X s,
> the reported time is 2 * X s).
> These timings were taken from the *.log file, at the end of the
> 'Real cycle and time accounting' section.
>
> Timings:
> gmx-version    1 CPU    2 CPUs    4 CPUs
> 4.0.7           4223      3384      3540
> 4.5.5           3780      3255      2878
Do you mean CPUs or CPU cores? Are you using the IB network or are you running 
single-node?

 
> I'm a little bit clueless about the results. I always thought that if I
> have a non-interacting system and double the number of CPUs, I
You do use PME, which means a global interaction of all charges.

> would get a simulation which takes only half the time (so the normalized
> times as defined above would be equal). If the system does have
> interactions, I would lose some performance due to communication. Due to
> node imbalance there could be a further loss of performance.
 
> Keeping this in mind, I can only explain the timings for version 4.0.7
> going from 2 CPUs to 4 CPUs (2 CPUs a little bit faster, since going to
> 4 CPUs leads to more communication and hence a loss of performance).
 
> All the other timings, especially that the 1-CPU run takes longer in
> each case than the parallel runs, I do not understand.
> Probably the system is too small and/or the simulation time is too short
> for a scaling test. But I would assume that the amount of time needed to
> set up the simulation would be equal for all three cases of one
> GROMACS version.
> The only other explanation that comes to my mind is that something went
> wrong during the installation of the programs…
You might want to take a closer look at the timings in the md.log output
files; this will give you a clue where the bottleneck is, and also tell
you about the communication-to-computation ratio.
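
For comparing several runs, a small Python helper can pull that section
out of each md.log (just a sketch; it assumes the section header contains
the spaced-out string 'R E A L   C Y C L E' and that the breakdown runs
to the end of the file):

    # Print the 'Real cycle and time accounting' breakdown of md.log files.
    # Assumption: the header line contains "R E A L   C Y C L E" and the
    # section extends to the end of the file.
    import sys

    def print_time_accounting(logfile):
        in_section = False
        with open(logfile) as fh:
            for line in fh:
                if "R E A L   C Y C L E" in line:
                    in_section = True
                if in_section:
                    print(line.rstrip())

    if __name__ == "__main__":
        for log in sys.argv[1:]:   # e.g. python timings.py run1/md.log run2/md.log
            print("=== " + log + " ===")
            print_time_accounting(log)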

Best,
  Carsten


 
> Please, can somebody enlighten me?
>
> Greetings
> Thomas


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner



[gmx-users] Question about scaling charges

2006-08-31 Thread Arthur Roberts
Hi, all,

I was talking to a friend of mine who works in an MD lab. He said that
the charges from QM/MM or semiempirical calculations need to be scaled to
the charges in the force field. For example, a heme will have specific
charges in the force field, while hemes that represent different bound
states may have a different charge distribution, depending on which
method is used to calculate it. Let us say that the force field charge
is +1 for simplicity and the charge is +2 when using other methods. The
scaling would suggest that you reduce the charge of the heme by one half,
so that it is compatible with the force field. However, this doesn't make
sense to me, since partial charges should always be related to an
electron and therefore, in principle, should never have to be scaled.
I would appreciate anyone's input on this.
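
If I understand it correctly, the scaling my friend describes is just a
linear rescaling so that the summed partial charges match the total
charge used in the force field; in Python, with made-up numbers purely
for illustration:

    # Sketch of the linear charge scaling described above
    # (all numbers are illustrative, not real heme charges).
    ff_total_charge = 1.0            # force-field total charge (the +1 above)
    qm_charges = [0.8, 0.7, 0.5]     # hypothetical QM partial charges, sum = +2

    scale = ff_total_charge / sum(qm_charges)          # 1.0 / 2.0 = 0.5
    scaled_charges = [q * scale for q in qm_charges]   # each charge halved

    print(scale, sum(scaled_charges))  # 0.5  1.0 (total now matches the FF)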

Best wishes,
Art