[gmx-users] Simulations on GPU

2010-10-11 Thread Igor Leontyev
   0.0

  Energies (kJ/mol)
    Potential    Kinetic En.   Total Energy    Temperature   Constr. rmsd
 -5.53230e+05    1.12594e+05   -4.40636e+05    2.73506e+02    1.03777e-06

Writing checkpoint, step 1 at Fri Oct  8 17:06:02 2010


<==  ###  ==>
<  A V E R A G E S  >
<==  ###  ==>

Statistics over 11 steps using 11 frames

  Energies (kJ/mol)
    Potential    Kinetic En.   Total Energy    Temperature   Constr. rmsd
 -5.49797e+05    1.14636e+05   -4.35160e+05    2.78468e+02        0.0e+00

          Box-X          Box-Y          Box-Z
    1.73572e+12    1.19301e-40    2.31720e+11

  Total Virial (kJ/mol)
   0.0e+000.0e+000.0e+00
   0.0e+000.0e+000.0e+00
   0.0e+000.0e+000.0e+00

  Pressure (bar)
   0.0e+000.0e+000.0e+00
   0.0e+000.0e+000.0e+00
   0.0e+000.0e+000.0e+00

  Total Dipole (D)
   0.0e+000.0e+000.0e+00

                Epot (kJ/mol)      Coul-SR        LJ-SR      Coul-14        LJ-14
 glu242side-glu242side             0.0e+00      0.0e+00      0.0e+00      0.0e+00
 glu242side-rest                   0.0e+00      0.0e+00      0.0e+00      0.0e+00
 rest-rest                         0.0e+00      0.0e+00      0.0e+00      0.0e+00

Post-simulation ~15s memtest in progress...
Memory test completed without errors.

M E G A - F L O P S   A C C O U N T I N G

  RF=Reaction-Field  FE=Free Energy  SCFE=Soft-Core/Free Energy
  T=Tabulated  W3=SPC/TIP3p  W4=TIP4p (single or pairs)
  NF=No Forces

 Computing:                        M-Number        M-Flops   % Flops
---------------------------------------------------------------------
 Lincs                             0.011934          0.716       8.0
 Lincs-Mat                         0.106200          0.425       4.7
 Constraint-V                      0.046449          0.372       4.2
 Settle                            0.023010          7.432      83.1
---------------------------------------------------------------------
 Total                                               8.945     100.0
---------------------------------------------------------------------


R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing:         Nodes   Number     G-Cycles    Seconds      %
---------------------------------------------------------------------
 Write traj.            1       11        3.033        1.1     0.2
 Rest                   1              1978.521      716.9    99.8
---------------------------------------------------------------------
 Total                  1              1981.554      718.0   100.0
---------------------------------------------------------------------

OpenMM run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:    717.970    717.970    100.0
                       11:57
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:      0.000      0.012      1.204    19.942
Finished mdrun on node 0 Fri Oct  8 17:06:02 2010


Igor Leontyev wrote:

Finally, I compiled and ran simulations with the GPU version of gromacs-4.5.1.
There were several issues:

1) The precompiled OpenMM 2.0 libraries and headers must be downloaded (which
requires registration on their web page) and installed; otherwise cmake
doesn't find some source files.

2) cmake should be called outside the original source directory, with the
path of the source directory given as an argument.

3) To run the resulting mdrun-gpu binary the CUDA dev driver must be
installed; otherwise the program does not find 'CUDA'. This step turned out to
be the most problematic for me. According to the OpenMM manual the driver must
be installed with the X Window service turned off, which can be done by the
command "init 3". In Ubuntu this command has no effect; switching the
graphical interface off/on is instead done by

"sudo service gdm stop/start"

It turned out that in Ubuntu 10.04 the CUDA driver installation script does
not work properly even with gdm turned off. This issue and its solution are
described at http://ubuntuforums.org/showthread.php?t=1467074

Thank you for your comments,

Igor



Szilárd Páll wrote:
Dear Igor,

Your output looks _very_ weird; it seems as if CMake internal
variable(s) were not initialized, and I have no clue how that could have
happened - the build generator works just fine for me. The only thing
I can think of is that maybe your CMakeCache is corrupted.

Could you please rerun cmake in a _clean_ build directory? Also, are
you able to run cmake for a CPU build (no -D options)?

--
Szilárd


Szilárd wrote:


The beta versions are all outdated, could you please use the latest
source distribution (4.5.1) instead (or git from the
release-4-5-patches branch)?

[gmx-users] Gromacs benchmarking results

2010-10-11 Thread Igor Leontyev

Vivek,
thank you for the interesting results. What is the configuration of your
cluster: CPU type, number of cores per node, interconnect between nodes? It might be
informative to present your results as computational efficiency, i.e.
Performance/(number of cores). It would also be interesting to see the
results for the latest version of GROMACS.
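For illustration, a minimal Python sketch of that normalization, using the ns/day
figures Vivek reports further down in this digest; the numbers are copied from his
table, and this is plain arithmetic rather than any GROMACS utility:

# Per-core efficiency from the ns/day figures quoted further down in this
# digest (gromacs-4.0.5, d.lzm benchmark with pme.mdp, best npme per run).
benchmarks = {  # cores: ns/day
    1: 2.898, 24: 44.698, 48: 75.262, 72: 86.835, 84: 102.249,
    96: 97.079, 120: 97.738, 144: 93.204, 156: 89.534,
}

for cores in sorted(benchmarks):
    ns_per_day = benchmarks[cores]
    per_core = ns_per_day / cores                # ns/day per core
    parallel_eff = per_core / benchmarks[1]      # 1.0 would be ideal scaling
    print(f"{cores:4d} cores  {ns_per_day:8.3f} ns/day  "
          f"{per_core:6.3f} ns/day/core  parallel eff. {parallel_eff:4.2f}")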


Suppose we are considering buying a cluster. What is the optimal 
configuration for the maximum efficiency per dollar spent? More 
particularly, is a cluster of nodes with 2 x 12-core Opteron 2.2 GHz = 24 
cores at 2.2 GHz more efficient than a cluster of nodes with 2 x 6-core Xeon 
2.66 GHz = 12 cores at 2.66 GHz for the same price?

Thanks,
Igor


On 2010-10-11 12.20, Mark Abraham wrote:
g_tune_pme in 4.5.2 is your friend here. Otherwise, stay at 48 or below,
probably.
Mark


David van der Spoel wrote:
And try it with 4.5! Much better performance.



 > Hi all,
 > I have some gromacs benchmarking results to share.
 > I have tried the lysozyme example distributed in the benchmarking
 > suite "gmxbench-3.0.tar.gz". I have used the case provided in d.lzm/
 > with the pme.mdp parameter file.
 >
 > Following is the performance (in ns/day) I got with gromacs-4.0.5 on
 > my cluster with different numbers of processors. I tried different
 > npme values for each run and report here the best one.
 >
 >  #processors   #npme   Performance (ns/day)
 >            1       -                  2.898
 >           24       8                 44.698
 >           48      16                 75.262
 >           72      24                 86.835
 >           84      20                102.249
 >           96      16                 97.079
 >          120      40                 97.738
 >          144      64                 93.204
 >          156      76                 89.534
 >
 > I am looking for some more benchmarks on the same example set, to
 > compare these results. Please share if somebody has tried it.
 >
 > Expecting some helpful comments and suggestions to improve the
 > performance further.
 >
 > Thanks,
 > Vivek Sharma





[gmx-users] Self Diffusion constant

2010-10-11 Thread Igor Leontyev
It is an interesting question. Would it not be worthwhile to have a separate forum 
board for such methodological issues?
Some time ago I raised a similar question regarding the convergence of 
the self-diffusion coefficient. The reasoning given in the paper mentioned by 
Javier seems to be related to my issue too. Indeed, if diffusion has several 
regimes on different timescales, then the MSD plot should have several linear 
regions with different slopes. For a lipid membrane, "The diffusion 
coefficients measured on short time scales come from fast motions of lipids 
in a local free volume, while diffusion coefficients at long times comes 
from the Brownian motion of lipids in a viscous fluid". I am wondering 
whether the same reasoning is applicable to liquid water, where the timescales 
are essentially shorter than those for a lipid membrane. Can someone comment 
on the results at the following link?

http://lists.gromacs.org/pipermail/gmx-users/2010-June/052119.html

Thanks,
Igor
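
A minimal sketch of how the "several linear regions" idea can be checked
numerically: fit the MSD in consecutive windows and turn each local slope into a
diffusion coefficient with the Einstein relation D = slope/6. The file name
msd.xvg, the two-column layout (time in ps, MSD in nm^2) and the window count are
illustrative assumptions:

# Sketch: look for multiple diffusive regimes by fitting MSD(t) in consecutive
# windows and converting each local slope to a diffusion coefficient,
# D = slope/6 (Einstein relation, 3D). Assumes a two-column input file
# (time in ps, MSD in nm^2), e.g. an .xvg written by g_msd, with comment
# lines starting with '#' or '@'. File name and window count are illustrative.
import numpy as np

def local_diffusion(time_ps, msd_nm2, n_windows=10):
    """Return (t_mid, D) per window, with D in units of 1e-5 cm^2/s."""
    out = []
    for t_chunk, m_chunk in zip(np.array_split(time_ps, n_windows),
                                np.array_split(msd_nm2, n_windows)):
        slope, _ = np.polyfit(t_chunk, m_chunk, 1)   # nm^2/ps
        out.append((t_chunk.mean(), slope / 6.0 * 1.0e3))
    return out

time_ps, msd_nm2 = np.loadtxt("msd.xvg", comments=("#", "@"), unpack=True)
for t_mid, d in local_diffusion(time_ps, msd_nm2):
    print(f"around t = {t_mid:8.1f} ps:  D ~ {d:.3f} x 1e-5 cm^2/s")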



Javier Cerezo wrote:



This is the normal behaviour for an MSD vs. time plot. Actually, as set
by default in g_msd, neither the first part of the curve nor the last one is
taken into consideration when calculating the diffusion constant.

The explanation for the non-linear parts might be the following. On one
hand, at the beginning the behavior is not Brownian -- in some places it
is referred to as "free flight" or just as diffusion at short times
(JCP, 125, 204703 is a good ref). On the other hand, as the average is
obtained by taking different starting times along the trajectory, there
are many more points to average for the early part of the MSD, while at
long times only the few reference times corresponding to the beginning of
the trajectory contribute to the average, and these final values turn out
to be unreliable.

So, the slope over the last 2 ns is not related to any physical event. You
should take the linear part corresponding to the middle of the time
range. Anyway, check whether that region is long enough; if not, you might
extend your simulation time (maybe to 40-50 ns).
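
To make that recipe concrete, a minimal sketch of restricting the
Einstein-relation fit to the middle of the MSD curve; the file name, the
two-column layout (time in ps, MSD in nm^2) and the 10-90% window are
illustrative assumptions:

# Sketch: fit MSD = 6*D*t + c over the middle of the curve only (here 10-90%
# of the time range), skipping the short-time non-Brownian part and the
# poorly averaged tail. Input file name and column layout are assumptions.
import numpy as np

time_ps, msd_nm2 = np.loadtxt("msd.xvg", comments=("#", "@"), unpack=True)

t0, t1 = time_ps[0], time_ps[-1]
mask = (time_ps >= t0 + 0.1 * (t1 - t0)) & (time_ps <= t0 + 0.9 * (t1 - t0))

slope, _ = np.polyfit(time_ps[mask], msd_nm2[mask], 1)   # nm^2/ps
d_coef = slope / 6.0 * 1.0e3                             # -> 1e-5 cm^2/s
print(f"D = {d_coef:.4f} x 1e-5 cm^2/s (fit over the middle 10-90% window)")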

Javier


On 07/10/10 18:36, tekle...@ualberta.ca wrote:

Dear Gromacs,

I have been calculating the self-diffusion constant of my system:
surfactants in different solvents of the same volume. After
simulating for 20 ns I found the following data for the
mean square displacement trajectory.

# D[   TPA] = 0.2039 (+/- 0.0503) (1e-5 cm^2/s)
      0   0
      2   0.0105286
      4   0.0162435
      6   0.0212711
      8   0.026031
     10   0.0307584
     12   0.035134
     14   0.0393323
     16   0.0434628
     18   0.0475354
     20   0.0516609
 -
 -
 -
 -
 -
 -
   920 1.16467
   922 1.16756
   924  1.1703
   926 1.17267
   928 1.17383
   930 1.17483
   932 1.17581
   934 1.17754
   936 1.17957
   938 1.18199
   940  1.1829
   942 1.18596
   944 1.18871
   946 1.19099
   948 1.19219
   950 1.19321
   952 1.19445
   954 1.19613
   956 1.19838

 -
 -
 -
 -
 -
 -
 10576 11.7747
 10578  11.785
 10580 11.7817
 10582 11.7833
 10584 11.7847
 10586  11.784
 10588 11.7855
 10590 11.7904
 10592 11.7926
 10594 11.7943
 10596 11.8036
 10598 11.8141
 10600 11.8112

 -
 -
 -
 -
 -
 -
 19960 36.4106
 19962 36.2607
 19964 39.9243
 19966 39.7493
 19968 39.6744
 19970 39.5838
 19972 39.6723
 19974 39.6374
 19976  39.518
 19978 39.4935
 19980 39.3834
 19982 39.1136
 19984 42.3888
 19986  42.168
 19988 42.1337
 19990 41.9395
 19992 42.0065
 19994 42.0993
 19996 41.8652
 19998 41.8419
 20000 41.9419
 20002 41.6049


From my data, the graph shows a linear trend until 18 ns, but as soon as
it reaches around 19-20 ns the MSD value increases dramatically.
Since the surfactants form aggregates I was expecting the MSD curve
to go down. Is there any explanation for why the MSD suddenly
increases? And which is then the correct slope?


Thank you

Rob





--
Javier CEREZO BASTIDA
PhD student
-
Dept. of Physical Chemistry
Universidad de Murcia
30100 MURCIA (Spain)
Tel. (+34) 868887434




[gmx-users] Simulations on GPU

2010-10-08 Thread Igor Leontyev
Finally, I compiled and ran simulations with the GPU version of gromacs-4.5.1.
There were several issues:

1) The precompiled OpenMM 2.0 libraries and headers must be downloaded (which
requires registration on their web page) and installed; otherwise cmake
doesn't find some source files.

2) cmake should be called outside the original source directory, with the
path of the source directory given as an argument.

3) To run the resulting mdrun-gpu binary the CUDA dev driver must be
installed; otherwise the program does not find 'CUDA'. This step turned out to
be the most problematic for me. According to the OpenMM manual the driver must
be installed with the X Window service turned off, which can be done by the
command "init 3". In Ubuntu this command has no effect; switching the
graphical interface off/on is instead done by

"sudo service gdm stop/start"

It turned out that in Ubuntu 10.04 the CUDA driver installation script does
not work properly even with gdm turned off. This issue and its solution are
described at http://ubuntuforums.org/showthread.php?t=1467074

Thank you for your comments,



Igor


Szilárd Páll wrote:
Dear Igor,

Your output looks _very_ weird; it seems as if CMake internal
variable(s) were not initialized, and I have no clue how that could have
happened - the build generator works just fine for me. The only thing
I can think of is that maybe your CMakeCache is corrupted.

Could you please rerun cmake in a _clean_ build directory? Also, are
you able to run cmake for a CPU build (no -D options)?

--
Szilárd



On Wed, Oct 6, 2010 at 2:48 AM, Igor Leontyev  
wrote:

Szilárd wrote:


The beta versions are all outdated, could you please use the latest
source distribution (4.5.1) instead (or git from the
release-4-5-patches branch)?


The result is the same for both the 4.5.1 distribution and git from the
release-4-5-patches branch. See the output below.
=

PATH=/usr/local/opt/bin/mpi/openmpi-1.4.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
LD_LIBRARY_PATH=/usr/local/opt/bin/mpi/openmpi-1.4.2/lib:/home/leontyev/programs/bin/cuda/lib64:
CPPFLAGS=-I//usr/local/opt/bin/gromacs/fftw-3.2.2/single_sse/include
-I//usr/local/opt/bin/mpi/openmpi-1.4.2/include
LDFLAGS=-L//usr/local/opt/bin/gromacs/fftw-3.2.2/single_sse/lib
-L//usr/local/opt/bin/mpi/openmpi-1.4.2/lib
OPENMM_ROOT_DIR=/home/leontyev/programs/bin/gromacs/gromacs-4.5.1-git/openmm

cmake src -DGMX_OPENMM=ON -DGMX_THREADS=OFF
-DCMAKE_INSTALL_PREFIX=/home/leontyev/programs/bin/gromacs/gromacs-4.5.1-git
CMake Error at gmxlib/CMakeLists.txt:124 (set_target_properties):
set_target_properties called with incorrect number of arguments.


CMake Error at gmxlib/CMakeLists.txt:126 (install):
install TARGETS given no ARCHIVE DESTINATION for static library target
"gmx".


CMake Error at mdlib/CMakeLists.txt:11 (set_target_properties):
set_target_properties called with incorrect number of arguments.


CMake Error at mdlib/CMakeLists.txt:13 (install):
install TARGETS given no ARCHIVE DESTINATION for static library target
"md".


CMake Error at kernel/CMakeLists.txt:43 (set_target_properties):
set_target_properties called with incorrect number of arguments.


CMake Error at kernel/CMakeLists.txt:44 (set_target_properties):
set_target_properties called with incorrect number of arguments.


CMake Error at kernel/gmx_gpu_utils/CMakeLists.txt:18
(CUDA_INCLUDE_DIRECTORIES):
Unknown CMake command "CUDA_INCLUDE_DIRECTORIES".


CMake Warning (dev) in CMakeLists.txt:
No cmake_minimum_required command is present. A line of code such as

cmake_minimum_required(VERSION 2.8)

should be added at the top of the file. The version specified may be 
lower

if you wish to support older CMake versions for this project. For more
information run "cmake --help-policy CMP".
This warning is for project developers. Use -Wno-dev to suppress it.

-- Configuring incomplete, errors occurred!




[gmx-users] Simulations on GPU

2010-10-05 Thread Igor Leontyev

Szilárd wrote:

The beta versions are all outdated, could you please use the latest
source distribution (4.5.1) instead (or git from the
release-4-5-patches branch)?


The result is the same for both the 4.5.1 distribution and git from the 
release-4-5-patches branch. See the output below.

=

PATH=/usr/local/opt/bin/mpi/openmpi-1.4.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
LD_LIBRARY_PATH=/usr/local/opt/bin/mpi/openmpi-1.4.2/lib:/home/leontyev/programs/bin/cuda/lib64:
CPPFLAGS=-I//usr/local/opt/bin/gromacs/fftw-3.2.2/single_sse/include 
-I//usr/local/opt/bin/mpi/openmpi-1.4.2/include
LDFLAGS=-L//usr/local/opt/bin/gromacs/fftw-3.2.2/single_sse/lib 
-L//usr/local/opt/bin/mpi/openmpi-1.4.2/lib
OPENMM_ROOT_DIR=/home/leontyev/programs/bin/gromacs/gromacs-4.5.1-git/openmm

cmake 
src -DGMX_OPENMM=ON -DGMX_THREADS=OFF -DCMAKE_INSTALL_PREFIX=/home/leontyev/programs/bin/gromacs/gromacs-4.5.1-git

CMake Error at gmxlib/CMakeLists.txt:124 (set_target_properties):
 set_target_properties called with incorrect number of arguments.


CMake Error at gmxlib/CMakeLists.txt:126 (install):
 install TARGETS given no ARCHIVE DESTINATION for static library target
 "gmx".


CMake Error at mdlib/CMakeLists.txt:11 (set_target_properties):
 set_target_properties called with incorrect number of arguments.


CMake Error at mdlib/CMakeLists.txt:13 (install):
 install TARGETS given no ARCHIVE DESTINATION for static library target
 "md".


CMake Error at kernel/CMakeLists.txt:43 (set_target_properties):
 set_target_properties called with incorrect number of arguments.


CMake Error at kernel/CMakeLists.txt:44 (set_target_properties):
 set_target_properties called with incorrect number of arguments.


CMake Error at kernel/gmx_gpu_utils/CMakeLists.txt:18 
(CUDA_INCLUDE_DIRECTORIES):

 Unknown CMake command "CUDA_INCLUDE_DIRECTORIES".


CMake Warning (dev) in CMakeLists.txt:
 No cmake_minimum_required command is present.  A line of code such as

   cmake_minimum_required(VERSION 2.8)

 should be added at the top of the file.  The version specified may be 
lower

 if you wish to support older CMake versions for this project.  For more
 information run "cmake --help-policy CMP".
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Configuring incomplete, errors occurred! 





[gmx-users] No convergence in Diffusion Coefficient

2010-06-28 Thread Igor Leontyev



On 2010-06-29 05.50, Igor Leontyev wrote:

See the table below; there is no convergence of the self-diffusion
coefficient (Dself) with the trajectory length. Dself is obtained for an NPT
box of 1024 SPC/E water molecules by the Einstein relation (via the MSD). In
the Gromacs manual and Allen & Tildesley's book I didn't find issues related to
this problem. Does anyone have an idea why I cannot achieve convergence?

Description:
To figure out what trajectory length is needed for an accurate
simulation of the self-diffusion coefficient I performed the following test.
I split the continuous 10 ns trajectory into parts and calculated Dself for
each of the parts; e.g., splitting into N parts gives N values of Dself, each
obtained on a trjlenth = 10ns/N piece of trajectory. For a range of N values we
can calculate the average <Dself> and the dispersion Disper. The converged
trajectory length is found as the trjlenth value at which the dispersion is
sufficiently small and <Dself> equals the value obtained for the whole
10 ns trajectory. But it turned out that Dself does not converge (see
columns 3 and 4 in the table).

Just for comparison I carried out the same test for the dielectric
constant Eps (see columns 5 and 6 in the table), and the converged
trajectory length is about 2.5-5 ns, which correlates with the length reported
in the literature.




On 2010-06-29 David van der Spoel wrote:
You don't say how you compute the Dself.


Some details are given at the top of my initial message. The command line
is: "g_msd -trestart 10 -dt 0.5"

The obtained MSD dependence on t is a perfect straight line.


On 2010-06-29 David van der Spoel wrote:
Your Dself varies from 2.48 to 2.63,
 and if you drop the 10 ns and 4.8 ps measurements it varies from
2.48 to 2.54. Not too much I would say.


For the fine tuning of water model parameters such a precision is not
enough. If the scatter is between 2.4 and 2.6 it is not clear in which
direction one needs to modify the charges or vdW parameters to improve Dself.
Moreover, the approach of <Dself> to its infinitely averaged value is
irregular; i.e., as you mentioned, the value obtained with the best statistics
(the 10 ns trajectory) should be dropped, while averaging over shorter
trajectories seems to produce better results. I observe no convergence even
for a 100 ns trajectory, i.e. improving the statistics towards infinity does not
decrease the uncertainty of Dself. By definition that just means there is no
convergence.

Convergence is observed for the dielectric constant; it approaches the
converged value of 72 in a regular way.


On 2010-06-29 David van der Spoel wrote:
g_msd tries to do something semi-intelligent, by fitting the MSD to a
straight line from 10 to 90% of the length of the graph. You should
avoid using the very first few ps and the final bit of the graph.


That should not significantly distort the results if the trajectory length is
in the nanosecond range, should it?


Another way of getting statistics is by splitting the system into 2-N
sub-boxes and computing Dself for each of these.




 trjlenth     N   <Dself>       Disper    <Eps>   Disper
       ps         1e-5 cm^2/s

  10000.0     1    2.6305       0          72.0    0
   5000.0     2    2.4591       .0929      71.9    1.1
   2500.0     4    2.4554       .0291      71.6    4.7
   1250.0     8    2.4816       .0618      71.1    5.2
    625.0    16    2.4848       .0786      70.6    7.5
    312.5    32    2.4993       .0848      67.7    9.4
    156.2    64    2.5128       .1082      63.3   11.6
     78.1   128    2.4993       .1119      56.3   14.1
     39.0   256    2.5072       .1303      43.6   14.9
     19.5   512    2.5133       .1713      31.0   12.4
      9.7  1030    2.5449       .2128      19.6    8.0
      4.8  2083    2.6318       .2373      12.0    4.9
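
For reference, the averaging step of the splitting test described above is
straightforward to script once Dself has been computed on each 10ns/N piece
(for example by running g_msd separately on every piece). A minimal sketch,
with placeholder input values:

# Sketch of the convergence test: given Dself values computed on each of the
# N pieces of the 10 ns trajectory, print the mean <Dself> and the dispersion
# as a function of the piece length. The input dictionary is a placeholder;
# fill it with the values obtained from the per-piece g_msd runs.
import numpy as np

def block_statistics(d_per_split, total_ps=10000.0):
    """d_per_split: {N: [Dself of piece 1, ..., Dself of piece N]}."""
    for n_parts in sorted(d_per_split):
        d = np.asarray(d_per_split[n_parts], dtype=float)
        disper = d.std(ddof=1) if d.size > 1 else 0.0
        print(f"trjlenth {total_ps / n_parts:8.1f} ps   N={n_parts:5d}   "
              f"<Dself>={d.mean():.4f}   Disper={disper:.4f}")

# Placeholder numbers purely to show the call; replace with real per-piece values.
block_statistics({1: [2.6305], 2: [2.55, 2.37], 4: [2.49, 2.40, 2.52, 2.43]})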




[gmx-users] No convergence in Diffusion Coefficient

2010-06-28 Thread Igor Leontyev

See the table below; there is no convergence of the self-diffusion
coefficient (Dself) with the trajectory length. Dself is obtained for an NPT
box of 1024 SPC/E water molecules by the Einstein relation (via the MSD). In
the Gromacs manual and Allen & Tildesley's book I didn't find issues related to
this problem. Does anyone have an idea why I cannot achieve convergence?

Description:
To figure out what trajectory length is needed for an accurate
simulation of the self-diffusion coefficient I performed the following test.
I split the continuous 10 ns trajectory into parts and calculated Dself for
each of the parts; e.g., splitting into N parts gives N values of Dself, each
obtained on a trjlenth = 10ns/N piece of trajectory. For a range of N values we
can calculate the average <Dself> and the dispersion Disper. The converged
trajectory length is found as the trjlenth value at which the dispersion is
sufficiently small and <Dself> equals the value obtained for the whole
10 ns trajectory. But it turned out that Dself does not converge (see
columns 3 and 4 in the table).

Just for comparison I carried out the same test for the dielectric
constant Eps (see columns 5 and 6 in the table), and the converged
trajectory length is about 2.5-5 ns, which correlates with the length reported
in the literature.


 trjlenth     N   <Dself>       Disper    <Eps>   Disper
       ps         1e-5 cm^2/s

  10000.0     1    2.6305       0          72.0    0
   5000.0     2    2.4591       .0929      71.9    1.1
   2500.0     4    2.4554       .0291      71.6    4.7
   1250.0     8    2.4816       .0618      71.1    5.2
    625.0    16    2.4848       .0786      70.6    7.5
    312.5    32    2.4993       .0848      67.7    9.4
    156.2    64    2.5128       .1082      63.3   11.6
     78.1   128    2.4993       .1119      56.3   14.1
     39.0   256    2.5072       .1303      43.6   14.9
     19.5   512    2.5133       .1713      31.0   12.4
      9.7  1030    2.5449       .2128      19.6    8.0
      4.8  2083    2.6318       .2373      12.0    4.9




[gmx-users] Optimal Hardware for Gromacs

2010-04-28 Thread Igor Leontyev

Hi,
there is a question for hardware experts. What is the optimal hardware to
achieve better scalability in parallel MD simulations of 10K-100K atoms?

Probably, it should be a cluster of multi-CPU, multi-core units with a fast
interconnect. In that case, what is the optimal (performance/price)
configuration for the units? Or, in greater detail, the choices are:
1) Intel or AMD?
2) Server CPUs (Xeon/Opteron) or desktop CPUs (i7/Phenom)?
(The problem is that for the cheaper option (desktop CPUs) there are no
multi-CPU motherboards available on the market. Maybe somebody knows of an
appropriate motherboard model.)
3) 4-, 6- or 8-core CPUs? See the Opteron 6134, which has 8 cores at 2.3 GHz.
4) Is there a network solution faster than 1 Gbps for a reasonable price?

Or maybe the optimum is some preassembled workstation or cluster
available on the market for a reasonable price.

Summing up: what is the optimal hardware for a budget of ~$10K?
April 2010



Re: [gmx-users] How to avoid the error: "Shake block crossing node boundaries"

2009-03-25 Thread Igor Leontyev

Leontyev Igor wrote:

I just switched from version 3.3 to 4.0. It turned out that the 4.0
version does not allow running a parallel simulation of my protein in
vacuum. The protein consists of 2 chains and 4 separate co-factors (no bonds
with the chains). For a vacuum simulation 'pbc=no', which forces the use of
the particle decomposition option "-pd" of mdrun. In this case the automatic
particle distribution over the nodes leads to the error:
"Fatal error:
Shake block crossing node boundaries
constraint between atoms (11191,11193)"

In the previous version 3.3 I used manual balancing with the "-load"
option to avoid the problem. In the current version 4.0 I did not find
anything similar for particle decomposition. Is there a way to run
parallel simulations of the protein in vacuum?


I'd suggest updating to 4.0.4 for the copious bug fixes, one of which
might solve your problem. I can't think of a good reason offhand why PD
or DD should be necessary for non-PBC simulations in vacuo - try both.
If you've still got your problem, let us know.

Mark



Gromacs-4.0.4, which I use, does not allow DD in non-PBC simulations; only
the PD option is available. But PD has no flexibility to manually redistribute
particles over the nodes. As written in the manual, "With PD only whole
molecules can be assigned to a processor". Does this mean that there is no way
to start PD parallel simulations of a whole protein? In other words, does it
mean that there is no way to run a parallel simulation of a protein in vacuum?



You can do it the way all the other programs do: only use constraints on
bonds involving H, and reduce the timestep to 1 fs.


I am trying to run MD with only H-bonds constrained (constraints = hbonds),
which allows a 2 fs timestep. A 1 fs timestep would be needed if there were
vibrating (unconstrained) H-bonds. But do you suggest using constraints, or
was that a misprint?


This is subject to discussion; see e.g. the Gromacs manual. Actually, with all
bonds constrained 2 fs is already a large time step, and with only bonds
containing H constrained 1 fs is also quite large. A further discussion
can be found in the P-LINCS paper, IIRC (J. Chem. Theory Comput. 4 (2008) p.
116).



Thank you for the reference; this is the subject I would like to understand in
more detail. Regarding the "Shake block crossing node boundaries" problem,
the "-load" option implemented in gromacs-3.3.3 seems to remain the most
computationally efficient approach, thanks to the larger (at least
technically) 2 fs timestep.



Re: [gmx-users] How to avoid the error: "Shake block crossing node boundaries"

2009-03-25 Thread Igor Leontyev

Leontyev Igor wrote:

I just switched from version 3.3 to 4.0. It turned out that the 4.0
version does not allow running a parallel simulation of my protein in
vacuum. The protein consists of 2 chains and 4 separate co-factors (no bonds
with the chains). For a vacuum simulation 'pbc=no', which forces the use of
the particle decomposition option "-pd" of mdrun. In this case the automatic
particle distribution over the nodes leads to the error:
"Fatal error:
Shake block crossing node boundaries
constraint between atoms (11191,11193)"

In the previous version 3.3 I used manual balancing with the "-load"
option to avoid the problem. In the current version 4.0 I did not find
anything similar for particle decomposition. Is there a way to run
parallel simulations of the protein in vacuum?


I'd suggest updating to 4.0.4 for the copious bug fixes, one of which
might solve your problem. I can't think of a good reason offhand why PD
or DD should be necessary for non-PBC simulations in vacuo - try both.
If you've still got your problem, let us know.

Mark



Gromacs-4.0.4, which I use, does not allow DD in non-PBC simulations; only
the PD option is available. But PD has no flexibility to manually redistribute
particles over the nodes. As written in the manual, "With PD only whole
molecules can be assigned to a processor". Does this mean that there is no way
to start PD parallel simulations of a whole protein? In other words, does it
mean that there is no way to run a parallel simulation of a protein in vacuum?


You can do it the way all the other programs do: only use constraints on 
bonds involving H, and reduce the timestep to 1 fs.


I am trying to run MD with only H-bonds constrained (constraints = hbonds),
which allows a 2 fs timestep. A 1 fs timestep would be needed if there were
vibrating (unconstrained) H-bonds. But do you suggest using constraints, or
was that a misprint?




Re: Re: [gmx-users] How to avoid the error: "Shake block crossing node boundaries"

2009-03-25 Thread Igor Leontyev

Leontyev Igor wrote:

I just switched from version 3.3 to 4.0. It turned out that the 4.0
version does not allow running a parallel simulation of my protein in
vacuum. The protein consists of 2 chains and 4 separate co-factors (no bonds
with the chains). For a vacuum simulation 'pbc=no', which forces the use of
the particle decomposition option "-pd" of mdrun. In this case the automatic
particle distribution over the nodes leads to the error:
"Fatal error:
Shake block crossing node boundaries
constraint between atoms (11191,11193)"

In the previous version 3.3 I used manual balancing with the "-load"
option to avoid the problem. In the current version 4.0 I did not find
anything similar for particle decomposition. Is there a way to run
parallel simulations of the protein in vacuum?


I'd suggest updating to 4.0.4 for the copious bug fixes, one of which
might solve your problem. I can't think of a good reason offhand why PD
or DD should be necessary for non-PBC simulations in vacuo - try both.
If you've still got your problem, let us know.

Mark



Gromacs-4.0.4, which I use, does not allow DD in non-PBC simulations; only
the PD option is available. But PD has no flexibility to manually redistribute
particles over the nodes. As written in the manual, "With PD only whole
molecules can be assigned to a processor". Does this mean that there is no way
to start PD parallel simulations of a whole protein? In other words, does it
mean that there is no way to run a parallel simulation of a protein in vacuum?
