[gmx-users] different results when using different number cpus

2007-12-05 Thread Dechang Li
 Dear all,

  I used Gromacs 3.3.1 to run a simulation of two proteins in water (TIP3P).
I ran two similar simulations, one on 2 CPUs and the other on 16 CPUs.
The two simulations used the same .gro, .top, and .mdp files, yet the results
were not the same: in the 2-CPU simulation the two proteins moved closer and
closer together, but in the 16-CPU simulation they drifted apart.
   Is it normal to get different results when using different numbers of CPUs?
The size of my simulation box is 9*7*7.







Best regards,

2007-12-5


=   
Dechang Li, PhD Candidate
Department of Engineering Mechanics
Tsinghua University
Beijing 100084
PR China 

Tel:   +86-10-62773779(O) 
Email: [EMAIL PROTECTED]
= 

Re: [gmx-users] Gromacs on IBM cluster

2007-12-05 Thread Marius Retegan
Dear GMX users,
I would like to give some feedback on my experience compiling Gromacs
3.3.1 on the IBM cluster with the IBM compilers. Since I was unable to
test a build with gcc (gcc is not installed), I decided to recompile
Gromacs with the IBM compilers using the compilation script below, in
which I have disabled the optimizations for xlc_r and xlf_r.
=
export CC='xlc_r'
export F77='xlf_r'
export FFLAGS='-O0 -q64'
export CFLAGS='-O0 -q64'
export AR='ar -X 64'
export LDFLAGS='-L/p5cecic/home/mretegan/software/fftw2_64/lib'
export CPPFLAGS='-I/p5cecic/home/mretegan/software/fftw2_64/include'
#export FFLAGS='-O2 -qarch=pwr5 -qtune=pwr5 -qmaxmem=-1 -qstrict'
#export CFLAGS='-O2 -qarch=pwr5 -qtune=pwr5 -qmaxmem=-1 -qstrict'


./configure --enable-double --with-qmmm-cpmd  --with-fft=fftw2
--prefix=/p5cecic/home/mretegan/software/gromacs_64_fftw2

With the new binaries I no longer have problems running grompp. I have
also tested the binaries with the Gromacs tests found on the wiki, and
all tests passed. So I think it is a good idea to disable all
optimization when compiling on an IBM machine and, if everything runs
smoothly, to re-enable the compiler optimizations and test again.

With respect
Marius Retegan
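
(Not part of Marius's post.) A minimal sketch of that second step, with the
commented-out optimization flags re-enabled and the test set re-run; the
gmxtest.pl invocation and the new install prefix are assumptions:
=
export CC='xlc_r'
export F77='xlf_r'
# re-enable the optimizations that were disabled above (keeping -q64)
export FFLAGS='-O2 -qarch=pwr5 -qtune=pwr5 -qmaxmem=-1 -qstrict -q64'
export CFLAGS='-O2 -qarch=pwr5 -qtune=pwr5 -qmaxmem=-1 -qstrict -q64'
export AR='ar -X 64'
export LDFLAGS='-L/p5cecic/home/mretegan/software/fftw2_64/lib'
export CPPFLAGS='-I/p5cecic/home/mretegan/software/fftw2_64/include'

./configure --enable-double --with-qmmm-cpmd --with-fft=fftw2 \
    --prefix=/p5cecic/home/mretegan/software/gromacs_64_fftw2_opt
make && make install

# re-run the test set from the wiki and compare with the unoptimized build
# (gmxtest.pl ships with the test-set archive; options may differ by version)
cd /path/to/gmxtests && ./gmxtest.pl all
=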

On Oct 31, 2007 4:36 PM, Marius Retegan [EMAIL PROTECTED] wrote:
 I have approx. 81000 atoms. The system worked on an Itanium 2 cluster.
 On the IBM machine I've used the IBM compilers.
 I'm going to give it a try with gcc.
 Thank you
 Marius Retegan


 On 10/31/07, David van der Spoel [EMAIL PROTECTED] wrote:
  Marius Retegan wrote:
   32 Gb on each node of the cluster.
   Maybe I should add that I've also run CPMD and cp2k jobs on the
   cluster but I've never had memory problems.
   Marius Retegan
  It could still be too little, since this is the additional memory. What
  kind of system are you using, how many atoms? Does a small water box
  work? There have been problems with compilation on IBM machines as well,
  in particular when using IBM compilers. Recompiling with gcc resolves that.
 
  
   On 10/30/07, David van der Spoel [EMAIL PROTECTED] wrote:
   Marius Retegan wrote:
   Dear Gromacs users
  
    I'm having some trouble running grompp on an IBM cluster P575 with AIX
   5.3 installed.
   This is the error message that I'm getting:
   
   processing coordinates...
   double-checking input for internal consistency...
   renumbering atomtypes...
   ---
   Program grompp_d, VERSION 3.3.1
   Source code file: smalloc.c, line: 113
  
   Fatal error:
    calloc for nbsnew (nelem=677329, elsize=148, file grompp.c, line 723)
    ---

    "I'm An Oakman" (Pulp Fiction)
    : Not enough space

    This says it wants one Gb of RAM. How much do you have?
    You can run grompp on a machine with a lot of memory and the mdrun on
    your cluster.
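
A minimal sketch of that split workflow (host and file names are hypothetical;
note that in Gromacs 3.3 grompp's -np must match the number of MPI processes
mdrun will use later):

# On a machine with plenty of memory: prepare the run input for 16 processes.
grompp_d -np 16 -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Copy only the .tpr to the cluster and run mdrun there (the launch
# command depends on your MPI / LoadLeveler setup).
scp topol.tpr ibmcluster:/scratch/run/
ssh ibmcluster 'cd /scratch/run && mpirun -np 16 mdrun_d -np 16 -s topol.tpr'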
   
    While digging through the archive I managed to find this post,
    http://www.gromacs.org/pipermail/gmx-users/2006-February/020066.html,
    which basically says that there is not enough memory for the job. My
    job is launched with LoadLeveler, where I can define @resources =
    ConsumableMemory(value), but if this is not defined in the
    LoadLeveler script, I would expect the program to take as much
    memory as it requires.
    So my question is: why does Gromacs, a program renowned for its low
    memory requirements, give this error message?
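
As an aside, a minimal sketch of how that memory request looks in a
LoadLeveler job script (all values are hypothetical):

#!/bin/sh
# @ job_type  = serial
# @ resources = ConsumableCpus(1) ConsumableMemory(2 gb)
# @ queue
grompp_d -f md.mdp -c conf.gro -p topol.top -o topol.tpr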
  
   Thank you
   Marius Retegan
  
   --
   David.
   
   David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
   Dept. of Cell and Molecular Biology, Uppsala University.
   Husargatan 3, Box 596,  75124 Uppsala, Sweden
   phone:  +46 18 471 4205  fax: +46 18 511 755
   [EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se
   

Re: [gmx-users] different results when using different number cpus

2007-12-05 Thread Berk Hess

Hi,

With Gromacs and (nearly) all other MD packages you will never be able
to get binary-identical results when running on different numbers of CPUs.
Since MD is chaotic, the results can end up being very different.

Berk.



From: Carsten Kutzner [EMAIL PROTECTED]
Reply-To: Discussion list for GROMACS users gmx-users@gromacs.org
To: Discussion list for GROMACS users gmx-users@gromacs.org
Subject: Re: [gmx-users] different results when using different number cpus
Date: Wed, 05 Dec 2007 14:10:06 +0100

Hi Dechang,

it is normal that results are not binary identical if you compare the
same MD system on different numbers of processors. If you use PME, then
you will probably get slightly different charge grids for 2 and for 16
processors, since the charge grid has to be divisible by the number of
CPUs in the x and y directions. Even if you manually set the grid
dimensions to be the same for both cases, your simulations could diverge
when using version 3.x of FFTW. This version has a built-in timer and
chooses the fastest of several algorithms, which may be a different one
even in two runs on the same number of processors, depending on the
timing results. With different algorithms you get slight differences in
the last digit of the computed numbers (rounding / truncation / order of
evaluation), which then grow during the simulation and lead to diverging
trajectories. Of course the averaged properties of the simulation are
unaffected by those differences and should be the same if averaged long
enough.
You could use FFTW 2.x and manually set the FFT grid size to the same
value for the 2- and 16-CPU cases - but I am not sure whether this is
enough to get binary-identical results.
You could also repeat your simulations several times with (slightly)
different starting conditions (for example different starting velocities)
to get a better picture of the average behaviour of your system. If in
all 16-processor cases you see the proteins drift apart and in all
2-processor cases you see them come together, I would guess something
is wrong.

Hope that helps,
  Carsten


Dechang Li wrote:
  Dear all,

 I used Gromacs 3.3.1 to run a simulation of two proteins in water (TIP3P).
 I ran two similar simulations, one on 2 CPUs and the other on 16 CPUs.
 The two simulations used the same .gro, .top, and .mdp files, yet the results
 were not the same: in the 2-CPU simulation the two proteins moved closer and
 closer together, but in the 16-CPU simulation they drifted apart.
 Is it normal to get different results when using different numbers of CPUs?
 The size of my simulation box is 9*7*7.







 Best regards,

 2007-12-5
 

 =
 Dechang Li, PhD Candidate
 Department of Engineering Mechanics
 Tsinghua University
 Beijing 100084
 PR China

 Tel:   +86-10-62773779(O)
 Email: [EMAIL PROTECTED]
 =


 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne


_
Play online games with your friends with Messenger 
http://www.join.msn.com/messenger/overview




[gmx-users] different results when using different number cpus

2007-12-05 Thread chris . neale

Message: 4
Date: Wed, 05 Dec 2007 14:19:28 +0100
From: Berk Hess [EMAIL PROTECTED]
Subject: Re: [gmx-users] different results when using different number
cpus
To: gmx-users@gromacs.org
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; format=flowed

Hi,

With Gromacs and (nearly) all other MD packages you will never be able
to get binary-identical results when running on different numbers of CPUs.
Since MD is chaotic, the results can end up being very different.

Berk.


I can confirm that I get the same thing when running a repeat of a
simulation segment twice on 4 CPUs with gromacs-3.3.1 and fftw-3.1.2.
Further, while trying to debug a colleague's parameters, which give a
LINCS error after long periods of simulation time on a single
processor, I find that a proper restart from just prior to the crash
does not lead to an exact repeat of the error (although an error does
eventually occur). This was unfortunate, since my plan was to save the
.trr every 100 ps and then do a restart in which I saved the .xtc every
integration step to get a good look at the problem. Carsten's comments
about fftw3.x are useful, since I have been using fftw-3.1.2. Note that
I did not test whether a run on 1 CPU would generate an identical
trajectory, only that the LINCS error is not exactly reproduced. I did
the restart using the .trr/.edr files and set
gen_vel=no; unconstrained_start=yes; for the restart.
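
A minimal sketch of that restart procedure (file names are hypothetical; it
assumes Gromacs 3.3-style continuation via grompp's -t and -e options):

# Restart .mdp: keep the velocities from the checkpoint frame, do not
# re-apply constraints to the starting structure, and write .xtc every step.
cat > restart.mdp << 'EOF'
; ... same settings as the original run, plus:
gen_vel              = no
unconstrained_start  = yes
nstxtcout            = 1
EOF

# Rebuild the .tpr from the last full-precision frame and the energy file,
# then continue the run.
grompp -f restart.mdp -c conf.gro -t traj.trr -e ener.edr -p topol.top -o restart.tpr
mdrun -s restart.tpr -o restart.trr -x restart.xtc -e restart.edr -g restart.log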


I agree that statistical properties will be properly reproduced, but I
can imagine situations in which one would want a proper restart to be
binary identical: e.g. an interest in the dynamics of quick, rare
processes, where one might run for a long time while saving .xtc and
.trr infrequently and then restart at the proper place while saving
.xtc very frequently in order to capture the dynamics of an identified
transition.






From: Carsten Kutzner [EMAIL PROTECTED]
Reply-To: Discussion list for GROMACS users gmx-users@gromacs.org
To: Discussion list for GROMACS users gmx-users@gromacs.org
Subject: Re: [gmx-users] different results when using different number cpus
Date: Wed, 05 Dec 2007 14:10:06 +0100

Hi Dechang,

it is normal that results are not binary identical if you compare the
same MD system on different numbers of processors. If you use PME, then
you will probably get slightly different charge grids for 2 and for 16
processors, since the charge grid has to be divisible by the number of
CPUs in the x and y directions. Even if you manually set the grid
dimensions to be the same for both cases, your simulations could diverge
when using version 3.x of FFTW. This version has a built-in timer and
chooses the fastest of several algorithms, which may be a different one
even in two runs on the same number of processors, depending on the
timing results. With different algorithms you get slight differences in
the last digit of the computed numbers (rounding / truncation / order of
evaluation), which then grow during the simulation and lead to diverging
trajectories. Of course the averaged properties of the simulation are
unaffected by those differences and should be the same if averaged long
enough.
You could use FFTW 2.x and manually set the FFT grid size to the same
value for the 2- and 16-CPU cases - but I am not sure whether this is
enough to get binary-identical results.
You could also repeat your simulations several times with (slightly)
different starting conditions (for example different starting velocities)
to get a better picture of the average behaviour of your system. If in
all 16-processor cases you see the proteins drift apart and in all
2-processor cases you see them come together, I would guess something
is wrong.

Hope that helps,
  Carsten


Dechang Li wrote:
  Dear all,

 I used Gromacs 3.3.1 to run a simulation of two proteins in water (TIP3P).
 I ran two similar simulations, one on 2 CPUs and the other on 16 CPUs.
 The two simulations used the same .gro, .top, and .mdp files, yet the results
 were not the same: in the 2-CPU simulation the two proteins moved closer and
 closer together, but in the 16-CPU simulation they drifted apart.
 Is it normal to get different results when using different numbers of CPUs?
 The size of my simulation box is 9*7*7.







 Best regards,

 2007-12-5
 

 =
 Dechang Li, PhD Candidate
 Department of Engineering Mechanics
 Tsinghua University
 Beijing 100084
 PR China

 Tel:   +86-10-62773779(O)
 Email: [EMAIL PROTECTED]
 =






[gmx-users] Re: fullerene topology

2007-12-05 Thread Adam Fraser
I forgot to ask if there is a forcefield for gromacs that will handle
fullerenes.  Is there?

-Adam

On Dec 5, 2007 1:17 PM, Adam Fraser [EMAIL PROTECTED] wrote:

 I'm trying to either build or find topology files for
 buckminsterfullerene (C60).

 Does anyone know where I could find such files?

 If not, does anyone know of literature that would help me build C60?

 I already have a pdb of the structure... I just need accurate partial
 charges to build the topology file with.  Even then, I'm not confident in
 how well this will model fullerene because I just read this:

 C60 has a tendency of avoiding having double bonds within the
 pentagonal rings which makes electron delocalisation poor, and
 results in the fact that C60 is not superaromatic. C60 behaves
 very much like an electron deficient alkene...
 source: http://www.ch.ic.ac.uk/local/projects/unwin/Fullerenes.html

 I greatly appreciate any help offered,
 thank you,
 Adam

[gmx-users] Ambconv running

2007-12-05 Thread Shozeb Haider

Hi,

I am trying to use AMBCONV. However, it gives me a segmentation fault
when I try to run it. I have seen that other users have posted a similar
problem with the program on the mailing list. It seems to me that
AMBCONV only accepts the older Amber formats. I have even generated that
using the "set default oldprmtopformat on" command. However, I get the same
segmentation fault. One user (David Evans) has mentioned that:


"You can generate these from new format files using a utility in the amber
package, but they will have an extra (7th) digit
on the fourth line which will cause ambconv to crash."

Does anyone know what he means by the extra (7th) digit on the fourth line?
Which file is he referring to, the prmtop or the rst?


Any answers will be greatly appreciated.

Best wishes

Shozeb Haider 
The London School of Pharmacy






[gmx-users] fullerene topology

2007-12-05 Thread Adam Fraser
I'm trying to either build or find topology files for buckminsterfullerene
(C60).

Does anyone know where I could find such files?

If not, does anyone know of literature that would help me build C60?

I already have a pdb of the structure... I just need accurate partial
charges to build the topology file with.  Even then, I'm not confident in
how well this will model fullerene because I just read this:

C60 has a tendency of avoiding having double bonds within the
pentagonal rings which makes electron delocalisation poor, and
results in the fact that C60 is not superaromatic. C60 behaves
very much like an electron deficient alkene...
source: http://www.ch.ic.ac.uk/local/projects/unwin/Fullerenes.html

I greatly appreciate any help offered,
thank you,
Adam

RE: [gmx-users] fullerene topology

2007-12-05 Thread Dallas B. Warren
Have you looked at the information / forcefields for carbon nanotubes?
 

Catch ya,

Dr. Dallas Warren
Lecturer
Department of Pharmaceutical Biology and Pharmacology
Victorian College of Pharmacy, Monash University
381 Royal Parade, Parkville VIC 3010
[EMAIL PROTECTED]
+61 3 9903 9524
-
When the only tool you own is a hammer, every problem begins to resemble
a nail. 

 




From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adam Fraser
Sent: Thursday, 6 December 2007 5:17 AM
To: Discussion list for GROMACS users
Subject: [gmx-users] fullerene topology


I'm trying to either build or find topology files for
buckminsterfullerene (C60).

Does anyone know where I could find such files?

If not, does anyone know of literature that would help me build
C60?

I already have a pdb of the structure... I just need accurate
partial charges to build the topology file with.  Even then, I'm not
confident in how well this will model fullerene because I just read
this:


C60 has a tendency of avoiding having double bonds within the 
pentagonal rings which makes electron delocalisation poor, and  
results in the fact that C60 is not superaromatic. C60 behaves

very much like an electron deficient alkene...
source:
http://www.ch.ic.ac.uk/local/projects/unwin/Fullerenes.html


I greatly appreciate any help offered,
thank you,
Adam 


Re: [gmx-users] different results when using different number cpus

2007-12-05 Thread Carsten Kutzner
Hi Dechang,

it is normal that results are not binary identical if you compare the
same MD system on different numbers of processors. If you use PME, then
you will probably get slightly different charge grids for 2 and for 16
processors, since the charge grid has to be divisible by the number of
CPUs in the x and y directions. Even if you manually set the grid
dimensions to be the same for both cases, your simulations could diverge
when using version 3.x of FFTW. This version has a built-in timer and
chooses the fastest of several algorithms, which may be a different one
even in two runs on the same number of processors, depending on the
timing results. With different algorithms you get slight differences in
the last digit of the computed numbers (rounding / truncation / order of
evaluation), which then grow during the simulation and lead to diverging
trajectories. Of course the averaged properties of the simulation are
unaffected by those differences and should be the same if averaged long
enough.
You could use FFTW 2.x and manually set the FFT grid size to the same
value for the 2- and 16-CPU cases - but I am not sure whether this is
enough to get binary-identical results.
You could also repeat your simulations several times with (slightly)
different starting conditions (for example different starting velocities)
to get a better picture of the average behaviour of your system. If in
all 16-processor cases you see the proteins drift apart and in all
2-processor cases you see them come together, I would guess something
is wrong.

Hope that helps,
  Carsten
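
(Not from Carsten's post.) A minimal sketch of those two knobs as Gromacs 3.3
.mdp settings; the grid and seed values are hypothetical and must be chosen to
fit your own box and CPU counts:

# Pin the PME grid to identical dimensions for the 2- and 16-CPU runs and
# vary the velocity seed between independent repeats.
cat >> md.mdp << 'EOF'
fourier_nx  = 96
fourier_ny  = 80
fourier_nz  = 80
gen_vel     = yes
gen_seed    = 1993   ; change this value for each independent repeat
EOF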


Dechang Li wrote:
  Dear all,
   
   I used Gromacs 3.3.1 to run a simulation of two proteins in water (TIP3P).
 I ran two similar simulations, one on 2 CPUs and the other on 16 CPUs.
 The two simulations used the same .gro, .top, and .mdp files, yet the results
 were not the same: in the 2-CPU simulation the two proteins moved closer and
 closer together, but in the 16-CPU simulation they drifted apart.
 Is it normal to get different results when using different numbers of CPUs?
 The size of my simulation box is 9*7*7.
 
 
 
 
 
 
 
 Best regards,
 
 2007-12-5
 
 
 = 
 Dechang Li, PhD Candidate
 Department of Engineering Mechanics
 Tsinghua University
 Beijing 100084
 PR China 
 
 Tel:   +86-10-62773779(O) 
 Email: [EMAIL PROTECTED]
 = 
 
 
 
 

-- 
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne


[gmx-users] Scaling Coulomb interactions with lambda, on a pair-wise basis

2007-12-05 Thread Matt Wyczalkowski

Hi,

I am looking to scale non-bonded interactions between two atoms with the 
lambda parameter, while keeping other interactions unchanged.  I am not 
sure how to do this for Coulomb interactions.


Scaling the Lennard-Jones interactions between a specific atom pair 
seems straightforward, by setting VA, WA, VB, WB for that pair in the 
[PAIRS] directive.  However, in order to modify Coulomb interactions 
between an atom pair, it seems I need to modify qA and qB for each atom 
-- this then affects the interactions between this atom and all other 
atoms as well, something I need to avoid.
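
A minimal sketch of the [ pairs ] route described above, written as a topology
fragment; the atom indices and the four A/B-state Lennard-Jones values (the
VA WA VB WB the poster refers to) are hypothetical:

cat >> topol.top << 'EOF'
[ pairs ]
;  ai   aj  funct      VA        WA       VB      WB
   17   42      1   2.6e-03   2.6e-06    0.0     0.0   ; LJ pair switched off in state B
EOF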


Is there a way to scale Coulomb interactions for a specific pair of 
atoms only?


Thanks in advance --

Matt

--
Matt Wyczalkowski
Doctoral Candidate, Biomedical Engineering
Pappu Lab: http://lima.wustl.edu
Washington University in St. Louis
[EMAIL PROTECTED]




Re: [gmx-users] Ambconv running

2007-12-05 Thread Yang Ye

You should use AMBER's

  new2oldparm  new  old

to get the old-format file.

I remember that David Mobley has another amber-to-gromacs conversion script
on his website (http://www.alchemistry.org/), but it is not accessible right now.


Regards,
Yang Ye

On 12/6/2007 3:52 AM, Shozeb Haider wrote:

Hi,

I am trying to use AMBCONV. However, it gives me a segmentation fault
when I try to run it. I have seen that other users have posted a
similar problem with the program on the mailing list. It seems to me
that AMBCONV only accepts the older Amber formats. I have even generated that
using the "set default oldprmtopformat on" command. However, I get the same
segmentation fault. One user (David Evans) has mentioned that:


"You can generate these from new format files using a utility in the
amber package, but they will have an extra (7th) digit
on the fourth line which will cause ambconv to crash."

Does anyone know what he means by the extra (7th) digit on the fourth
line? Which file is he referring to, the prmtop or the rst?

Any answers will be greatly appreciated.

Best wishes

Shozeb Haider The London School of Pharmacy









Re: [gmx-users] different results when using different number cpus

2007-12-05 Thread Yang Ye
1. how long is the simulation?
2. did you start from equilibration (with gen_vel=yes) or production md?
3. ...

On 12/5/2007 8:28 PM, Dechang Li wrote:
  Dear all,
   
   I used Gromacs 3.3.1 to run a simulation of two proteins in water (TIP3P).
 I ran two similar simulations, one on 2 CPUs and the other on 16 CPUs.
 The two simulations used the same .gro, .top, and .mdp files, yet the results
 were not the same: in the 2-CPU simulation the two proteins moved closer and
 closer together, but in the 16-CPU simulation they drifted apart.
 Is it normal to get different results when using different numbers of CPUs?
 The size of my simulation box is 9*7*7.







 Best regards,

 2007-12-5
 

 = 
 Dechang Li, PhD Candidate
 Department of Engineering Mechanics
 Tsinghua University
 Beijing 100084
 PR China 

 Tel:   +86-10-62773779(O) 
 Email: [EMAIL PROTECTED]
 = 
   
 




Re: [gmx-users] Replica Exchange MD using Gromacs

2007-12-05 Thread Monika Sharma
Thank you all very much for your consideration and helpful advice.
Regards,
Monika

On Wed, 2007-12-05 at 18:15 +1100, Mark Abraham wrote:
 Xavier Periole wrote:
  
  Dear Monika,
  
  the setup of a REMD simulation is actually quite straightforward.
  In the following I describe steps that would lead you to have a
  REMD simulation running on a given system. The success of the
  simulation will depend entirely on the problem you are addressing
  and the criterion by which you judge it. Although REMD simulations
  help increase sampling, they do not provide the ultimate answer.
  This should be kept in mind.
 
 I've added a section on replica-exchange to 
 http://wiki.gromacs.org/index.php/Steps_to_Perform_a_Simulation which 
 people may wish to review.
 
 Mark
 





[gmx-users] Re: gmx-users Digest, Vol 44, Issue 14

2007-12-05 Thread Li Zhenhai
Hi, Mark

Do you mean I should use gcc to compile fftw, or both fftw and gromacs?

In fact I want to install a parallel gromacs, so perhaps I can compile
fftw with gcc and compile gromacs with xlc and mpcc?

Thanks for your response.
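
(Not from the original posts.) A minimal sketch of that split build; the paths
are hypothetical and the compiler and configure flags follow the IBM build
script quoted earlier in this digest:

# Build FFTW 2.x with gcc into its own prefix (precision options depend on
# whether Gromacs will be built in single or double precision).
cd fftw-2.1.5
CC=gcc ./configure --prefix=$HOME/software/fftw2_gcc
make && make install

# Build parallel Gromacs with the IBM compilers, pointing it at that FFTW.
cd ../gromacs-3.3.1
export CC='mpcc_r'        # IBM MPI compiler wrapper; xlc_r for a serial build
export F77='xlf_r'
export CPPFLAGS="-I$HOME/software/fftw2_gcc/include"
export LDFLAGS="-L$HOME/software/fftw2_gcc/lib"
./configure --enable-mpi --with-fft=fftw2 --prefix=$HOME/software/gromacs-3.3.1-mpi
make && make install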

Li Zhenhai
Department of Engineering Mechanics
Tsinghua University
Beijing 100084
China
Tel: 86-10-62773779
E-mail: [EMAIL PROTECTED]

2007-12-06

Mark wrote:

I'd suggest using gcc, or a gcc-compatibility mode of your compiler, if it
exists.

Mark








Re: [gmx-users] different results when using different number cpus

2007-12-05 Thread David van der Spoel
Yang Ye wrote:
 1. how long is the simulation?
 2. did you start from equilibration (with gen_vel=yes) or production md?
 3. ...
 
 On 12/5/2007 8:28 PM, Dechang Li wrote:
  Dear all,
  
   I used Gromacs 3.3.1 to run a simulation of two proteins in water (TIP3P).
 I ran two similar simulations, one on 2 CPUs and the other on 16 CPUs.
 The two simulations used the same .gro, .top, and .mdp files, yet the results
 were not the same: in the 2-CPU simulation the two proteins moved closer and
 closer together, but in the 16-CPU simulation they drifted apart.
 Is it normal to get different results when using different numbers of CPUs?
 The size of my simulation box is 9*7*7.

Answer is yes.
http://wiki.gromacs.org/index.php/Reproducibility









 Best regards,

 2007-12-5
 

 =
 Dechang Li, PhD Candidate
 Department of Engineering Mechanics
 Tsinghua University
 Beijing 100084
 PR China 

 Tel:   +86-10-62773779(O) 
 Email: [EMAIL PROTECTED]
 = 
   
 

 


-- 
David van der Spoel, Ph.D.
Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205. Fax: +4618511755.
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se