Re: [gmx-users] old patch for slow networks

2008-04-01 Thread Carsten Kutzner

Jones de Andrade wrote:

Hi all.

I'm writing to ask a simple and direct question: the available patch of 
gmx3 for slow networks, which includes the ordered all-to-all procedure, 
seems to be intended for version 3.1.

Hi Jones,

this is actually a patch for gromacs 3.3.1.


Is it still necessary for slow networks when using gmx3.3?


Whether it is necessary depends on your setup. In the patched version 
the all-to-all communication happens in an orderly fashion such that TCP 
packet drops are avoided. So the patch will speed up your parallel 
performance if you have packet drops in the first place. An indication 
of such drops is when you make speedup measurements on 1, 2, 3, ... 
nodes (each node may have multiple CPUs) and your performance curve is 
not smooth but drops abruptly when you go to more than two nodes.


With the patch comes a test program for all-to-all problems.
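
For illustration, the idea behind the ordered all-to-all is roughly the 
following. This is only a minimal sketch of the scheme in C/MPI, not 
code from the patch; it assumes a power-of-two number of processes, and 
the function and buffer names are made up:

  /* Ordered (pairwise) all-to-all: in each stage every process
   * exchanges exactly one message with exactly one partner, so the
   * network is never flooded with simultaneous messages. */
  #include <mpi.h>

  void ordered_alltoall(double *sendbuf, double *recvbuf, int chunk,
                        MPI_Comm comm)
  {
      int rank, nprocs, stage, partner;

      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &nprocs);

      for (stage = 0; stage < nprocs; stage++)
      {
          /* rank ^ stage pairs all processes uniquely in each stage
           * (this is why nprocs must be a power of two) */
          partner = rank ^ stage;
          MPI_Sendrecv(sendbuf + partner*chunk, chunk, MPI_DOUBLE,
                       partner, 0,
                       recvbuf + partner*chunk, chunk, MPI_DOUBLE,
                       partner, 0, comm, MPI_STATUS_IGNORE);
      }
  }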

Hope that helps,
  Carsten



Thanks a lot in advance...

Sincerely yours,

Jones






--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne


RE: [gmx-users] Blue Gene/P

2008-04-01 Thread Berk Hess
Hi,

Gromacs 4 has BlueGene-optimized inner loops, which give a factor of 2
performance improvement.
On a BG/L it runs at 20-25% of the speed of x86 cores, but it scales
quite well.

Berk.


 Date: Mon, 31 Mar 2008 17:29:57 -0700
 From: [EMAIL PROTECTED]
 To: gmx-users@gromacs.org
 Subject: Re: [gmx-users] Blue Gene/P
 
 Thanks. I've seen the BG/L argument, but BG/P processors are faster. I 
 have no idea how much faster, though, and am looking for actual numbers, 
 or, if those are not available, a guess based on actual BG/L numbers and 
 some argument about scaling to a BG/P.
  
 Cheers,
 Peter
 
 Mark Abraham wrote:
  Peter Tieleman wrote:
  Hi,
 
  Has anyone run benchmarks on a BlueGene/P ?
 
  The word from IBM in December was that, since GROMACS 3.x would lack 
  both assembly inner loops and threading on BlueGene/L, it wasn't 
  worthwhile running GROMACS on one. I figure that goes for BlueGene/P too.
 
  Mark

[gmx-users] how to simulate 2 structures together

2008-04-01 Thread Radhika Jaswal
Hi,
I want to simulate and minimize two protein structures together in water.
Can you point me to an online tutorial or send me any information?

Radhika


Re: [gmx-users] Blue Gene/P

2008-04-01 Thread Hannes Loeffler
OK, here are just a few numbers.

System: 52,000 atoms, coarse-grained, 500,000 steps

BlueGene/P: 16 (procs) x 4 (threads) = 64 tasks (or whatever they are
called)
   8x4    7.25 h
  16x4   11    h
  32x4   12    h (queue only allows <= 12 h)

BlueGene/L:
  16x2   10    h
  64x2   12    h

Note: the procs x threads figures above are the numbers I asked for. It
may be that the scheduler distributes the tasks in a different manner; I
don't know.

That's all I have for the moment.


On Mon, 2008-03-31 at 17:29 -0700, Peter Tieleman wrote:
 Thanks. I've seen the BG/L argument, but BG/P processors are faster. I 
 have no idea how much faster, though, and am looking for actual numbers, 
 or, if those are not available, a guess based on actual BG/L numbers and 
 some argument about scaling to a BG/P.
  
 Cheers,
 Peter
 
 Mark Abraham wrote:
  Peter Tieleman wrote:
  Hi,
 
  Has anyone run benchmarks on a BlueGene/P ?
 
  The word from IBM in December was that, since GROMACS 3.x would lack 
  both assembly inner loops and threading on BlueGene/L, it wasn't 
  worthwhile running GROMACS on one. I figure that goes for BlueGene/P too.
 
  Mark


Re: [gmx-users] how to simulate 2 structures together

2008-04-01 Thread Justin A. Lemkul
Quoting Radhika Jaswal [EMAIL PROTECTED]:

 Hi,
 I want to simulate and minimize two protein structures together in water.
 Can you point me to an online tutorial or send me any information?

I would suggest starting with basic tutorial material on simple
protein-in-water systems. That way, you can gain an appreciation for how to
set up a generic system within Gromacs, as well as an understanding of the
topology format. Once you feel comfortable using the basic tools to set up a
basic system, move on to a two-component system. There are relevant
discussions in the list archive on how to deal with peptides with two
chains, multiple proteins, and so forth. Do a thorough search of the
archive, and post specific questions when they arise.
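
For concreteness, one possible version of that basic workflow with
Gromacs 3.3-era tools looks like the sketch below (file names are
placeholders, not from this thread). For two proteins, running pdb2gmx
on a single PDB file that contains both structures yields a combined
topology:

  # generate a topology and Gromacs coordinates from the structure(s)
  pdb2gmx -f two_proteins.pdb -o conf.gro -p topol.top
  # put at least 1.0 nm between the solute and the box edge
  editconf -f conf.gro -o boxed.gro -d 1.0
  # solvate with SPC water
  genbox -cp boxed.gro -cs spc216.gro -p topol.top -o solvated.gro
  # preprocess and run an energy minimization (em.mdp is yours to write)
  grompp -f em.mdp -c solvated.gro -p topol.top -o em.tpr
  mdrun -s em.tpr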

-Justin


[gmx-users] Problem with lipid simulations

2008-04-01 Thread csreddy
Dear All,
I have been trying to simulate a small peptide in a DMPC lipid bilayer.
The same setup works fine on one cluster, but I get the following error
message on another:
grompp_mpi -f test.mdp -c lipid_protein.gro -p topology.top -o test.tpr


checking input for internal consistency...
calling /lib/cpp...
processing topology...
WARNING 1 [file lipid.itp, line 14]:
  Overriding atomtype LO
WARNING 2 [file lipid.itp, line 15]:
  Overriding atomtype LOM
WARNING 3 [file lipid.itp, line 16]:
  Overriding atomtype LNL
WARNING 4 [file lipid.itp, line 17]:
  Overriding atomtype LC
WARNING 5 [file lipid.itp, line 18]:
  Overriding atomtype LH1
WARNING 6 [file lipid.itp, line 19]:
  Overriding atomtype LH2
WARNING 7 [file lipid.itp, line 20]:
  Overriding atomtype LP
WARNING 8 [file lipid.itp, line 21]:
  Overriding atomtype LOS
WARNING 9 [file lipid.itp, line 22]:
  Overriding atomtype LP2
WARNING 10 [file lipid.itp, line 23]:
  Overriding atomtype LP3
Cleaning up temporary file grompp16bicP
---
Program grompp_mpi, VERSION 3.3.1
Source code file: fatal.c, line: 416

Fatal error:
Too many warnings, grompp_mpi terminated
---

Please let me know how to get rid of this problem.

Thanks in advance

Regards
Chandu





[gmx-users] Problem with lipid simulations

2008-04-01 Thread chris.neale
Where are lipid.itp and your other force field files? Check that they  
are identical between your two clusters. This appears to be a problem  
with the files and not with the installation. If you can't figure it  
out, you will need to post your topology at the very least, and also  
post the output of 'grep LO ffoplsaa*.itp' (with xxx in place of oplsaa  
for whatever ff you are using). That said, the error messages are pretty  
explicit. I assume that you checked whether you are including atomtype  
LO, etc., more than once? You can probably figure out exactly where the  
problem is occurring by yourself, and you are more likely to get good  
advice if you do that.
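
As a starting point, something like the following shows every include
statement and every (re)definition of atomtype LO (hypothetical file
names; substitute your own topology and force-field files):

  # is any file included twice, directly or indirectly?
  grep -n "#include" topology.top *.itp
  # where is atomtype LO defined?
  grep -n "^LO" ffoplsaa*.itp lipid.itp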


Chris.



Re: [gmx-users] Problem with lipid simulations

2008-04-01 Thread patrick fuchs

Hi Chandu,
this looks like you have two #include "lipid.itp" statements, so check 
all your *.top and *.itp files.
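
In a topology the symptom typically looks like the hypothetical
topol.top below; the fix is to delete one of the two includes (the
second one may hide inside another included file):

  ; hypothetical topol.top
  #include "ffG43a1.itp"
  #include "lipid.itp"
  ; ... molecule definitions ...
  #include "lipid.itp"   ; <- duplicate: every atomtype in lipid.itp is
                         ;    redefined, and grompp emits one
                         ;    "Overriding atomtype" warning per type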

Ciao,

Patrick




--
_
 new E-mail address: [EMAIL PROTECTED] 
Patrick FUCHS
Equipe de Bioinformatique Genomique et Moleculaire
INSERM U726, Universite Paris 7
Case Courrier 7113
2, place Jussieu, 75251 Paris Cedex 05, FRANCE
Tel : +33 (0)1-44-27-77-16 - Fax : +33 (0)1-43-26-38-30
Web Site: http://www.dsimb.inserm.fr/~fuchs
