[gmx-users] 1-4 interactions

2009-09-25 Thread Vitaly V. Chaban
Hi,

Is it possible in GROMACS to reduce the 1-4 interaction energy relative
to the intermolecular interactions between the same centers?
In other words, I want the INTRAmolecular A-B interaction to be smaller,
by some factor, than the INTERmolecular A-B interactions in the box.

Thanks,
Vitaly
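For reference, the knob usually used for this in a GROMACS topology is the scaling of the
intramolecular 1-4 pairs listed under [ pairs ] via fudgeLJ/fudgeQQ in [ defaults ]; the
normal intermolecular nonbonded interactions are not affected by these factors. A minimal
sketch with placeholder values (not a recommendation for any particular force field):

; sketch of the relevant topology pieces (numbers are placeholders only)
[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          yes        0.5      0.5

[ pairs ]
; ai  aj  funct    (explicit per-pair parameters may also be given here)
   1   4   1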


Re: [gmx-users] Trajectory files in vmd

2009-09-25 Thread Aditi Borkar
Dear Rui,

Thanks for the explanation. I did not know that VMD only calculates the
secondary structure for the first frame. Is there an option to
calculate the secondary structure (say, in the New Cartoon representation)
for all the frames in the trajectory?
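In case a snippet helps, here is a minimal Tcl sketch of the usual workaround (the same idea
as the "sscache" script distributed on the VMD site): re-run the secondary structure
assignment whenever the displayed frame changes. It assumes molecule id 0 and is pasted into
the VMD Tcl console; this is a VMD-side fix, not a GROMACS one.

proc update_ss {name molid op} {
    # force STRIDE to re-assign the secondary structure for the current frame
    mol ssrecalc $molid
}
trace variable vmd_frame(0) w update_ss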



On Thu, Sep 24, 2009 at 5:39 PM, J. Rui Rodrigues
 wrote:
> Dear Aditi,
>
> What do you mean with "evolution of the protein structure"? Are you referring 
> to
> *secondary structure*? By default, VMD only calculates it for the first 
> trajectory frame.
>
> --Rui
>
>
> On Thu, 24 Sep 2009 10:51:04 +0530, Aditi Borkar wrote
>> Dear All,
>>
>> When I am loading the GROMACS trajectory in VMD, I cannot see the
>> evolution of the protein structure with time.
>>
>> When I am creating pdb files from the trajectory at different time
>> steps using the dump option, I do see changes in the protein structure
>> with time. My final structure after the MD simulation is also a lot
>> different from my starting structure. However, when loading the trajectory, I do
>> not see a gradual/drastic transition from the starting to the final
>> conformation in VMD.
>>
>> Please suggest where I am going wrong.
>>
>> Thank you
>> --
>> Aditi Borkar,
>> Tata Institute of Fundamental Research,
>> Mumbai.
>
>



-- 
Aditi Borkar,
Tata Institute of Fundamental Research,
Mumbai.


Re: [gmx-users] PEO and OPLS-AA FF in gmx

2009-09-25 Thread Justin A. Lemkul



FLOR MARTINI wrote:
Perhaps you can do your own parametrization. Go to the PRODRG page and
you will see!

The page is:
http://davapc1.bioch.dundee.ac.uk/prodrg/
There you will get your .gro and .top, and you don't need to use
pdb2gmx. If you want to use pdb2gmx anyway, you need to edit the database,
but I think the manual is not very clear on this point.
I think you should look at the .atp files; they are in
--/gromacs/share/top/ffGxx.atp, where xx is the force field that you
would use.

Also look at your .pdb and compare against the atoms that are not defined.
Cheers.


PRODRG topologies are for use with Gromos force fields, so they will not work for 
OPLS (many users have tried, and they are very disappointed).  I do not know 
what this business about editing .atp files is for - you shouldn't modify them 
unless you are modifying the entire force field.  Do you mean the .rtp files? 
You can modify those to define new residues so that pdb2gmx will 
recognize them, but the .atp files generally remain unchanged.
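For what it's worth, a minimal sketch of what such an .rtp entry looks like (the residue
name, OPLS atom types and charges below are placeholders for a simple methyl fragment, not
a ready-made PEO residue):

[ XXX ]                ; residue name as it appears in the coordinate file
 [ atoms ]
;  name   type       charge   chargegroup
   C1     opls_135   -0.180   1
   H1     opls_140    0.060   1
   H2     opls_140    0.060   1
   H3     opls_140    0.060   1
 [ bonds ]
   C1   H1
   C1   H2
   C1   H3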


It is also important to note that PRODRG topologies, taken at face value, are often 
unsatisfactory; they require manual modification and, as always, validation.


-Justin


Flor


Dra.M.Florencia Martini
Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
Cátedra de Química General e Inorgánica
Facultad de Farmacia y Bioquímica
Universidad de Buenos Aires
Junín 956 2º (1113)
TE: 54 011 4964-8249 int 24

--- On Fri, 25 Sep 2009, Justin A. Lemkul wrote:


De: Justin A. Lemkul 
Subject: Re: [gmx-users] PEO and OPLS-AA FF in gmx
To: "Discussion list for GROMACS users" 
Date: Friday, 25 September 2009, 12:32 pm



lammps lammps wrote:
 > Hi,
 >  I want to simulate a PEO chain in water using OPLs-AA FF in Gromacs.
 >  I have created a .PDB file using Material Studio, but I seems
that the pdb2gmx can not dealt with it because of the error" Residue
'xx' not found in residue topology database"
 > The question is how can I obtain the .top and .gro file in the
framework of OPLS-AA FF. Any suggestion is appreciated.
 > 


You have to derive parameters for yourself:

http://www.gromacs.org/Documentation/How-tos/Parametrization

There are a few scripts in the User Contributions section that make
efforts to do this for you, but you still have to demonstrate that
the parameters are valid.

-Justin

 > Thanks in advance.
 >
 > -- wende
 >
 >
 >

 >

-- 

Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin








--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4

2009-09-25 Thread FLOR MARTINI
Hi, yeah, we get clearly better performance with 2 nodes (8 CPUs)! If
we try the same on 1 node (4 CPUs) there is a day of difference. Yes, we
have a gigabit ethernet network, so what you say about
congestion problems makes sense. We were thinking about bypassing the switch by
using a separate ethernet link between each pair of nodes. Do you think that
could improve our performance?
Thank you in advance.
Flor

Dra.M.Florencia Martini

Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas

Cátedra de Química General e Inorgánica

Facultad de Farmacia y Bioquímica

Universidad de Buenos Aires

Junín 956 2º (1113)

TE: 54 011 4964-8249 int 24

--- On Fri, 25 Sep 2009, Carsten Kutzner wrote:

From: Carsten Kutzner 
Subject: Re: [gmx-users] trying to get better performance in a Rocks cluster 
running GROMACS 4.0.4
To: flormart...@yahoo.com.ar, "Discussion list for GROMACS users" 

Date: Friday, 25 September 2009, 9:19 am

Hi,
if you run without PME, there will be no all-to-all communication anyway, so in 
this sense the paper is (mostly) irrelevant here. Since you mention this paper I 
assume that your network is gigabit ethernet. If you run on recent processors 
then I would say that for a 1 atom system on 8 cores the ethernet is clearly 
the limiting factor, even if it runs optimal (the chance for congestion problems 
on two nodes only is also very limited - these are likely to appear on 3 or more 
nodes). 
What is your performance on a single node (4 CPUs)? You could compare that to 
the performance of 4 CPUs on 2 nodes to determine the network impact.
Carsten

On Sep 24, 2009, at 5:08 PM, FLOR MARTINI wrote:
Thanks for your question.
We are running a lipid bilayer of 128 DPPC and 3655 water molecules, and the 
nsteps in the .mdp gives a total of 10 ns. I don't really think that our system is 
a small one...

Dra.M.Florencia Martini
Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
Cátedra de Química General e Inorgánica
Facultad de Farmacia y Bioquímica
Universidad de Buenos Aires
Junín 956 2º (1113)
TE: 54 011 4964-8249 int 24

--- El jue 24-sep-09, Berk Hess  escribió:

De: Berk Hess 
Asunto: RE: [gmx-users] trying to get better performance in a Rocks cluster 
running GROMACS 4.0.4
Para: "Discussion list for GROMACS users" 
Fecha: jueves, 24 de septiembre de 2009, 11:22 am

Hi,

You don't mention what kind of benchmark system tou are using for these tests.
A too small system could explain these results.

Berk


Date: Thu, 24 Sep 2009 07:01:04 -0700
From: flormart...@yahoo.com.ar
To: gmx-us...@gromacs.org
Subject: [gmx-users] trying to get better performance in a Rocks cluster
running GROMACS 4.0.4

hi,

   We are about to start running GROMACS 4.0.4 with OpenMPI, in 8
nodes, quad core Rocks cluster. We made some tests, without PME and
found two notable things:

* We are getting the best speedup (6) with 2 nodes ( == 8 cores ). I read
the "Speeding Up Parallel GROMACS in High Latency Networks" paper, and
thought that the culprit was the switch, but ifconfig shows no retransmits
(neither does ethtool -s or netstat -s). Does version 4 include the
alltoall patch? Is the paper irrelevant with GROMACS 4?

* When running with the whole cluster ( 8 nodes, 32 cores ), top reports
about 50% system CPU usage on every node. Is that normal? Can it be attributed to
the use of the network? The sys usage goes up a bit when we configured the
Intel NICs with Interrupt Coalescence off, so I'm tempted to think it is
just OpenMPI hammering the TCP stack, polling for packets.

Thanks in advance,

Dra.M.Florencia Martini
Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
Cátedra de Química General e Inorgánica
Facultad de Farmacia y Bioquímica
Universidad de Buenos Aires
Junín 956 2º (1113)
TE: 54 011 4964-8249 int 24



 
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics

Re: [gmx-users] PEO and OPLS-AA FF in gmx

2009-09-25 Thread FLOR MARTINI
Perhaps you can do your own parametrization. Go to the PRODRG page and you will 
see!
The page is: 
http://davapc1.bioch.dundee.ac.uk/prodrg/
There you will get your .gro and .top, and you don't need to use pdb2gmx. If 
you want to use pdb2gmx anyway, you need to edit the database, but I think the 
manual is not very clear on this point.
I think you should look at the .atp files; they are in 
--/gromacs/share/top/ffGxx.atp, where xx is the force field that you would use.
Also look at your .pdb and compare against the atoms that are not defined.
Cheers.
Flor


Dra.M.Florencia Martini

Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas

Cátedra de Química General e Inorgánica

Facultad de Farmacia y Bioquímica

Universidad de Buenos Aires

Junín 956 2º (1113)

TE: 54 011 4964-8249 int 24

--- On Fri, 25 Sep 2009, Justin A. Lemkul wrote:

From: Justin A. Lemkul 
Subject: Re: [gmx-users] PEO and OPLS-AA FF in gmx
To: "Discussion list for GROMACS users" 
Date: Friday, 25 September 2009, 12:32 pm



lammps lammps wrote:
> Hi,
>  I want to simulate a PEO chain in water using the OPLS-AA FF in Gromacs.
>  I have created a .PDB file using Materials Studio, but it seems that 
>pdb2gmx cannot deal with it because of the error "Residue 'xx' not found in 
>residue topology database".
> The question is how I can obtain the .top and .gro files in the framework of 
> the OPLS-AA FF. Any suggestion is appreciated.
>  

You have to derive parameters for yourself:

http://www.gromacs.org/Documentation/How-tos/Parametrization

There are a few scripts in the User Contributions section that make efforts to 
do this for you, but you still have to demonstrate that the parameters are 
valid.

-Justin

> Thanks in advance.
> 
> -- wende
> 
> 
> 
> 

-- 

Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin






Re: [gmx-users] nvt.gro

2009-09-25 Thread Justin A. Lemkul



ram bio wrote:

Dear Justin,

As suggested, I increased the force constant in the Z dimension from 
1000 to 1, and did the NVT equilibration, but the gap still 
existed. Then I gave the output of the NVT equilibration, that is nvt.gro, as 
input for the NPT annealing simulation (suggested as with position 
restraints, 1000), and simulated, and here also I had a gap between the layers 
when npt.gro was viewed in VMD.


I have a query: can I use the NVT-equilibrated system as input for 
the NPT simulated annealing, or should I use the initial ionized and 
minimized system for the NPT annealing simulation, as the gap is still 
persisting...




Use the energy-minimized system as the input into annealing.  I have no idea why 
this separation would be happening in this system, unless the box has been 
prepared improperly.  I chose the KALP-DPPC system because it is very robust in 
everything we've tried to subject it to.


-Justin


Thanks

Ram

On Thu, Sep 24, 2009 at 4:16 PM, Justin A. Lemkul > wrote:




ram bio wrote:

Dear Justin,

As suggested in your tutorial, I applied the lipid position
restraints while running the NVT equilibration, but after the
job finished, when I looked at the nvt.gro file in VMD, there
is still a gap between the lipid bilayer leaflets, although this time the gap
is not as large as it was in the earlier run (as discussed
in the previous email).

As I was already running the NPT equilibration (which I obtained
after the earlier NVT job, which ended with a large gap between
the layers), I just wanted to look at it, and here there is no gap
between the layers, i.e. in npt.gro.

Please suggest what I should do to reduce the gap after NVT
equilibration even after applying the lipid restraints, and whether
it is OK for my NPT equilibration that there are no gaps between the
layers after this NPT equilibration.


The gap arises because the lipids (when free to move) are attracted
to the water above and below the bilayer.  If the protein is
restrained, it doesn't move. The box size in NVT is fixed, so the
system is trying to fill it.  It could be that your box was
inappropriately assigned (too large), but maybe not.

I am surprised that, even when using position restraints, the lipids
still separated at all.  Did you use the lipid_posre.itp file that I
provide on the tutorial site?  It has always worked well for me in
such cases.  You could also try increasing the force constant in the
z-dimension.

The other option is to do NPT simulated annealing, as I also suggest
in the troubleshooting page.  Using NPT allows the box to deform in
response to the system, so you will probably get less weird
behavior.  I have found that both NVT with PR and simulated
annealing can get the job done.

-Justin

Thanks

Ram


On Tue, Sep 22, 2009 at 8:25 PM, ram bio <rmbio...@gmail.com> wrote:

   Dear Justin,

   Thanks for the suggestion, will try to apply position
restraints on
   lipid as mentioned in the advanced trouble shooting section.

   Ram


   On Tue, Sep 22, 2009 at 8:08 PM, Justin A. Lemkul <jalem...@vt.edu> wrote:



   ram bio wrote:

   Dear Gromacs Users,

   I am following the Justin tutorial on KALP-15 in a lipid
   bilayer. I have a query regarding the nvt.gro, that is, after
   the NVT equilibration phase. The mdrun was proper, without
   any warnings or errors, but when I visualized the nvt.gro
   in VMD, I found that the peptide is intact in between the
   bilayer, but the two layers got separated, or else it is
   as if the peptide is bridging the two halves of the lipid
   bilayer, with a gap in between the layers; I also found a few
   water molecules to the sides of the peptide, or in the gap
   mentioned between the layers.

   Please let me know whether the simulation is going on normally or
   there is a defect or something wrong going on; as I think the NVT
   equilibration was proper, I continued with the
   next equilibration, that is NPT, for 1 ns.


   You shouldn't continue blindly if you get weird results.
 Please
   see the "Advanced Troubleshooting" page (part of the
tutorial!),
   because I specifically address the issue of a bilayer
separating:

 
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/membrane_pro

Re: [gmx-users] PEO and OPLS-AA FF in gmx

2009-09-25 Thread Justin A. Lemkul



lammps lammps wrote:

Hi,
 
I want to simulate a PEO chain in water using the OPLS-AA FF in Gromacs.
 
I have created a .PDB file using Materials Studio, but it seems that 
pdb2gmx cannot deal with it because of the error "Residue 'xx' not 
found in residue topology database".
The question is how I can obtain the .top and .gro files in the framework 
of the OPLS-AA FF. Any suggestion is appreciated.
 


You have to derive parameters for yourself:

http://www.gromacs.org/Documentation/How-tos/Parametrization

There are a few scripts in the User Contributions section that make efforts to 
do this for you, but you still have to demonstrate that the parameters are valid.


-Justin


Thanks in advance.

--
wende






--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Problem with domain decomposition

2009-09-25 Thread Stephane Abel

Hi gromacs users and experts

I am doing some simulations on 8 CPUs of a solvated peptide (8 AA) in a 
truncated octahedron box (5150) with SPC water, with GMX 4.0.5.  To 
simulate for a long time I am cutting my simulation into 24 h 
chunks (25 ns/day) using checkpoints.  During my last simulation part, I 
noticed that the simulation was 2.6 times slower (sim_last) than the 
preceding run (sim_prev). I noticed this message at the end of the log 
file of sim_last:


 Log of sim_last -

   D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S

av. #atoms communicated per step for force:  2 x 35969.0
av. #atoms communicated per step for LINCS:  2 x 58.1

Average load imbalance: 4.6 %
Part of the total run time spent waiting due to load imbalance: 1.5 %


R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

Computing: Nodes Number G-CyclesSeconds %
---
Domain decomp. 8102554019176.963 6392.1 0.9
Comm. coord.   8512769812300.804 4100.1 0.6
Neighbor search81025541   183144.97561046.2 8.9
Force  85127698   263336.03287775.612.8
Wait + Comm. F 8512769823995.139 7998.1 1.2
PME mesh   85127698   265259.76788416.812.9
Write traj.8   5184   154247.41751414.0 7.5
Update 8512769813123.384 4374.3 0.6
Constraints8512769816635.925 5545.1 0.8
Comm. energies 85127698  1084187.361   361383.052.8
Rest   8   17552.589 5850.7 0.9
---
Total  8 2052960.356   684296.0   100.0
---

NOTE: 53 % of the run time was spent communicating energies,
 you might want to use the -nosum option of mdrun


   Parallel run - timing based on wallclock.

  NODE (s)   Real (s)  (%)
  Time:  85537.000  85537.000100.0
  23h45:37
  (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:144.887 10.126 10.359  2.317
Finished mdrun on node 0 Fri Sep 25 14:19:07 2009

- Log sim_prev

   D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S

av. #atoms communicated per step for force:  2 x 35971.8
av. #atoms communicated per step for LINCS:  2 x 59.7

Average load imbalance: 4.6 %
Part of the total run time spent waiting due to load imbalance: 1.5 %


R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

Computing: Nodes Number G-CyclesSeconds %
---
Domain decomp. 825047859.92915952.7 2.4
Comm. coord.   8   125038434.20712810.9 1.9
Neighbor search8251   445996.846   148659.922.4
Force  8   1250   637253.269   212409.632.1
Wait + Comm. F 8   125058421.25419473.0 2.9
PME mesh   8   1250   637267.326   212414.232.1
Write traj.8  12501   80.674   26.9 0.0
Update 8   125032011.69710670.2 1.6
Constraints8   125040061.17513353.2 2.0
Comm. energies 8   1250 8407.505 2802.4 0.4
Rest   8   41890.86513963.1 2.1
---
Total  8 1987684.746   662536.0   100.0
---

   Parallel run - timing based on wallclock.

  NODE (s)   Real (s)  (%)
  Time:  82817.000  82817.000100.0
  23h00:17
  (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:364.799 25.495 26.082  0.920

My simulation is running on a supercomputer whose characteristics you can see 
here: http://www.cines.fr/spip.php?article520. I don't 
know where the problem is (hardware? software?). Any advice will be 
appreciated.


Stephane
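For what it's worth, the note in the sim_last log above already points at one thing to try;
a sketch of restarting the next 24 h chunk with it (GROMACS 4.0.x syntax; the binary and
file names are placeholders for whatever your submission script actually uses):

mpirun -np 8 mdrun_mpi -s topol.tpr -cpi state.cpt -nosum -deffnm sim_next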








[gmx-users] PEO and OPLS-AA FF in gmx

2009-09-25 Thread lammps lammps
Hi,

I want to simulate a PEO chain in water using the OPLS-AA FF in Gromacs.

I have created a .PDB file using Materials Studio, but it seems that
pdb2gmx cannot deal with it because of the error "Residue 'xx' not found
in residue topology database".
The question is how I can obtain the .top and .gro files in the framework of
the OPLS-AA FF. Any suggestion is appreciated.

Thanks in advance.

-- 
wende

Re: [gmx-users] mdrun -x for REMD does not work

2009-09-25 Thread David van der Spoel

jlenz wrote:

Hi,

I am doing REMD simulations on a variety of peptides at 5 different 
temperatures.
Since the output .trr files are very big before compression, I tried 
to give mdrun the -x option in order to write out 
compressed files.
Unfortunately that does not work. No .xtc file is written out; 
instead, the five different traj.trr files appear.


Is there a way to write out compressed files directly? What did I do 
wrong?

set
nstxtcout = value
in your mdp file.
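A sketch of the relevant .mdp lines (the numbers are only illustrative):

nstxout       = 0        ; no (or only occasional) uncompressed .trr frames
nstvout       = 0
nstfout       = 0
nstxtcout     = 1000     ; write a compressed .xtc frame every 1000 steps
xtc-precision = 1000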


Thanks for an answer,
Joern



--
David van der Spoel, Ph.D., Professor of Biology
Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205. Fax: +4618511755.
sp...@xray.bmc.uu.sesp...@gromacs.org   http://folding.bmc.uu.se


[gmx-users] mdrun -x for REMD does not work

2009-09-25 Thread jlenz

Hi,

I am doing REMD simulations on a variety of peptides at 5 different  
temperatures.
Since the output .trr files are very big before compression, I tried  
to give mdrun the -x option in order to write out  
compressed files.
Unfortunately that does not work. No .xtc file is written out;  
instead, the five different traj.trr files appear.


Is there a way to write out compressed files directly? What did I  
do wrong?

Thanks for an answer,
Joern


R: RE: R: RE: [gmx-users] Tabulated potential - Problem

2009-09-25 Thread albita...@virgilio.it
I have indeed large forces and the simulation stops after step 5...



Original message

From: g...@hotmail.com

Date: 25 Sep 2009 12.27 PM

To: "Discussion list for GROMACS users"

Subject: RE: R: RE: [gmx-users] Tabulated potential - Problem





-->

Your system could be unstable.
You can check for large forces with mdrun -pforce
I don't know what a reasonable range of forces is, you can try 5000.
If you have instabilities, you should get large forces printed
before you get the fatal error.

Berk


Date: Fri, 25 Sep 2009 14:10:08 +0200
From: albita...@virgilio.it
To: gmx-users@gromacs.org
Subject: R: RE: [gmx-users] Tabulated potential - Problem

Unfortunately, my box sizes are not close to 23. I also carried out 
calculations switching off PBC, or on much smaller systems. 
I always received the same error. 
I also tried a geometry optimization. It finished without warnings or errors; 
however, the potential energy changed only very slightly during the run, 
and its values were too large.

Thanks

AM



Messaggio originale

Da: g...@hotmail.com

Data: 24-set-2009 11.29 AM

A: "Discussion list for GROMACS users"

Ogg: RE: [gmx-users] Tabulated potential - Problem






This is not nonsense, it is exactly what is says.
The distance between two atoms is more than 10 times as large as your table 
length.

Maybe you are somehow having issues with periodic boundary conditions.
Is you box size close to 23?

Berk


Date: Thu, 24 Sep 2009 12:32:36 +0200
From: albita...@virgilio.it
To: gmx-users@gromacs.org
Subject: [gmx-users] Tabulated potential - Problem

Hi,

I'm trying to carry out a CG simulation and I'm using
a tabulated potential for a bond stretching term.
My MD simulations stops immediately with the error message:

---
Program mdrun_mpi, VERSION 4.0.5
Source code file: bondfree.c, line: 1772

Fatal error:
A tabulated bond interaction table number 0 is out of the table range: r 
23.678833, between table indices 12069 and 12070, table length 1020
---

This should mean that some distances are beyond table length (as reported in 
the manual) but this is
nonsense considering my input files and topology.

Do you have any suggestion?
Thanks!

AM

  



  





RE: R: RE: [gmx-users] Tabulated potential - Problem

2009-09-25 Thread Berk Hess

Your system could be unstable.
You can check for large forces with mdrun -pforce
I don't know what a reasonable range of forces is, you can try 5000.
If you have instabilities, you should get large forces printed
before you get the fatal error.

Berk
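For example (a sketch only; the file name is a placeholder and 5000 is just the suggested
starting threshold, in kJ mol^-1 nm^-1):

mdrun_mpi -s cg_run.tpr -deffnm cg_run -pforce 5000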


Date: Fri, 25 Sep 2009 14:10:08 +0200
From: albita...@virgilio.it
To: gmx-users@gromacs.org
Subject: R: RE: [gmx-users] Tabulated potential - Problem

Unfortunately, my box sizes are not close to 23. I also carried out 
calculations switching off PBC or on much smaller systems. 
I received always the same error. 
I tried also a geometry optimization. It finished without warnings nor errors: 
anyway the potential energy changed only very slightly during the simulation 
with too large values.

Thanks

AM



Messaggio originale

Da: g...@hotmail.com

Data: 24-set-2009 11.29 AM

A: "Discussion list for GROMACS users"

Ogg: RE: [gmx-users] Tabulated potential - Problem






This is not nonsense, it is exactly what is says.
The distance between two atoms is more than 10 times as large as your table 
length.

Maybe you are somehow having issues with periodic boundary conditions.
Is you box size close to 23?

Berk


Date: Thu, 24 Sep 2009 12:32:36 +0200
From: albita...@virgilio.it
To: gmx-users@gromacs.org
Subject: [gmx-users] Tabulated potential - Problem

Hi,

I'm trying to carry out a CG simulation and I'm using
a tabulated potential for a bond stretching term.
My MD simulations stops immediately with the error message:

---
Program mdrun_mpi, VERSION 4.0.5
Source code file: bondfree.c, line: 1772

Fatal error:
A tabulated bond interaction table number 0 is out of the table range: r 
23.678833, between table indices 12069 and 12070, table length 1020
---

This should mean that some distances are beyond table length (as reported in 
the manual) but this is
nonsense considering my input files and topology.

Do you have any suggestion?
Thanks!

AM

  



  

Re: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4

2009-09-25 Thread Carsten Kutzner

Hi,

if you run without PME, there will be no all-to-all communication  
anyway,
so in this sense the paper is (mostly) irrelevant here. Since you  
mention this
paper I assume that your network is gigabit ethernet. If you run on  
recent

processors then I would say that for a 1 atom system on 8 cores the
ethernet is clearly the limiting factor, even if it runs optimal (the  
chance for

congestion problems on two nodes only is also very limited - these are
likely to appear on 3 or more nodes).

What is your performance on a single node (4 CPUs)? You could compare
that to the performance of 4 CPUs on 2 nodes to determine the network
impact.

Carsten
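A sketch of that comparison with OpenMPI (the host names, file names, and the
-host/-npernode options are assumptions about your setup):

# 4 cores on one node
mpirun -np 4 -host node01 mdrun_mpi -s dppc.tpr -deffnm bench_1x4
# the same 4 processes spread over two nodes, 2 per node
mpirun -np 4 -npernode 2 -host node01,node02 mdrun_mpi -s dppc.tpr -deffnm bench_2x2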


On Sep 24, 2009, at 5:08 PM, FLOR MARTINI wrote:


Thanks for your question.
We are running a lipid bilayer of 128 DPPC and 3655 water molecules,  
and the nsteps in the .mdp gives a total of 10 ns. I don't really  
think that our system is a small one...


Dra.M.Florencia Martini
Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
Cátedra de Química General e Inorgánica
Facultad de Farmacia y Bioquímica
Universidad de Buenos Aires
Junín 956 2º (1113)
TE: 54 011 4964-8249 int 24

--- El jue 24-sep-09, Berk Hess  escribió:

De: Berk Hess 
Asunto: RE: [gmx-users] trying to get better performance in a Rocks  
cluster running GROMACS 4.0.4

Para: "Discussion list for GROMACS users" 
Fecha: jueves, 24 de septiembre de 2009, 11:22 am

Hi,

You don't mention what kind of benchmark system tou are using for  
these tests.

A too small system could explain these results.

Berk


Date: Thu, 24 Sep 2009 07:01:04 -0700
From: flormart...@yahoo.com.ar
To: gmx-users@gromacs.org
Subject: [gmx-users] trying to get better performance in a Rocks  
cluster	running GROMACS 4.0.4


hi,

   We are about to start running GROMACS 4.0.4 with OpenMPI, in 8
nodes, quad core Rocks cluster. We made some tests, without PME and
found two notable things:

* We are getting the best speedup (6) with 2 nodes ( == 8 cores ). I  
read

the "Speeding Up Parallel GROMACS in High Latency networks" paper, and
thought that the culprit was the switch, but ifconfig shows no  
retransmits

(neither does ethtool -s or netstat -s). Does version 4 includes the
alltoall patch? Is the paper irrelevant with GROMACS 4?

* When running with the whole cluster ( 8 nodes, 32 cores ), top  
reports
in any node a 50% system CPU usage. Is that normal? Can it be  
accounted to
the use of the network? The sys usage gets a bit up when we  
configured the
Intel NICs with Interrupt Coalescense Off, so I'm tempted to think  
it is

just OpenMPI hammering the tcp stack, polling for packages.

Thanks in advance,

Dra.M.Florencia Martini
Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
Cátedra de Química General e Inorgánica
Facultad de Farmacia y Bioquímica
Universidad de Buenos Aires
Junín 956 2º (1113)
TE: 54 011 4964-8249 int 24








--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne





R: RE: [gmx-users] Tabulated potential - Problem

2009-09-25 Thread albita...@virgilio.it
Unfortunately, my box sizes are not close to 23. I also carried out 
calculations switching off PBC, or on much smaller systems. 
I always received the same error. 
I also tried a geometry optimization. It finished without warnings or errors; 
however, the potential energy changed only very slightly during the run, 
and its values were too large.

Thanks

AM



Original message

From: g...@hotmail.com

Date: 24 Sep 2009 11.29 AM

To: "Discussion list for GROMACS users"

Subject: RE: [gmx-users] Tabulated potential - Problem





-->

This is not nonsense, it is exactly what it says.
The distance between two atoms is more than 10 times as large as your table 
length.

Maybe you are somehow having issues with periodic boundary conditions.
Is your box size close to 23?

Berk


Date: Thu, 24 Sep 2009 12:32:36 +0200
From: albita...@virgilio.it
To: gmx-users@gromacs.org
Subject: [gmx-users] Tabulated potential - Problem

Hi,

I'm trying to carry out a CG simulation and I'm using
a tabulated potential for a bond stretching term.
My MD simulations stops immediately with the error message:

---
Program mdrun_mpi, VERSION 4.0.5
Source code file: bondfree.c, line: 1772

Fatal error:
A tabulated bond interaction table number 0 is out of the table range: r 
23.678833, between table indices 12069 and 12070, table length 1020
---

This should mean that some distances are beyond table length (as reported in 
the manual) but this is
nonsense considering my input files and topology.

Do you have any suggestion?
Thanks!

AM

  





[gmx-users] Static build

2009-09-25 Thread Jack Shultz
Hello,

I am trying to build statically with the small source changes I made yesterday.

I'm building on 32 bit linux with single precision. I have libSM

[r...@vps gromacs-4.0.5]# ls /usr/lib/libSM*
/usr/lib/libSM.so  /usr/lib/libSM.so.6  /usr/lib/libSM.so.6.0.0

I get this error message

cc -O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused
-funroll-all-loops -std=gnu99 -static -o grompp grompp.o  -L/usr/lib
./.libs/libgmxpreprocess.a ../mdlib/.libs/libmd.a
/root/gromacs-4.0.5/src/gmxlib/.libs/libgmx.a ../gmxlib/.libs/libgmx.a
-lnsl -lfftw3f -lm -lSM -lICE -lX11
/usr/bin/ld: cannot find -lSM
collect2: ld returned 1 exit status
make[3]: *** [grompp] Error 1
make[3]: Leaving directory `/root/gromacs-4.0.5/src/kernel'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/gromacs-4.0.5/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/gromacs-4.0.5/src'
make: *** [all-recursive] Error 1

I used this flag
./configure CPPFLAGS="-L/usr/lib" --enable-all-static
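One thing that may be worth trying (a sketch, assuming the stock autoconf build of 4.0.5):
a fully static link needs static versions of every library, including libSM/libICE/libX11,
which many distributions only ship as shared objects. Since X11 is only used by the ngmx
viewer you can drop it, and -L library paths belong in LDFLAGS rather than CPPFLAGS:

make distclean
./configure --enable-all-static --without-x LDFLAGS="-L/usr/lib"
make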



-- 
Jack

http://drugdiscoveryathome.com
http://hydrogenathome.org


RE: [gmx-users] martini simulation problem with unsaturated lipid

2009-09-25 Thread Berk Hess

I think the text below "Fatal error:" explains pretty clearly why this 
interaction is missing.

Berk
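If, after checking that the DUPC geometry really is intact, you want mdrun to use a larger
bonded communication distance, the error message itself names the option; a sketch (the
file names and the 1.4 nm value are placeholders):

mpirun -np 8 mdrun_mpi -s dupc_md.tpr -deffnm dupc_md -rdd 1.4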

Date: Fri, 25 Sep 2009 13:04:28 +0200
From: mariagorano...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] martini simulation problem with unsaturated lipid

Hello

I get this error while running Martini:

A list of missing interactions:
G96Angle of   1280 missing  1

Molecule type 'DUPC'
the first 10 missing interactions, except for exclusions:

G96Angle atoms356  global   135   137   138


Does this mean that some terms are really missing?

---
Program mdrun_mpi, VERSION 4.0.4
Source code file: domdec_top.c, line: 341


Fatal error:
1 of the 4040 bonded interactions could not be calculated because some atoms 
involved moved further apart than the multi-body cut-off distance (1.2 nm) or 
the two-body cut-off distance (1.2 nm), see option -rdd, for pairs and 
tabulated bonds also see option -ddcheck

---
Maria
  

[gmx-users] viscosity and acflen in g_energy

2009-09-25 Thread Vitaly V. Chaban
Hi,

When using g_energy to calculate viscosity ("g_energy -vis"),
the ACF of the stress tensor is written to enecorr.xvg. How can one specify
the "acflen" of this ACF?
I tried "g_energy -vis -acflen XXX", but the length of the ACF is
still equal to half of the trajectory length.

Thanks,
Vitaly


[gmx-users] martini simulation problem with unsaturated lipid

2009-09-25 Thread maria goranovic
Hello

I get this error while running Martini:

A list of missing interactions:
G96Angle of   1280 missing  1

Molecule type 'DUPC'
the first 10 missing interactions, except for exclusions:
G96Angle atoms356  global   135   137   138


Does this mean that some terms are really missing?

---
Program mdrun_mpi, VERSION 4.0.4
Source code file: domdec_top.c, line: 341

Fatal error:
1 of the 4040 bonded interactions could not be calculated because some atoms
involved moved further apart than the multi-body cut-off distance (1.2 nm)
or the two-body cut-off distance (1.2 nm), see option -rdd, for pairs and
tabulated bonds also see option -ddcheck
---

Maria

Re: [gmx-users] Re: Re: Re: Re: Re: Re: Re: Re: umbrella potentia

2009-09-25 Thread Justin A. Lemkul



Stefan Hoorman wrote:

I simulate 400 ps for each window. I have a total of 20 windows. My 


400 ps is relatively short, especially given the speed of current hardware and 
of Gromacs 4.0.  I generally see longer time periods in the literature.



histogram looks like a chromatographic peak ranging from 0.74 to 0.91


Then you are not getting the separation you described before (>2.5 nm).  It 
looks like you are only pulling a distance of 0.17 nm total.


on the x axis, and the count (y axis) goes up to 3. Is there a way to 
index my histogram.xvg or my profile.xvg file and send it to the gromacs 
users list? Or is it not necessary?


The best way to send this information is to generate image files and post them 
to a free site where others can see them (Photobucket, etc.).  It is up to you to 
determine whether it's necessary.  At this point, I think you are just not 
getting what you think you are setting up.


-Justin


Thank you






--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




RE: [gmx-users] Error while scaling mdrun for more number of nodes.

2009-09-25 Thread Berk Hess

You should read the manual; look in the index for domain decomposition.
For domain decomposition the unit cell is divided into n sub-cells, n = nx*ny*nz.
If n is prime, you can only decompose in one dimension, and 57 cells in one
dimension is often not possible due to certain restrictions.
Also, 57 is inconvenient for PME, whether you use separate PME nodes or not.

Berk
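A sketch of what that looks like in practice (60 is just an example of a convenient,
non-prime core count; the PP/PME split can be left to mdrun or set by hand with -npme;
file names are placeholders):

mpirun -np 60 mpi_mdrun_d -s topol.tpr -deffnm run60
mpirun -np 60 mpi_mdrun_d -npme 15 -s topol.tpr -deffnm run60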

Date: Fri, 25 Sep 2009 15:16:40 +0530
Subject: Re: [gmx-users] Error while scaling mdrun for more number of nodes.
From: viveksharma.i...@gmail.com
To: gmx-users@gromacs.org

Hi Berk, 
Thanks for your suggestion, it is working well now. Can you explain why it 
didn't work for a prime number?


Thanks & Regards,
Vivek
2009/9/25 Berk Hess 






Hi,

Why do you want to run on exactly 57 nodes?
That is a nasty prime number.
I guess 56 or 60 nodes would work fine.

Berk

Date: Fri, 25 Sep 2009 14:39:27 +0530
From: viveksharma.i...@gmail.com

To: gmx-users@gromacs.org
Subject: [gmx-users] Error while scaling mdrun for more number of nodes.

Hi There,
I was trying to rum mdrun on large number of nodes. When I tried the run on 57 
nodes, I got an error which is pasted below.

---
Program mpi_mdrun_d, VERSION 4.0.3

Source code file: domdec_setup.c, line: 147

Fatal error:
Could not find an appropriate number of separate PME nodes. i.e. >= 
0.409991*#nodes (44) and <= #nodes/2 (57) and reasonable performance wise 
(grid_x=63, grid_y=63).  


Use the -npme option of mdrun or change the number of processors or the PME 
grid dimensions, see the manual for details.
---
then I tried with -npme option as "-npme 20", this time it failed with the 
following error.


---
Program mpi_mdrun_d, VERSION 4.0.3
Source code file: domdec.c, line: 5858

Fatal error:
There is no domain decomposition for 94 nodes that is compatible with the given 
box and a minimum cell size of 1.025 nm


Change the number of nodes or mdrun option -rcon or -dds or your LINCS settings
Look in the log file for details on the domain decomposition
---
Same system was running fine when I tried it on 4 nodes.


I havn't used gromacs4.0 very well, so i don't understand these errors.
Please suggest me a way to get out of these errors, It will be really helpful 
if anybody can explain me these errors.

With thanks in advance.



Thanks & Regards,
Vivek
  


  

Re: [gmx-users] Error while scaling mdrun for more number of nodes.

2009-09-25 Thread vivek sharma
Hi Berk,
Thanks for your suggestion, it is working well now. Can you explain why it
didn't work for a prime number?


Thanks & Regards,
Vivek
2009/9/25 Berk Hess 

>  Hi,
>
> Why do you want to run on exactly 57 nodes?
> That is a nasty prime number.
> I guess 56 or 60 nodes would work fine.
>
> Berk
>
> --
> Date: Fri, 25 Sep 2009 14:39:27 +0530
> From: viveksharma.i...@gmail.com
> To: gmx-users@gromacs.org
> Subject: [gmx-users] Error while scaling mdrun for more number of nodes.
>
>
> Hi There,
> I was trying to rum mdrun on large number of nodes. When I tried the run on
> 57 nodes, I got an error which is pasted below.
> ---
> Program mpi_mdrun_d, VERSION 4.0.3
> Source code file: domdec_setup.c, line: 147
>
> Fatal error:
> Could not find an appropriate number of separate PME nodes. i.e. >=
> 0.409991*#nodes (44) and <= #nodes/2 (57) and reasonable performance wise
> (grid_x=63, grid_y=63).
> Use the -npme option of mdrun or change the number of processors or the PME
> grid dimensions, see the manual for details.
> ---
> then I tried with -npme option as "-npme 20", this time it failed with the
> following error.
> ---
> Program mpi_mdrun_d, VERSION 4.0.3
> Source code file: domdec.c, line: 5858
>
> Fatal error:
> There is no domain decomposition for 94 nodes that is compatible with the
> given box and a minimum cell size of 1.025 nm
> Change the number of nodes or mdrun option -rcon or -dds or your LINCS
> settings
> Look in the log file for details on the domain decomposition
> ---
> Same system was running fine when I tried it on 4 nodes.
> I havn't used gromacs4.0 very well, so i don't understand these errors.
> Please suggest me a way to get out of these errors, It will be really
> helpful if anybody can explain me these errors.
>
> With thanks in advance.
>
> Thanks & Regards,
> Vivek
>
> --
>
>

RE: [gmx-users] Error while scaling mdrun for more number of nodes.

2009-09-25 Thread Berk Hess

Hi,

Why do you want to run on exactly 57 nodes?
That is a nasty prime number.
I guess 56 or 60 nodes would work fine.

Berk

Date: Fri, 25 Sep 2009 14:39:27 +0530
From: viveksharma.i...@gmail.com
To: gmx-users@gromacs.org
Subject: [gmx-users] Error while scaling mdrun for more number of nodes.

Hi There,
I was trying to rum mdrun on large number of nodes. When I tried the run on 57 
nodes, I got an error which is pasted below.
---
Program mpi_mdrun_d, VERSION 4.0.3

Source code file: domdec_setup.c, line: 147

Fatal error:
Could not find an appropriate number of separate PME nodes. i.e. >= 
0.409991*#nodes (44) and <= #nodes/2 (57) and reasonable performance wise 
(grid_x=63, grid_y=63).  

Use the -npme option of mdrun or change the number of processors or the PME 
grid dimensions, see the manual for details.
---
then I tried with -npme option as "-npme 20", this time it failed with the 
following error.

---
Program mpi_mdrun_d, VERSION 4.0.3
Source code file: domdec.c, line: 5858

Fatal error:
There is no domain decomposition for 94 nodes that is compatible with the given 
box and a minimum cell size of 1.025 nm

Change the number of nodes or mdrun option -rcon or -dds or your LINCS settings
Look in the log file for details on the domain decomposition
---
Same system was running fine when I tried it on 4 nodes.

I havn't used gromacs4.0 very well, so i don't understand these errors.
Please suggest me a way to get out of these errors, It will be really helpful 
if anybody can explain me these errors.

With thanks in advance.


Thanks & Regards,
Vivek
  

[gmx-users] Error while scaling mdrun for more number of nodes.

2009-09-25 Thread vivek sharma
Hi There,
I was trying to run mdrun on a large number of nodes. When I tried the run on
57 nodes, I got the error pasted below.
---
Program mpi_mdrun_d, VERSION 4.0.3
Source code file: domdec_setup.c, line: 147

Fatal error:
Could not find an appropriate number of separate PME nodes. i.e. >=
0.409991*#nodes (44) and <= #nodes/2 (57) and reasonable performance wise
(grid_x=63, grid_y=63).
Use the -npme option of mdrun or change the number of processors or the PME
grid dimensions, see the manual for details.
---
Then I tried the -npme option as "-npme 20"; this time it failed with the
following error.
---
Program mpi_mdrun_d, VERSION 4.0.3
Source code file: domdec.c, line: 5858

Fatal error:
There is no domain decomposition for 94 nodes that is compatible with the
given box and a minimum cell size of 1.025 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS
settings
Look in the log file for details on the domain decomposition
---
The same system was running fine when I tried it on 4 nodes.
I haven't used GROMACS 4.0 very much, so I don't understand these errors.
Please suggest a way to get past these errors; it would be really
helpful if anybody could explain them to me.

With thanks in advance.

Thanks & Regards,
Vivek

Re: [gmx-users] Problem of g_rms

2009-09-25 Thread Tsjerk Wassenaar
Hi Nikhil,

Try extracting the frame just before and just after the jump and view
them in pymol/vmd/rasmol/... to check for a possible cause.

Cheers,

Tsjerk
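A sketch of how to pull those frames out, assuming the jump is seen around t = 6500 ps
(the times and file names are placeholders):

trjconv -s topol.tpr -f traj.xtc -dump 6480 -o before_jump.pdb
trjconv -s topol.tpr -f traj.xtc -dump 6520 -o after_jump.pdb
# since periodicity is the usual suspect, a PBC-corrected trajectory for g_rms:
trjconv -s topol.tpr -f traj.xtc -pbc nojump -o traj_nojump.xtc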

On Fri, Sep 25, 2009 at 5:50 AM, nikhil damle  wrote:
> Yes. I am correcting the trajectory for periodicity
>
> Regards,
> Nikhil
>
> 
> From: Justin A. Lemkul 
> To: Discussion list for GROMACS users 
> Sent: Wednesday, 23 September, 2009 3:48:47 PM
> Subject: Re: [gmx-users] Problem of g_rms
>
>
>
> nikhil damle wrote:
>> Hi all,
>>
>>      I am facing a problem while calculating the backbone RMSD over 30 ns.
>> Up to ~6-7 ns g_rms gives correct RMSDs, and later all RMSD values are
>> unexpectedly and unusually high (within 20 ps the RMSD value shoots up by ~8 A).
>> But when I calculate the RMSD using the g_confrms program, it gives me the expected
>> and usual RMSD. I tried running g_rms separately for the particular time
>> range that was giving me high values, only to get the same result once again. Is
>> there a problem with g_rms, or am I calculating it wrongly?
>>
>
> Are you correcting the trajectory for periodicity?
>
> -Justin
>
>> Regards,
>> Nikhil
>>
>> 
>>
>>
>> 
>>
>
> -- 
>
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> 
>
> 
>



-- 
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623


RE: [gmx-users] BUG in GROMACS 4.0.5, related to very long total integration time

2009-09-25 Thread Berk Hess



> Date: Fri, 25 Sep 2009 08:23:37 +1000
> From: mark.abra...@anu.edu.au
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] BUG in GROMACS 4.0.5, related to very long 
> total integration time
> 
> Daniel Adriano Silva M wrote:
> > Dear GROMACS users and developers.
> > 
> > I don't know if this issue has been previously addressed, but I found
> > that when I try to run an MD with a time-step of 2 fs and 25 steps
> > (yes, 500 us!!!) the dynamics aborts with the following message (64-bit LINUX
> > and icc 10 compiler):
> > 
> > 
> > WARNING: This run will generate roughly 20576946697451257856 Mb of data
> > 
> > starting mdrun 'F1-ATPASE'
> > -1794967296 steps, -3589934.8 ps.
> > 
> > nodetime = 0! Infinite Giga flopses!
> > Parallel run - timing based on wallclock.
> > 
> > #
> > 
> > If I reduce the number of steps by one order of magnitude then all goes
> > ok.  My MDP when I obtained this error was:
> > 
> > 
> > ; VARIOUS PREPROCESSING OPTIONS
> > title= NPT simulacion
> > cpp  = /lib/cpp
> > 
> > ; RUN CONTROL PARAMETERS
> > integrator   = md
> > dt   = 0.002
> > nsteps   = 25
> > nstxout  = 5000
> > nstvout  = 5000
> > nstlog   = 2500
> > nstenergy= 2500
> > nstxtcout= 2500
> > energygrps   = Protein Non-Protein
> > nstlist = 10
> > rlist   = 1.0
> > ns_type = grid
> > pbc = xyz
> > coulombtype  = pme
> > rcoulomb = 1.0
> > vdw-type = Cut-off
> > rvdw = 1.0
> > fourierspacing   =  0.12
> > pme_order=  4
> > optimize_fft =  yes
> > ewald_rtol   =  1e-5
> > tcoupl   = Berendsen
> > tc-grps  = Protein  Non-Protein
> > tau_t= 0.1  0.1
> > ref_t= 300  300
> > Pcoupl   = Parrinello-Rahman
> > Pcoupltype   = Isotropic
> > tau_p= 1.0
> > ref_p= 1.0
> > compressibility  = 4.5e-5
> > gen_vel  = no
> > constraints  = all-bonds
> > constraint-algorithm = Lincs
> > unconstrained-start  = yes
> > lincs-order  = 4
> > lincs-iter   = 1
> > lincs-warnangle  = 30
> > 
> > I know this kind of error is not a priority since the total
> > integration time is ridiculously big, but anyway I wanted to mention it to
> > you.
> 
> Yes, it's known. IIRC there was some discussion on the developers list 
> about changing the relevant data type so that it can store bigger 
> numbers. It's also a non-problem inasmuch as if you are ever able to run 
> a simulation that long, manually resetting the number of steps to zero 
> at a suitable point will be workable.

This issue has already been fixed for the 4.1 release.

Berk

  