Re: [gmx-users] Re: weird behavior of mdrun

2009-09-05 Thread Justin A. Lemkul


Have you tried my suggestion from the last message of setting frequent output? 
Could your system just be collapsing at the outset of the simulation?  Setting 
nstxout = 1 would catch something like this.


There is nothing special about treating a protein in parallel vs. a system of 
water.  Since a system of water runs just fine, it seems even more likely to me 
that your system is simply crashing immediately, rather than a problem with 
Gromacs or the MPI implementation.


-Justin

Paymon Pirzadeh wrote:

Regarding the problems I have running the protein system in parallel
(it runs without output): when I run a pure water system, everything is fine.
I have tested pure water systems eight times larger than my protein system;
while the former runs fine, the latter has problems. I have also tested
pure water systems with approximately the same number of sites in the .gro
file as in my protein .gro file, and with the same input file in terms of
writing outputs; they are fine. I would like to know what happens to
GROMACS when a protein is added to the system. The cluster admin has not
gotten back to me, but I still want to check that there is no problem with my
setup (although my system runs fine in serial mode).
Regards,

Payman



On Fri, 2009-08-28 at 16:41 -0400, Justin A. Lemkul wrote:

Payman Pirzadeh wrote:

There is something strange about this problem, which I suspect might be due to
the .mdp file and input. I can run the energy minimization without any
problems (I submit the job and it apparently works using the same submission
script)! But as soon as I prepare the .tpr file for the MD run, I run into
this run-without-output trouble.
Again, I paste my .mdp file below (I want to run an NVT run):

There isn't anything in the .mdp file that suggests you wouldn't get any output. 
  The output of mdrun is buffered, so depending on your settings, you may have 
more frequent output during energy minimization.  There may be some problem with 
the MPI implementation in buffering and communicating data properly.  That's a 
bit of a guess, but it could be happening.


Definitely check with the cluster admin to see if there are any error messages 
reported for the jobs you submitted.


Another test you could do to force a huge amount of data would be to set all of 
your outputs (nstxout, nstxtcout, etc) = 1 and run a much shorter simulation (to 
prevent massive data output!); this would force more continuous data through the 
buffer.
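
For example, a short diagnostic run along these lines (a sketch: only the
output-related settings are shown, everything else as in the .mdp below, and
the step count is just an illustrative value) would push data through the
buffer at every step:

; diagnostic output settings (sketch)
nsteps       = 1000     ; keep the run short to limit file sizes
nstxout      = 1
nstvout      = 1
nstfout      = 1
nstxtcout    = 1
nstlog       = 1
nstenergy    = 1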


-Justin


cpp  = cpp
include  = -I../top
define   = -DPOSRES

; Run control

integrator   = md
dt   = 0.001   ;1 fs
nsteps   = 300 ;3 ns
comm_mode= linear
nstcomm  = 1

;Output control

nstxout  = 5000
nstlog   = 5000
nstenergy= 5000
nstxtcout= 1500
nstvout  = 5000
nstfout  = 5000
xtc_grps =
energygrps   =

; Neighbour Searching

nstlist  = 10
ns_type  = grid
rlist= 0.9
pbc  = xyz

; Electrostatics

coulombtype  = PME
rcoulomb = 0.9
;epsilon_r= 1

; Vdw

vdwtype  = cut-off
rvdw = 1.2
DispCorr = EnerPres

;Ewald

fourierspacing  = 0.12
pme_order   = 4
ewald_rtol  = 1e-6
optimize_fft= yes

; Temperature coupling

tcoupl   = v-rescale
ld_seed  = -1
tc-grps  = System
tau_t= 0.1
ref_t= 275

; Pressure Coupling

Pcoupl   = no
;Pcoupltype   = isotropic
;tau_p= 1.0
;compressibility  = 5.5e-5
;ref_p= 1.0
gen_vel  = yes
gen_temp = 275
gen_seed = 173529
constraint-algorithm = Lincs
constraints  = all-bonds
lincs-order  = 4

Regards,

Payman
 


-Original Message-
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org]
On Behalf Of Mark Abraham
Sent: August 27, 2009 3:32 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Re: weird behavior of mdrun

Vitaly V. Chaban wrote:

Then I believe you have problems with MPI.

Before, I experienced something similar on our old system: the serial
version worked OK but the parallel one failed. The same issue occurred with
CPMD, by the way. Other programs worked fine. I didn't fix that
problem...

On Thu, Aug 27, 2009 at 7:14 PM, Paymon Pirzadeh

wrote:

Yes,
it works when it is run on one processor interactively!
That's fine, but it doesn't mean the problem is with the parallelism, as 
Vitaly suggests. If your cluster filesystem isn't configured properly, 
you will observe these symptoms. Since the submission script was the same 
and MPI worked previously, MPI isn't likely to be the problem...


Mark


On Thu, 2009-08-27 at 09:23 +0300, Vitaly V. Chaban wrote:

I made a .tpr file for my md run without any problems (using the bottom
mdp file). My job submission script is also the same thing I used for
other jobs which had no problems. But now when I submit this .tpr file,
only an emp

[gmx-users] grompp error in peptide-membrane simulations

2009-09-05 Thread afsaneh maleki
Hi,
I am working on a membrane peptide simulation with the lipid DOPC. I have
downloaded lipid.itp and dopc.itp from the same site. When I run grompp:

grompp -f em.mdp -c complex.gro -o em.tpr -p complex.top

it gives me:

Fatal error:
Atomtype LC3 not found! (this is an atom type of the lipid)

This is my complex.top file:
#include  "protein.itp"
#include "dopc.itp"
#include "lipid.itp"
#include "tip3p.itp"
#include "ions.itp"
[system]
;name
protein on sur+relaxed dopc
[molecules]
; name       number
Protein  1
DOPC   128
SOL  4086
SOD  6
CLA   8
---
This atom type (LC3) is in the dopc.itp and lipid.itp files, but it is not
found in ffG43a2.rtp and .atp.
I'm sure the structure file does not differ from the .itp files in terms of
the number of atoms, the atom names, or their order.

Any help will be highly appreciated.

-- 
Afsaneh Maleki
PhD student of physical chemistry
Department of chemistry, Isfahan Univ. of Tech.
Isfahan 84156-83111, Iran

Re: [gmx-users] grompp error in peptide-membrane simulations

2009-09-05 Thread Justin A. Lemkul



afsaneh maleki wrote:

Hi,
I am working on a membrane peptide simulation with the lipid DOPC. I have 
downloaded lipid.itp and dopc.itp from the same site. When I run grompp:

grompp -f em.mdp -c complex.gro -o em.tpr -p complex.top

it gives me:

Fatal error:
Atomtype LC3 not found! (this is an atom type of the lipid)

This is my complex.top file:
#include  "protein.itp"
#include "dopc.itp"
#include "lipid.itp"
#include "tip3p.itp"
#include "ions.itp"
[system]
;name
protein on sur+relaxed dopc
[molecules]
; name       number
Protein  1
DOPC   128
SOL  4086
SOD  6
CLA   8
---
This atom type (LC3) is in the dopc.itp and lipid.itp files, but it is not 
found in ffG43a2.rtp and .atp.
I'm sure the structure file does not differ from the .itp files in terms of 
the number of atoms, the atom names, or their order.

Any help will be highly appreciated.



Your topology file is set up incorrectly.  Might I suggest the tutorial I wrote 
for membrane protein simulations:


http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/membrane_protein/index.html

There is also helpful information on the wiki site (http://oldwiki.gromacs.org) 
about running membrane simulations and doing analysis.
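
A common cause of the "Atomtype LC3 not found" error is that grompp never sees
the [atomtypes] section defining LC3 (in lipid.itp) before the molecule
definitions that use it. A sketch of an include order that addresses this,
keeping the file names from the post (the ffG43a2.itp include is an
assumption, based on the force field mentioned there; whether the lipid atom
types are compatible with that force field still needs to be checked against
the tutorial above):

#include "ffG43a2.itp"   ; force field first (assumed file name)
#include "lipid.itp"     ; lipid [atomtypes] (LC3 etc.) before any moleculetype
#include "dopc.itp"
#include "protein.itp"
#include "tip3p.itp"
#include "ions.itp"

[ system ]
protein on sur+relaxed dopc

[ molecules ]
; name       number
Protein           1
DOPC            128
SOL            4086
SOD               6
CLA               8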


-Justin


--
Afsaneh Maleki
PhD student of physical chemistry
Department of chemistry, Isfahan Univ. of Tech.
Isfahan 84156-83111, Iran






--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] PMF and pull with membrane helix

2009-09-05 Thread XAvier Periole


On Sep 5, 2009, at 7:42 AM, Ragnarok sdf wrote:


Hi
I am trying to calculate the PMF by pulling two membrane protein  
monomers apart using the pull code with umbrella sampling. While  
trying to generate my reaction path, no matter how slowly I pull, my  
helix starts bending (not really bending, but it kind of tends to  
transform into a "C" shape inside my bilayer). Since I am only  
simulating a small part of what would be a very big transmembrane  
receptor, I thought about restraining the movement along the z axis  
of my terminal residues, sort of simulating the "weight" that would  
exist if the entire intracellular and extracellular domains were  
there. My protocol to obtain the PMF is to use this first pulling  
protocol only to generate the different windows (distance between  
the two monomers) in each of which I would use the umbrella sampling  
(maintaining the force constant and switching off the pull_rate) to  
generate data to perform WHAM analysis.

So that leaves me with two questions.
First is: would I, by restraining the movement along the Z axis,
create artifacts that would be computed and ruin my PMF calculation?

Technically speaking, no. But you should correct for the energy of
imposing the alignment with the z axis.

And second: Would this be a correct PMF protocol?

That would just give you the PMF of the two transmembrane segments given
their fixed relative orientations, which might be questionable in regard
to its relevance, but maybe not!

It is, however, strange that you have this C shape even with the slow
pulling. You might want to check your parameters/procedure. You might
need a long period of "relaxation/equilibration" to remove the C shape,
which suggests that you are still pulling too fast!

XAvier.

Thank you in advance
Fabrício Bracht
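
For reference, the per-window stage of the protocol described above (force
constant kept, pull rate switched off) might look like the sketch below in a
GROMACS 4.0-style .mdp. The group names, force constant, and output intervals
are placeholders rather than values from this thread, and the z-restraint on
the terminal residues would be set up separately (e.g. with position
restraints on those atoms).

; umbrella-sampling window (sketch): hold the inter-monomer distance and sample
pull            = umbrella
pull_geometry   = distance
pull_dim        = Y Y N       ; in-plane (x,y) separation of the two helices
pull_start      = yes         ; use the window's starting distance as reference
pull_ngroups    = 1
pull_group0     = Helix_A     ; placeholder index-group names
pull_group1     = Helix_B
pull_rate1      = 0.0         ; no pulling within the window
pull_k1         = 1000        ; kJ mol^-1 nm^-2, placeholder force constant
pull_nstxout    = 500
pull_nstfout    = 500

The pull output from each window then provides the data for the WHAM analysis.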




[gmx-users] Re: DMF topology file

2009-09-05 Thread Vitaly V. Chaban
Hi Abhishek,

I know two possible ways. The first is to find a PDB file of DMF (one
molecule) and then use the x2top program of the GROMACS package. When asked
for a force field, OPLS/AA can be selected. DMF contains only common atoms,
so I expect there will be no problems generating the topology automatically.
The second way is to build the topology by hand; with enough experience,
that can even be enjoyable. :)
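
As a rough sketch of the first route (the exact tool name and flags depend on
the GROMACS version, e.g. x2top in 3.x vs. g_x2top in 4.x, so check the -h
output; dmf.pdb and the -ff value are assumptions here):

# generate a starting OPLS/AA topology for a single DMF molecule
g_x2top -f dmf.pdb -o dmf.top -ff oplsaa

The generated atom types and charges should still be checked against the
OPLS/AA reference before running.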

Good luck!
Vitaly

-- 
Vitaly V. Chaban, Ph.D. (ABD)
School of Chemistry
V.N. Karazin Kharkiv National University
Svoboda sq.,4, Kharkiv 61077, Ukraine
email: cha...@univer.kharkov.ua,vvcha...@gmail.com
skype: vvchaban, cell.: +38-097-8259698
http://www-rmn.univer.kharkov.ua/chaban.html
===
!!! Looking for a postdoctoral position !!!
===

On Sat, Sep 5, 2009 at 3:20 PM, Abhishek Banerjee wrote:

> hi Vitaly,
> Thanks for your help. I have created a DMF box. Now I want
> to do NPT and NVT runs on it. For that I want to use the OPLS/AA force
> field; I have found some reference papers on it. How can I create an .itp
> or .top file for DMF? If you could give me some hints about the steps to
> create a topology file for DMF (OPLS/AA), it would be a great help for me.
> thanks
> abhishek
>
> --
>

RE: {Spam?} Re: [gmx-users] Re: weird behavior of mdrun

2009-09-05 Thread Payman Pirzadeh
I have checked my job on another cluster, which is dual-core. On one node, it
just ran fine! I think something might be wrong with the cluster settings. I
will try the output settings you suggested as well. I will keep you posted.

Payman

-Original Message-
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org]
On Behalf Of Justin A. Lemkul
Sent: Friday, September 04, 2009 5:29 PM
To: Gromacs Users' List
Subject: {Spam?} Re: [gmx-users] Re: weird behavior of mdrun


Have you tried my suggestion from the last message of setting frequent
output? 
Could your system just be collapsing at the outset of the simulation?
Setting 
nstxout = 1 would catch something like this.

There is nothing special about treating a protein in parallel vs. a system
of 
water.  Since a system of water runs just fine, it seems even more likely to
me 
that your system is simply crashing immediately, rather than a problem with 
Gromacs or the MPI implementation.

-Justin

Paymon Pirzadeh wrote:
> Regarding the problems I have running the protein system in parallel
> (it runs without output): when I run a pure water system, everything is
> fine. I have tested pure water systems eight times larger than my protein
> system; while the former runs fine, the latter has problems. I have also
> tested pure water systems with approximately the same number of sites in
> the .gro file as in my protein .gro file, and with the same input file in
> terms of writing outputs; they are fine. I would like to know what happens
> to GROMACS when a protein is added to the system. The cluster admin has
> not gotten back to me, but I still want to check that there is no problem
> with my setup (although my system runs fine in serial mode).
> Regards,
> 
> Payman
> 
> 
> 
> On Fri, 2009-08-28 at 16:41 -0400, Justin A. Lemkul wrote:
>> Payman Pirzadeh wrote:
>>> There is something strange about this problem, which I suspect might be
>>> due to the .mdp file and input. I can run the energy minimization without
>>> any problems (I submit the job and it apparently works using the same
>>> submission script)! But as soon as I prepare the .tpr file for the MD run,
>>> I run into this run-without-output trouble.
>>> Again, I paste my .mdp file below (I want to run an NVT run):
>>>
>> There isn't anything in the .mdp file that suggests you wouldn't get any
>> output.  The output of mdrun is buffered, so depending on your settings,
>> you may have more frequent output during energy minimization.  There may
>> be some problem with the MPI implementation in buffering and communicating
>> data properly.  That's a bit of a guess, but it could be happening.
>>
>> Definitely check with the cluster admin to see if there are any error
>> messages reported for the jobs you submitted.
>>
>> Another test you could do to force a huge amount of data would be to set
>> all of your outputs (nstxout, nstxtcout, etc) = 1 and run a much shorter
>> simulation (to prevent massive data output!); this would force more
>> continuous data through the buffer.
>>
>> -Justin
>>
>>> cpp  = cpp
>>> include  = -I../top
>>> define   = -DPOSRES
>>>
>>> ; Run control
>>>
>>> integrator   = md
>>> dt   = 0.001   ;1 fs
>>> nsteps   = 300 ;3 ns
>>> comm_mode= linear
>>> nstcomm  = 1
>>>
>>> ;Output control
>>>
>>> nstxout  = 5000
>>> nstlog   = 5000
>>> nstenergy= 5000
>>> nstxtcout= 1500
>>> nstvout  = 5000
>>> nstfout  = 5000
>>> xtc_grps =
>>> energygrps   =
>>>
>>> ; Neighbour Searching
>>>
>>> nstlist  = 10
>>> ns_type  = grid
>>> rlist= 0.9
>>> pbc  = xyz
>>>
>>> ; Electrostatics
>>>
>>> coulombtype  = PME
>>> rcoulomb = 0.9
>>> ;epsilon_r= 1
>>>
>>> ; Vdw
>>>
>>> vdwtype  = cut-off
>>> rvdw = 1.2
>>> DispCorr = EnerPres
>>>
>>> ;Ewald
>>>
>>> fourierspacing  = 0.12
>>> pme_order   = 4
>>> ewald_rtol  = 1e-6
>>> optimize_fft= yes
>>>
>>> ; Temperature coupling
>>>
>>> tcoupl   = v-rescale
>>> ld_seed  = -1
>>> tc-grps  = System
>>> tau_t= 0.1
>>> ref_t= 275
>>>
>>> ; Pressure Coupling
>>>
>>> Pcoupl   = no
>>> ;Pcoupltype   = isotropic
>>> ;tau_p= 1.0
>>> ;compressibility  = 5.5e-5
>>> ;ref_p= 1.0
>>> gen_vel  = yes
>>> gen_temp = 275
>>> gen_seed = 173529
>>> constraint-algorithm = Lincs
>>> constraints  = all-bonds
>>> lincs-order  = 4
>>>
>>> Regards,
>>>
>>> Payman
>>>  
>>>
>>> -Original Message-
>>> From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org]
>>> On Behalf Of Mark Abraham
>>> Sent: August 27, 2009 3:32 PM
>>> To: Discussion list for GROMACS users
>>> Subject: Re: [gmx-users] Re: weird behavior of mdrun
>>>
>>> Vitaly V. Chaban wrote:
>>>> Then I believe you have problems with MPI.
>