Re: [gmx-users] The "correct" way to equilibrate a membrane / lipid bilayer w/water?

2011-03-24 Thread Justin A. Lemkul



Peter C. Lai wrote:
Hi 


I haven't played with gromacs in years, so I'm pretty new to working
with large systems like lipid bilayers.

What is the "correct" or lets say canonical method to equilibrate a 
membrane patch of say 9x9 nm POPC with sufficient vertical waters to take

care of pbc and long range interactions (> 1.5nm above/below the bilayer)
using Tom's Charmm36 FF port.

I am seeing a lot of different methods out there to construct and
equilibrate such a system in preparation for g_embed or whatever.

I see from 
http://www.mail-archive.com/gmx-users@gromacs.org/msg33812.html
the ability to use genbox to replicate Tieleman's 128-lipid patch


but I can also build my bilayer inside VMD, which supports charmm36->pdb
atom names (and things seem to work fine).

I have yet to see anybody's run parameters and constraints for running
the equilibration, particularly using charmm36 (using vdwtype=switch).
Should I constrain all heavy atoms in the lipid during the NVT run, or
just the polar group (or even just P) and allow the tails to "melt" as seen
elsewhere? Should I only set Z-axis constraints? How long should I run in
NVT before switching to NPT, and for how long (10 ns?) And should I change
the constraints moving from NVT to NPT?


As a point of clarity, "constraints" and "restraints" serve very different 
functions in Gromacs:


http://www.gromacs.org/Documentation/Terminology/Constraints_and_Restraints

I presume you mean "restraints" in this context.  I see no point in restraining 
the lipids completely during equilibration.  It sort of defeats the purpose. 
Depending on how reasonable the starting configuration is, restraints on any 
group may not be necessary at all.  Poor starting configurations, particularly 
with inadequate solvation within the interfacial region, often require vertical 
position restraints on P atoms to prevent the membrane from separating. 
Restraining the entire headgroup probably restricts lipid motion too much.
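As an illustration, such a z-only restraint would look something like this in a 
restraint .itp file (the atom index is hypothetical and must match the P atom in 
your lipid topology; force constants in kJ mol^-1 nm^-2):

  [ position_restraints ]
  ;  ai   funct   fcx   fcy   fcz
      8       1     0     0   1000

With fcx = fcy = 0 the lipids remain free to diffuse laterally while vertical 
separation of the leaflets is suppressed.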


There is no standard timeframe for equilibration.  Rotational relaxation occurs 
within a very short amount of time, translational within 10-20 ns for most 
lipids.  You should pay close attention to diffusion constants, membrane 
thickness, area per lipid, etc. to judge the quality of equilibration.
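For example (a sketch; file names and index groups depend on your setup), the 
standard tools cover most of these properties:

  g_msd -f traj.xtc -s topol.tpr -lateral z    (lateral lipid diffusion)
  g_energy -f ener.edr -o box.xvg              (Box-X and Box-Y terms)

Area per lipid is then Box-X * Box-Y divided by the number of lipids per leaflet.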


How should the thermostat be coupled? System or separate the water from the 
lipid?




This consideration is uniform among just about all systems and is a function of 
the thermostat rather than the system itself.  Couple water and lipids 
separately, just as you would couple protein and solvent separately for most 
thermostats.


COM motion should be removed separately for water and lipids as well, to prevent 
lateral sliding that is hidden through global COM motion removal.
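In .mdp terms, the above amounts to something like the following (a sketch; the 
group names assume an index file that defines them, and the temperatures are 
illustrative):

  tc-grps    = POPC   SOL
  tau-t      = 0.1    0.1
  ref-t      = 310    310
  comm-mode  = Linear
  comm-grps  = POPC   SOL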



I experienced the same thing as Ng Hui Wen back in October with waters
being extremely dehomogenized (and diffusing out of the box) after 100 ps
NVT with all POPC heavy atoms constrained at 1000 kJ mol^-1 nm^-2 for all
axes. I am sure that moving to NPT will restore the waters but still it was
a bit scary to see for the first run.




That is probably an artifact of restraining all lipid atoms and not allowing for 
proper response between water and lipids.



If a lot of these approaches haven't been published, maybe we can just
stick them in the wiki to let everyone "pick their poison" when it comes
to equilibrating their bilayers.



I can think of at least a dozen plausible methods for equilibrating a lipid 
bilayer system, so simply listing them and telling users to pick would probably 
be more confusing, especially if they don't know what they're doing.  As with any 
equilibration of any system, you have to judge (1) whether the desired 
thermodynamic ensemble is achieved and (2) whether the properties of the system 
of interest are established.  With these general rules in mind, the user is free 
to design their own procedure.


I wrote a membrane protein tutorial a long time ago that outlines many of these 
principles and discusses several more.


http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/membrane_protein/index.html

Even though your purposes may not involve a protein or the same force field, you 
may find some of the general information useful.


-Justin


Thanks for any ideas/hints/suggestions



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



[gmx-users] The "correct" way to equilibrate a membrane / lipid bilayer w/water?

2011-03-24 Thread Peter C. Lai
Hi 

I haven't played with gromacs in years, so I'm pretty new to working
with large systems like lipid bilayers.

What is the "correct" or lets say canonical method to equilibrate a 
membrane patch of say 9x9 nm POPC with sufficient vertical waters to take
care of pbc and long range interactions (> 1.5nm above/below the bilayer)
using Tom's Charmm36 FF port.

I am seeing a lot of different methods out there to construct and
equilibrate such a system in preparation for g_embed or whatever.

I see from 
http://www.mail-archive.com/gmx-users@gromacs.org/msg33812.html
the ability to use genbox to replicate Tieleman's 128-lipid patch

but I can also build my bilayer inside VMD, which supports charmm36->pdb
atom names (and things seem to work fine).

I have yet to see anybody's run parameters and constraints for running
the equilibration, particularly using charmm36 (using vdwtype=switch).
Should I constrain all heavy atoms in the lipid during the NVT run, or
just the polar group (or even just P) and allow the tails to "melt" as seen
elsewhere? Should I only set Z-axis constraints? How long should I run in
NVT before switching to NPT, and for how long (10 ns?) And should I change
the constraints moving from NVT to NPT?
How should the thermostat be coupled? System or separate the water from the 
lipid?

I experienced the same thing as Ng Hui Wen back in October with waters
being extremely dehomogenized (and diffusing out of the box) after 100 ps
NVT with all POPC heavy atoms constrained at 1000 kJ mol^-1 nm^-2 for all
axes. I am sure that moving to NPT will restore the waters but still it was
a bit scary to see for the first run.

If a lot of these approaches haven't been published, maybe we can just
stick them in the wiki to let everyone "pick their poison" when it comes
to equilibrating their bilayers.

Thanks for any ideas/hints/suggestions

-- 
===
Peter C. Lai | University of Alabama-Birmingham
Programmer/Analyst   | BEC 257
Genetics, Div. of Research   | 1150 10th Avenue South
p...@uab.edu  | Birmingham AL 35294-4461
(205) 690-0808   |
===



Re: [gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Mark Abraham

On 25/03/2011 8:36 AM, Adam Herbst wrote:

Thank you all for your help!  I ran

  make distclean

and re-configured in the GROMACS source folder via

  sudo ./configure --prefix=$SOFT --with-gsl --enable-threads 
CFLAGS="$CFLAGS -I$SOFT/include" LDFLAGS="$LDFLAGS -L$SOFT/lib"

  sudo make -j $NCPU
  sudo make install


As a general rule, one should minimize the use of super-user accounts. 
You never know when some idiot has messed up the configure script, and 
running it as super-user can trash your machine. It is safer to maintain 
the source code in a directory writable by whoever will be building it. 
Now "configure" and "make" can be run as a normal user. Only the step 
that needs write access to system areas needs sudo, i.e. "make install".
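In other words, for the commands quoted above:

  ./configure --prefix=$SOFT --with-gsl --enable-threads CFLAGS="$CFLAGS -I$SOFT/include" LDFLAGS="$LDFLAGS -L$SOFT/lib"
  make -j $NCPU
  sudo make install

Only the last step needs the super-user password.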


Another good idea is to keep an unmodified version of GROMACS separate 
from your modified one. Then you just source the GMXRC from the 
version you want to use, and if you run into some problem, you can 
sometimes immediately establish that it wasn't the fault of your changed 
code. Alternatively or additionally, using "./configure --program-suffix=_m" 
(or whatever) gives you confirmation of which version you are using every 
time you use it.
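For example (the suffix string is arbitrary):

  ./configure --program-suffix=_m [your other options]
  make -j $NCPU
  sudo make install

which installs your modified binaries as mdrun_m, grompp_m, etc., alongside 
the unmodified ones.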


Mark

where SOFT is the installation directory for FFTW3 and GSL, and NCPU 
is the number of processors (24) in my machine.  I found if I didn't 
specify the CFLAGS and LDFLAGS when running configure (even if they 
were already environment variables with the proper paths), I got an 
error that the FFTW3 header file or library couldn't be found.


Now when I run mdrun, it automatically uses multiple threads and goes 
just as fast as before my changes.  Thanks again,


Adam


On Thu, Mar 24, 2011 at 9:00 AM, Mark Abraham wrote:


On 24/03/2011 11:51 PM, Adam Herbst wrote:

Dear GROMACS users,
I successfully installed GROMACS 4.5.1 several months ago on a
Mac Pro with 12 CPUs, and the "mdrun" command (not "mpirun
mdrun_mpi") allows parallel simulations--it automatically uses
multiple processors, while the number of processors can be
manually specified as N with the flag "mdrun -nt N".  I
understand that this is a feature of GROMACS 4 and later.


Yes, threading, enabled by default, and mutually incompatible with
MPI.


Now I am making minor changes to the mdrun source code, and I
want to recompile such that the parallel version of mdrun is
updated with my changes.  But when I run:

 make mdrun (or just make)
 make install-mdrun (or just make install)

from the top-level source directory, the only executables that
are updated are the ones with the _mpi suffix, such as
mdrun_mpi.  The version of mdrun in src/kernel/ is updated,
but this one has no -nt flag and cannot seem to run on
multiple processors.  And when I run


Subsequently you have configured with --enable-mpi, so threading
is disabled. Now everything is probably a mess.



 mpirun -np N mdrun_mpi [options],

the same simulation is started separately on each processor,
leading to a crash.  If I use

 mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],

I get an error message that this is not supported on my
computer ("An attempt to set processor affinity has failed").

I can't configure the input .tpr file for parallel because
grompp doesn't have the -np flag in GROMACS 4.

How can I update the parallel-capable "mdrun" executable with
my changes?


Run "make distclean" and then re-configure.

Mark






[gmx-users] surface tension vs. system size

2011-03-24 Thread Elisabeth
Dear all,

I performed surface tension calculations vs. different system sizes (NVT; the
mdp file is included at the end). The reported surface tension for the
hydrocarbon I am studying is 18 mN/m at 20 C. I am getting ~175 bar nm from
g_energy, which means a surface tension of around 9 mN/m. For box size 3 3 6 nm
I get 202 bar nm, which is 10 mN/m and is closer to the reported one.
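(The conversion here, assuming a slab with n = 2 liquid/vapour interfaces:
gamma = (#Surf*SurfTen)/n = 175/2 bar nm = 87.5 bar nm ~ 8.8 mN/m, since
1 bar nm = 10^5 Pa * 10^-9 m = 0.1 mN/m.)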

0- Are my results reasonable? Is it possible to get closer to the actual
surface tension, or can the current difference not be improved by molecular
dynamics?

1- I am just wondering how I can decide on the system size given the runs
performed (shown below). Which system can I stick to?

2- Can I conclude that the surface tension I am getting is an equilibrated
one? The RMSD is much larger than the average!!

There are 125 molecules in a 3 3 3 nm box initially. For the 6 6 ? runs, 3 3 3
was replicated in X and Y (that is, 4*125 molecules).

box size 6 6 18 nm (molecules fill up a volume of 6 6 3 and the Z direction is
increased to 18, so the total size is 6 6 18)

Statistics over 898601 steps [ 0. through 1797.2001 ps ], 1 data sets
All statistics are over 898601 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   175.32     2.9        1813.06    -17.41
(kJ/mol)

Statistics over 818001 steps [ 0. through 1636.0001 ps ], 1 data sets
All statistics are over 818001 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   175.482    4.9        1814.64    -19.6398
(kJ/mol)

Statistics over 569501 steps [ 500. through 1639.0001 ps ], 1 data sets
All statistics are over 569501 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   175.459    5          1815.85    -31.5002
(kJ/mol)

Statistics over 321001 steps [ 1000.0001 through 1642.0001 ps ], 1 data sets
All statistics are over 321001 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   168.128    3.9        1812.23    -12.0096
(kJ/mol)

box size 6 6 8 nm (molecules fill up a volume of 6 6 3 and the Z direction is
increased to 8, so the total size is 6 6 8)

Statistics over 1000001 steps [ 0. through 2000.0001 ps ], 1 data sets
All statistics are over 1000001 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   177.274    3.9        1819.27    -0.465725
(kJ/mol)

Statistics over 750001 steps [ 500. through 2000.0001 ps ], 1 data sets
All statistics are over 750001 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   176.247    3.7        1817.88    12.2119
(kJ/mol)

---
box size 6 6 6 nm (molecules fill up a volume of 6 6 3 and the Z direction is
increased to 6, so the total size is 6 6 6)

Statistics over 532601 steps [ 0. through 1065.2001 ps ], 1 data sets
All statistics are over 532601 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   175.137    2.2        1816.63    -2.51592
(kJ/mol)

Statistics over 282601 steps [ 500. through 1065.2001 ps ], 1 data sets
All statistics are over 282601 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   178.413    7.6        1815.25    -21.8612
(kJ/mol)

---
box size 3 3 9 nm (molecules fill up a volume of 3 3 3 and the Z direction is
increased to 9, so the total size is 3 3 9)

Statistics over 1000001 steps [ 0. through 2000.0001 ps ], 1 data sets
All statistics are over 1000001 points

Energy          Average    Err.Est.   RMSD       Tot-Drift
---
#Surf*SurfTen   180.985    8.2        3621.55    18.4731
(kJ/mol)

Statistics over 750001 steps [ 500. through 2000.0001 ps ], 1 data sets
All statistics are over 750001 points

Re: [gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Adam Herbst
Thank you all for your help!  I ran

  make distclean

and re-configured in the GROMACS source folder via

  sudo ./configure --prefix=$SOFT --with-gsl --enable-threads
CFLAGS="$CFLAGS -I$SOFT/include" LDFLAGS="$LDFLAGS -L$SOFT/lib"
  sudo make -j $NCPU
  sudo make install

where SOFT is the installation directory for FFTW3 and GSL, and NCPU is the
number of processors (24) in my machine.  I found if I didn't specify the
CFLAGS and LDFLAGS when running configure (even if they were already
environment variables with the proper paths), I got an error that the FFTW3
header file or library couldn't be found.

Now when I run mdrun, it automatically uses multiple threads and goes just
as fast as before my changes.  Thanks again,

Adam


On Thu, Mar 24, 2011 at 9:00 AM, Mark Abraham wrote:

> On 24/03/2011 11:51 PM, Adam Herbst wrote:
>
>> Dear GROMACS users,
>> I successfully installed GROMACS 4.5.1 several months ago on a Mac Pro
>> with 12 CPUs, and the "mdrun" command (not "mpirun mdrun_mpi") allows
>> parallel simulations--it automatically uses multiple processors, while the
>> number of processors can be manually specified as N with the flag "mdrun -nt
>> N".  I understand that this is a feature of GROMACS 4 and later.
>>
>
> Yes, threading, enabled by default, and mutually incompatible with MPI.
>
>
>  Now I am making minor changes to the mdrun source code, and I want to
>> recompile such that the parallel version of mdrun is updated with my
>> changes.  But when I run:
>>
>>  make mdrun (or just make)
>>  make install-mdrun (or just make install)
>>
>> from the top-level source directory, the only executables that are updated
>> are the ones with the _mpi suffix, such as mdrun_mpi.  The version of mdrun
>> in src/kernel/ is updated, but this one has no -nt flag and cannot seem to
>> run on multiple processors.  And when I run
>>
>
> Subsequently you have configured with --enable-mpi, so threading is
> disabled. Now everything is probably a mess.
>
>
>
>>  mpirun -np N mdrun_mpi [options],
>>
>> the same simulation is started separately on each processor, leading to a
>> crash.  If I use
>>
>>  mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],
>>
>> I get an error message that this is not supported on my computer ("An
>> attempt to set processor affinity has failed").
>>
>> I can't configure the input .tpr file for parallel because grompp doesn't
>> have the -np flag in GROMACS 4.
>>
>> How can I update the parallel-capable "mdrun" executable with my changes?
>>
>
> Run "make distclean" and then re-configure.
>
> Mark
>

Re: [gmx-users] Problems with a three domain protein in a membrane

2011-03-24 Thread Justin A. Lemkul



Justin A. Lemkul wrote:



Dr. Ramón Garduño-Juárez wrote:

Dear all,
Dear Justin,

We have been working on setting up an MD experiment in which our 
system is made of three separate domains of a protein embedded in a 
DMPC bilayer surrounded by water.


We have generated a PDB file in which our three domains are labeled as 
A, B and C (in the column corresponding to the chain identifiers). Our 
system is labeled "mod", Chain A has 342 atoms, Chain B has 289 atoms 
and Chain C has 715 atoms. Somehow these identifiers are lost when we 
use pdb2gmx.




Chain identifiers are not necessary for Gromacs programs to function.  
If you wish to keep them, use .pdb format as the output of pdb2gmx 
rather than .gro, which does not contain this information.


We have gone as far as Step Three of Justin's tutorial. It is here 
where we have encountered a problem.


The steps we have performed previously without trouble are:

$ cat mod_newbox.gro dmpc_whole.gro > system.gro
$ genrestr -f mod_newbox.gro -o strong_posre.itp -fc 100000 100000 100000
$ perl inflategro.pl system.gro 4 DMPC 14 system_inflated.gro 5 area.dat


When we wish to perform a minimization of the system via

$ grompp -f minim_inflo.mdp -c system_inflated.gro -p topol_inflo.top -o em_system_inflated.tpr


the following error appears:

---
Program grompp_d, VERSION 4.5.3
Source code file: toppush.c, line: 1526

Fatal error:
[ file strong_posre.itp, line 720 ]:
Atom index (716) in position_restraints out of bounds (1-715).
This probably means that you have inserted topology section 
"position_restraints"

in a part belonging to a different molecule than you intended to.
In that case move the "position_restraints" section to the right 
molecule.

---

The topol_inflo.top file contains the following instructions:

--
; Include forcefield parameters
#include "gromos53a6_lipid.ff/forcefield.itp"

; Include chain topologies
#include "topol_Protein_chain_A.itp"
#ifdef POSRES
#include "posre_Protein_chain_A.itp"
#endif

#include "topol_Protein_chain_B.itp"
#ifdef POSRES
#include "posre_Protein_chain_B.itp"
#endif

#include "topol_Protein_chain_C.itp"
#ifdef POSRES
#include "posre_Protein_chain_C.itp"
#endif

; Strong position restraints for InflateGRO
#ifdef STRONG_POSRES
#include "strong_posre.itp"
#endif

; Include DMPC chain topology
#include "dmpc.itp"

; Include water topology
#include "gromos53a6_lipid.ff/spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;   i   funct   fcx    fcy    fcz
    1       1   1000   1000   1000
#endif

; Include topology for ions
#include "gromos53a6_lipid.ff/ions.itp"

[ system ]
; Name
mod.pdb

[ molecules ]
; Compound        #mols
Protein_chain_A 1
Protein_chain_B 1
Protein_chain_C 1
DMPC  128
--

QUESTIONS

1) Do you see any problem in the topol_inflo.top setup?



No.



Well, actually yes, but hopefully that was clear from the notes below.  I was 
thinking of the problem in terms of the arrangement of molecule names, etc.


Sorry for any confusion.

-Justin


2) Do we need to create a strong_posre.itp for each chain?



Yes.

http://www.gromacs.org/Documentation/How-tos/Position_Restraints

http://www.gromacs.org/Documentation/Errors#Atom_index_n_in_position_restraints_out_of_bounds 
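For example (hypothetical file names), one restraint file per chain:

$ genrestr -f chain_A.gro -o strong_posre_A.itp -fc 100000 100000 100000

and likewise for chains B and C. Each #include must then be placed directly 
after the #include of the matching chain topology, since a 
[ position_restraints ] section applies to the [ moleculetype ] that precedes it.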



-Justin


Looking forward to hearing your comments...

Cheers,
Ramon Garduno






--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Problems with a three domain protein in a membrane

2011-03-24 Thread Justin A. Lemkul



Dr. Ramón Garduño-Juárez wrote:

Dear all,
Dear Justin,

We have been working on setting up an MD experiment in which our system 
is made of three separate domains of a protein embedded in a DMPC 
bilayer surrounded by water.


We have generated a PDB file in which our three domains are labeled as 
A, B and C (in the column corresponding to the chain identifiers). Our 
system is labeled "mod", Chain A has 342 atoms, Chain B has 289 atoms 
and Chain C has 715 atoms. Somehow these identifiers are lost when we 
use pdb2gmx.




Chain identifiers are not necessary for Gromacs programs to function.  If you 
wish to keep them, use .pdb format as the output of pdb2gmx rather than .gro, 
which does not contain this information.


We have gone as far as Step Three of Justin's tutorial. It is here where 
we have encountered a problem.


The steps we have performed previously without trouble are:

$ cat mod_newbox.gro dmpc_whole.gro > system.gro
$ genrestr -f mod_newbox.gro -o strong_posre.itp -fc 100000 100000 100000
$ perl inflategro.pl system.gro 4 DMPC 14 system_inflated.gro 5 area.dat


When we wish to perform a minimization of the system via

$ grompp -f minim_inflo.mdp -c system_inflated.gro -p topol_inflo.top -o em_system_inflated.tpr


the following error appears:

---
Program grompp_d, VERSION 4.5.3
Source code file: toppush.c, line: 1526

Fatal error:
[ file strong_posre.itp, line 720 ]:
Atom index (716) in position_restraints out of bounds (1-715).
This probably means that you have inserted topology section 
"position_restraints"

in a part belonging to a different molecule than you intended to.
In that case move the "position_restraints" section to the right molecule.
---

The topol_inflo.top file contains the following instructions:

--
; Include forcefield parameters
#include "gromos53a6_lipid.ff/forcefield.itp"

; Include chain topologies
#include "topol_Protein_chain_A.itp"
#ifdef POSRES
#include "posre_Protein_chain_A.itp"
#endif

#include "topol_Protein_chain_B.itp"
#ifdef POSRES
#include "posre_Protein_chain_B.itp"
#endif

#include "topol_Protein_chain_C.itp"
#ifdef POSRES
#include "posre_Protein_chain_C.itp"
#endif

; Strong position restraints for InflateGRO
#ifdef STRONG_POSRES
#include "strong_posre.itp"
#endif

; Include DMPC chain topology
#include "dmpc.itp"

; Include water topology
#include "gromos53a6_lipid.ff/spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;   i   funct   fcx    fcy    fcz
    1       1   1000   1000   1000
#endif

; Include topology for ions
#include "gromos53a6_lipid.ff/ions.itp"

[ system ]
; Name
mod.pdb

[ molecules ]
; Compound        #mols
Protein_chain_A 1
Protein_chain_B 1
Protein_chain_C 1
DMPC  128
--

QUESTIONS

1) Do you see any problem in the topol_inflo.top setup?



No.


2) Do we need to create a strong_posre.itp for each chain?



Yes.

http://www.gromacs.org/Documentation/How-tos/Position_Restraints

http://www.gromacs.org/Documentation/Errors#Atom_index_n_in_position_restraints_out_of_bounds

-Justin


Looking forward to hearing your comments...

Cheers,
Ramon Garduno




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Problems with a three domain protein in a membrane

2011-03-24 Thread Dr. Ramón Garduño-Juárez

Dear all,
Dear Justin,

We have been working on setting up an MD experiment in which our system 
is made of three separate domains of a protein embedded in a DMPC 
bilayer surrounded by water.


We have generated a PDB file in which our three domains are labeled as 
A, B and C (in the column corresponding to the chain identifiers). Our 
system is labeled "mod", Chain A has 342 atoms, Chain B has 289 atoms 
and Chain C has 715 atoms. Somehow these identifiers are lost when we 
use pdb2gmx.


We have gone as far as Step Three of Justin's tutorial. It is here where 
we have encountered a problem.


The steps we have performed previously without trouble are:

$ cat mod_newbox.gro dmpc_whole.gro > system.gro
$ genrestr -f mod_newbox.gro -o strong_posre.itp -fc 100000 100000 100000
$ perl inflategro.pl system.gro 4 DMPC 14 system_inflated.gro 5 area.dat


When we wish to perform a minimization of the system via

$ grompp -f minim_inflo.mdp -c system_inflated.gro -p topol_inflo.top -o em_system_inflated.tpr


the following error appears:

---
Program grompp_d, VERSION 4.5.3
Source code file: toppush.c, line: 1526

Fatal error:
[ file strong_posre.itp, line 720 ]:
Atom index (716) in position_restraints out of bounds (1-715).
This probably means that you have inserted topology section 
"position_restraints"

in a part belonging to a different molecule than you intended to.
In that case move the "position_restraints" section to the right molecule.
---

The topol_inflo.top file contains the following instructions:

--
; Include forcefield parameters
#include "gromos53a6_lipid.ff/forcefield.itp"

; Include chain topologies
#include "topol_Protein_chain_A.itp"
#ifdef POSRES
#include "posre_Protein_chain_A.itp"
#endif

#include "topol_Protein_chain_B.itp"
#ifdef POSRES
#include "posre_Protein_chain_B.itp"
#endif

#include "topol_Protein_chain_C.itp"
#ifdef POSRES
#include "posre_Protein_chain_C.itp"
#endif

; Strong position restraints for InflateGRO
#ifdef STRONG_POSRES
#include "strong_posre.itp"
#endif

; Include DMPC chain topology
#include "dmpc.itp"

; Include water topology
#include "gromos53a6_lipid.ff/spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;   i   funct   fcx    fcy    fcz
    1       1   1000   1000   1000
#endif

; Include topology for ions
#include "gromos53a6_lipid.ff/ions.itp"

[ system ]
; Name
mod.pdb

[ molecules ]
; Compound        #mols
Protein_chain_A 1
Protein_chain_B 1
Protein_chain_C 1
DMPC  128
--

QUESTIONS

1) Do you see any problem in the topol_inflo.top setup?

2) Do we need to create a strong_posre.itp for each chain?

Looking forward to hearing your comments...

Cheers,
Ramon Garduno



Re: [gmx-users] ethanol and dmso files

2011-03-24 Thread Justin A. Lemkul



ahmet yıldırım wrote:

Dear users,

I need ethanol.itp, ethanol216.gro, dmso.itp and dmso216.gro files for 
the gromos43a1 force field. Can anyone help me?




Both ethanol and DMSO are defined in the .rtp file for the force field.  You can 
easily create an .itp file based on this information, but the parameters for 
DMSO are significantly better in 53A6.  You may want to consider whether or not 
your force field choice is the best one.


Creating solvent boxes is easy with genconf -nbox.  Generate a grid of molecules 
from a single coordinate file and equilibrate.
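For example (hypothetical file names; a 6x6x6 grid gives the 216 molecules of 
the conventional *216.gro boxes):

genconf -f ethanol_single.gro -nbox 6 6 6 -o ethanol216.gro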


-Justin


Thanks in advance



--
Ahmet YILDIRIM



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] ethanol and dmso files

2011-03-24 Thread ahmet yıldırım
Dear users,

I need ethanol.itp, ethanol216.gro, dmso.itp and dmso216.gro files for
the gromos43a1 force field. Can anyone help me?

Thanks in advance



-- 
Ahmet YILDIRIM

Re: [gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Mark Abraham

On 24/03/2011 11:51 PM, Adam Herbst wrote:

Dear GROMACS users,
I successfully installed GROMACS 4.5.1 several months ago on a Mac Pro 
with 12 CPUs, and the "mdrun" command (not "mpirun mdrun_mpi") allows 
parallel simulations--it automatically uses multiple processors, while 
the number of processors can be manually specified as N with the flag 
"mdrun -nt N".  I understand that this is a feature of GROMACS 4 and 
later.


Yes, threading, enabled by default, and mutually incompatible with MPI.

Now I am making minor changes to the mdrun source code, and I want to 
recompile such that the parallel version of mdrun is updated with my 
changes.  But when I run:


  make mdrun (or just make)
  make install-mdrun (or just make install)

from the top-level source directory, the only executables that are 
updated are the ones with the _mpi suffix, such as mdrun_mpi.  The 
version of mdrun in src/kernel/ is updated, but this one has no -nt 
flag and cannot seem to run on multiple processors.  And when I run


Subsequently you have configured with --enable-mpi, so threading is 
disabled. Now everything is probably a mess.




  mpirun -np N mdrun_mpi [options],

the same simulation is started separately on each processor, leading 
to a crash.  If I use


  mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],

I get an error message that this is not supported on my computer ("An 
attempt to set processor affinity has failed").


I can't configure the input .tpr file for parallel because grompp 
doesn't have the -np flag in GROMACS 4.


How can I update the parallel-capable "mdrun" executable with my changes?


Run "make distclean" and then re-configure.

Mark


Re: [gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Carsten Kutzner
Hi,

You could try a make clean, and then configure again with --enable-threads.
It seems for some reason you only built the serial mdrun version.

Carsten


On Mar 24, 2011, at 1:51 PM, Adam Herbst wrote:

> Dear GROMACS users,
> I successfully installed GROMACS 4.5.1 several months ago on a Mac Pro with 
> 12 CPUs, and the "mdrun" command (not "mpirun mdrun_mpi") allows parallel 
> simulations--it automatically uses multiple processors, while the number of 
> processors can be manually specified as N with the flag "mdrun -nt N".  I 
> understand that this is a feature of GROMACS 4 and later.  Now I am making 
> minor changes to the mdrun source code, and I want to recompile such that the 
> parallel version of mdrun is updated with my changes.  But when I run:
> 
>   make mdrun (or just make)
>   make install-mdrun (or just make install)
> 
> from the top-level source directory, the only executables that are updated 
> are the ones with the _mpi suffix, such as mdrun_mpi.  The version of mdrun 
> in src/kernel/ is updated, but this one has no -nt flag and cannot seem to 
> run on multiple processors.  And when I run
> 
>   mpirun -np N mdrun_mpi [options],
> 
> the same simulation is started separately on each processor, leading to a 
> crash.  If I use
> 
>   mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],
> 
> I get an error message that this is not supported on my computer ("An attempt 
> to set processor affinity has failed").
> 
> I can't configure the input .tpr file for parallel because grompp doesn't 
> have the -np flag in GROMACS 4.
> 
> How can I update the parallel-capable "mdrun" executable with my changes?
> Thanks in advance,
> 
> Adam


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






Re: [gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Justin A. Lemkul



Adam Herbst wrote:

Dear GROMACS users,
I successfully installed GROMACS 4.5.1 several months ago on a Mac Pro 
with 12 CPUs, and the "mdrun" command (not "mpirun mdrun_mpi") allows 
parallel simulations--it automatically uses multiple processors, while 
the number of processors can be manually specified as N with the flag 
"mdrun -nt N".  I understand that this is a feature of GROMACS 4 and 
later.  Now I am making minor changes to the mdrun source code, and I 
want to recompile such that the parallel version of mdrun is updated 
with my changes.  But when I run:


  make mdrun (or just make)
  make install-mdrun (or just make install)

from the top-level source directory, the only executables that are 
updated are the ones with the _mpi suffix, such as mdrun_mpi.  The 
version of mdrun in src/kernel/ is updated, but this one has no -nt flag 
and cannot seem to run on multiple processors.  And when I run




Please post your actual configuration command.  Did you properly --enable-mpi 
and --disable-threads?  What MPI library are you using?  Do other processes 
executed via mpirun work as expected?



  mpirun -np N mdrun_mpi [options],

the same simulation is started separately on each processor, leading to 
a crash.  If I use


  mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],

I get an error message that this is not supported on my computer ("An 
attempt to set processor affinity has failed").


I can't configure the input .tpr file for parallel because grompp 
doesn't have the -np flag in GROMACS 4.




There is no need to do this anymore.  mdrun deals with all aspects of 
parallelization.  A .tpr file can then be run on as many processors as you like.
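For example (file names hypothetical), a single .tpr serves any processor count:

grompp -f md.mdp -c conf.gro -p topol.top -o run.tpr
mdrun -nt 8 -s run.tpr                 (threaded build)
mpirun -np 8 mdrun_mpi -s run.tpr      (MPI build)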


-Justin


How can I update the parallel-capable "mdrun" executable with my changes?
Thanks in advance,

Adam



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Adam Herbst
Dear GROMACS users,
I successfully installed GROMACS 4.5.1 several months ago on a Mac Pro with
12 CPUs, and the "mdrun" command (not "mpirun mdrun_mpi") allows parallel
simulations--it automatically uses multiple processors, while the number of
processors can be manually specified as N with the flag "mdrun -nt N".  I
understand that this is a feature of GROMACS 4 and later.  Now I am making
minor changes to the mdrun source code, and I want to recompile such that
the parallel version of mdrun is updated with my changes.  But when I run:

  make mdrun (or just make)
  make install-mdrun (or just make install)

from the top-level source directory, the only executables that are updated
are the ones with the _mpi suffix, such as mdrun_mpi.  The version of mdrun
in src/kernel/ is updated, but this one has no -nt flag and cannot seem to
run on multiple processors.  And when I run

  mpirun -np N mdrun_mpi [options],

the same simulation is started separately on each processor, leading to a
crash.  If I use

  mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],

I get an error message that this is not supported on my computer ("An
attempt to set processor affinity has failed").

I can't configure the input .tpr file for parallel because grompp doesn't
have the -np flag in GROMACS 4.

How can I update the parallel-capable "mdrun" executable with my changes?
Thanks in advance,

Adam

Re: [gmx-users] No cut-off

2011-03-24 Thread Swarnendu Tripathi
Thank you very much. Really very helpful comments.

-Swarnendu

On Thu, Mar 24, 2011 at 1:52 AM, Mark Abraham wrote:

>  On 24/03/2011 4:38 PM, Swarnendu Tripathi wrote:
>
> Hi all,
>
> Thank you for the reply. The answer I got for the second question about
> table-extension, I understand and agree with that.
>
> Regarding my first question I asked, I did not get any error with grompp
> even when I tried rlist=rvdw=2.0 and rcoulomb=0. For this I used the no
> cut-off conditions which I have mentioned below in my previous e-mail. I am
> using Gromacs-4.0.7.  Any further suggestions or comments will be very
> helpful.
>
>
> VDW interactions normally become insignificant much faster than Coulomb
> interactions, so treating all Coulomb interactions and only some VDW
> interactions is logical. However, performing the search for neighbours takes
> time, and given that you are computing the distances already for their
> Coulomb interaction, it is probably cheaper just to compute all the VDW
> interactions, i.e. rlist=rvdw=0. This might be different if you had a lot of
> atoms that had zero partial charge.
>
> Mark
>
>
>  On Thu, Mar 24, 2011 at 12:17 AM, Mark Abraham wrote:
>
>> On 24/03/2011 1:34 PM, Itamar Kass wrote:
>>
>>> Hi Swarnendu,
>>>
>>> grompp will return an error unless rlist=rcoulomb=rvdw=0, so you should
>>> stick to it.
>>>
>>> Cheers,
>>> Itamar
>>>
>>> On 24/03/11 1:13 PM, Swarnendu Tripathi wrote:
>>>
 Hello everybody,

 I want to use the no cut-off option in gromacs for the electrostatic
 interactions. In manual it says for this I need to define: pbc=no;
 nstlist=0; ns-type=simple and rlist=rcoulomb=rvdw=0 in the .mdp file.

 My questions are:

 1. If I choose rcoulomb=0 and rlist=rvdw=2.0 is that a problem? Do you
 always recommend to use rlist=rcoulomb=rvdw=0 for no cut-off option?

 2. How  should I choose the table-extension parameter now? Before, with
 pbc=xyz (with cut-off) I used table-extension=rvdw+1/2*length of the 
 longest
 diagonal of the box (approximately). I am also using a tabulated potential.

>>>
>>  Since there is effectively no box once pbc=no, your tables need only be
>> as long as the longest possible interaction, plus some margin for when the
>> structure rearranges.
>>
>> Mark
>>
>
>
>

Re: [gmx-users] g_tcaf reference

2011-03-24 Thread Dommert Florian
On Thu, 2011-03-24 at 12:56 +0100, Tsjerk Wassenaar wrote:
> Hi Florian,
> 
> It should be Phys. Rev. E instead of JCP.
> 
> Cheers,
> 
> Tsjerk
> 
> 

Thank you very much, I got it.

Cheers,

Flo

-- 
Florian Dommert
Dipl. - Phys.

Institute for Computational Physics
University Stuttgart

Pfaffenwaldring 27
70569 Stuttgart

EMail: domm...@icp.uni-stuttgart.de
Homepage: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert

Tel.: +49 - (0)711 - 68563613
Fax.: +49 - (0)711 - 68563658



Re: [gmx-users] g_tcaf reference

2011-03-24 Thread Tsjerk Wassenaar
Hi Florian,

It should be Phys. Rev. E instead of JCP.

Cheers,

Tsjerk

On Thu, Mar 24, 2011 at 12:49 PM, Dommert Florian wrote:
> Hello,
>
> g_tcaf gives a reference for the method to calculate \eta. However, I
> cannot find the Palmer JCP 49 (1994) reference, neither in a database nor
> on the JCP page. As I want to use the method, I first have to get an idea
> about it, and so I need this article to continue. Does anybody have an
> idea where it is and how to get it?
>
> Cheers,
>
> Flo
>
> --
> Florian Dommert
> Dipl. - Phys.
>
> Institute for Computational Physics
> University Stuttgart
>
> Pfaffenwaldring 27
> 70569 Stuttgart
>
> EMail: domm...@icp.uni-stuttgart.de
> Homepage: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert
>
> Tel.: +49 - (0)711 - 68563613
> Fax.: +49 - (0)711 - 68563658
>



-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Molecular Dynamics Group
* Groningen Institute for Biomolecular Research and Biotechnology
* Zernike Institute for Advanced Materials
University of Groningen
The Netherlands


[gmx-users] g_tcaf reference

2011-03-24 Thread Dommert Florian
Hello,

g_tcaf gives a reference for the method to calculate \eta. However, I
cannot find the Palmer JCP 49 (1994) reference, neither in a database nor
on the JCP page. As I want to use the method, I first have to get an idea
about it, and so I need this article to continue. Does anybody have an
idea where it is and how to get it?

Cheers,

Flo

-- 
Florian Dommert
Dipl. - Phys.

Institute for Computational Physics
University Stuttgart

Pfaffenwaldring 27
70569 Stuttgart

EMail: domm...@icp.uni-stuttgart.de
Homepage: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert

Tel.: +49 - (0)711 - 68563613
Fax.: +49 - (0)711 - 68563658



Re: [gmx-users] question about g_tcaf

2011-03-24 Thread Tsjerk Wassenaar
Hi Muhammad,

It's 'just' a fit/proportionality parameter relating the viscosity for a
k-vector to the length of the vector and the viscosity proper (eta0). From
the equation you can work out that the unit is nm^2.
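Concretely: since eta(k) = eta0 (1 - a k^2), a linear fit of eta(k) against
k^2 has intercept eta0 and slope -a*eta0, so extrapolating the fit to k = 0
gives the viscosity directly.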

Hope it helps,

Tsjerk

On Thu, Mar 24, 2011 at 10:27 AM, Alif M Latif wrote:

> Dear gromacs users and developers,
>
> I have 1 question about the calculation of viscosity using g_tcaf. I got
> the visc_k.xvg file. According to Berk Hess's report, the k values should be
> extrapolated to k=0 to obtain the viscosity. The question is (and I'm sorry
> if this is a silly question), what is "a" in : eta(k) = eta 0 (1-ak^2)
> ?..can someone help? I couldn't find (or missed) it in the paper.
>
> Thank You in advance,
>
> MUHAMMAD ALIF MOHAMMAD LATIF
> Laboratory of Theoretical and Computational Chemistry
> Department of Chemistry
> Faculty of Science
> Universiti Putra Malaysia
> 43400 UPM Serdang, Selangor
> MALAYSIA
>
>
>



-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Molecular Dynamics Group
* Groningen Institute for Biomolecular Research and Biotechnology
* Zernike Institute for Advanced Materials
University of Groningen
The Netherlands

[gmx-users] Problem with g_rdf plot

2011-03-24 Thread Carla Jamous
Hi everyone,

I ran a water simulation and tried to calculate the O-O radial distribution
function, but I get strange peaks from r = 1.5 nm onwards.
I have 8 atoms in a box of 9.5 nm.

I am posting the graph as an attached file.

Please does anyone have an idea of what might be going on?

Thanks,
Carla


[Attachments: rdf_Ow-Ow-15to17ns.dat, rdf_Ow-Ow-15to17ns.xvg]

[gmx-users] question about g_tcaf

2011-03-24 Thread Alif M Latif
Dear gromacs users and developers,

I have 1 question about the calculation of viscosity using g_tcaf. I got the
visc_k.xvg file. According to Berk Hess's report, the k values should be
extrapolated to k=0 to obtain the viscosity. The question is (and I'm sorry
if this is a silly question), what is "a" in: eta(k) = eta0 (1 - a k^2)?
Can someone help? I couldn't find (or missed) it in the paper.

Thank You in advance,

MUHAMMAD ALIF MOHAMMAD LATIF
Laboratory of Theoretical and Computational Chemistry
Department of Chemistry
Faculty of Science
Universiti Putra Malaysia
43400 UPM Serdang, Selangor
MALAYSIA

Re: [gmx-users] compressing a box of water droplets into a homogeneous solution of liquid water

2011-03-24 Thread Patrick Fuchs

Hi Chris,
I fully agree with your analysis about the effect of non-bonded 
interactions that accelerate the collapse when the layer of vacuum 
around the system is thin. I also observed that it is way faster to 
reach equilibrium density in this case.

Ciao,

Patrick

On 23/03/2011 21:06, chris.ne...@utoronto.ca wrote:

Thanks Patrick and Andre!

We repeated this with a few box sizes just to get a quick handle on it.
The equilibrium volume is about 64 nm^3. If we start with a volume of
1000 nm^3 then the overall box does not collapse at all within 200 ps of
NPT Langevin dynamics at 1 atm. If we start with a volume of 200 nm^3,
then it does collapse to approximately 64 nm^3 within 200 ps of such a
simulation.

My best guess is that the rapid collapse is driven by nonbonded
interactions and thus the rapid collapse does not occur when the system
is so large with such low density that water forms isolated vapour
droplets that do not interact with each other by LJ interactions. Sure,
it is expected to collapse eventually from the 1 atm pressure coupling,
and we have also observed that high pressure works, but at 1 atm it
might take a very long time to reach equilibrium.

I agree with Andre that none of this matters to regular simulations as
there is no good reason to go through this type of state when one wants
to simulate dense liquids. I just found it curious that Berendsen
pressure coupling at 1 atm was not sufficient to quickly equilibrate the
volume in a system where the vacuum regions are large in comparison to
the LJ cutoffs.

Chris.

-- original message --

Hi Chris,
I experienced the same kind of thing. In the process of building a box
of liquid (organic solvent), at some point I wanted to get rid of a
layer of vacuum around my system. So for shrinking the box I used
similar settings as you and found also that the collapse was going
slower than I'd have expected.
One solution to accelerate this (if your goal is to shrink the box) is
to increase the pressure (to say 100 atm). But it's important to stop
the simulation in time (i.e. once the layer of vacuum has disapeared)
otherwise the system shrinks too much and density is off.
So to come back to your system which has a very big layer of vacuum
around, and according to my experience, the volume is probably
decreasing but too slowly to see anything signigicant (compared to the
initial value) in 200 ps .
Ciao,

Patrick

On 21/03/2011 16:53, chris.ne...@utoronto.ca wrote:

Dear users:

I recently came across a system that was composed of tip4p water vapor
droplets separated by vacuum. This system is what you might get if you
did an NVT simulation of water with a box that was 10 times too large for
the number of water molecules.

I was surprised to see that this system did not collapse to any
significant extent during 200 ps of NPT equilibration at 1 atm using the
Berendsen barostat with tau_p=1.0, the sd integrator, and a Coulombic
cut-off. (We also tried a number of other integrator/pressure coupling
combinations with the same results).

I had assumed that such collapse would occur quite rapidly. This does
not seem to be the case (no noticeable contraction within 200 ps).

Has anybody else done anything like this? Can anybody comment on their
expectations/experience of collapse from the gas state to the liquid
state under standard NPT conditions?

Thank you,
Chris.



--
___
 new E-mail address: patrick.fu...@univ-paris-diderot.fr 
Patrick FUCHS
Dynamique des Structures et Interactions des Macromolécules Biologiques
INTS, INSERM UMR-S665, Université Paris Diderot,
6 rue Alexandre Cabanel, 75015 Paris
Tel : +33 (0)1-44-49-30-57 - Fax : +33 (0)1-47-34-74-31
Web Site: http://www.dsimb.inserm.fr/~fuchs


Re: [gmx-users] No cut-off

2011-03-24 Thread Erik Marklund

On 2011-03-24 06:52, Mark Abraham wrote:

On 24/03/2011 4:38 PM, Swarnendu Tripathi wrote:

Hi all,

Thank you for the reply. The answer I got for the second question 
about table-extension, I understand and agree with that.


Regarding my first question I asked, I did not get any error with 
grompp even when I tried rlist=rvdw=2.0 and rcoulomb=0. For this I 
used the no cut-off conditions which I have mentioned below in my 
previous e-mail. I am using Gromacs-4.0.7.  Any further suggestions 
or comments will be very helpful.


VDW interactions normally become insignificant much faster than 
Coulomb interactions, so treating all Coulomb interactions and only 
some VDW interactions is logical. However, performing the search for 
neighbours takes time, and given that you are computing the distances 
already for their Coulomb interaction, it is probably cheaper just to 
compute all the VDW interactions, i.e. rlist=rvdw=0. This might be 
different if you had a lot of atoms that had zero partial charge.


Mark
In addition, depending on your system, the collective long-range 
attraction from VdW interactions may not be insignificant. And since 
you're simulating in vacuo, I guess you have an inhomogeneous system, 
and dispersion correction may not be adequate. That's further reason to 
have infinite VdW cut-offs too. And, as Mark mentioned, the distances 
are calculated anyway, so computing VdW will not add that much to your 
simulation time.
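As a minimal .mdp sketch of the no-cut-off setup discussed in this thread 
(GROMACS 4.x option names):

  pbc       = no
  nstlist   = 0
  ns-type   = simple
  rlist     = 0
  rcoulomb  = 0
  rvdw      = 0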


Erik


On Thu, Mar 24, 2011 at 12:17 AM, Mark Abraham wrote:


On 24/03/2011 1:34 PM, Itamar Kass wrote:

Hi Swarnendu,

grompp will return an error unless rlist=rcoulomb=rvdw=0, so you
should stick to it.

Cheers,
Itamar

On 24/03/11 1:13 PM, Swarnendu Tripathi wrote:

Hello everybody,

I want to use the no cut-off option in gromacs for the
electrostatic interactions. In manual it says for this I
need to define: pbc=no; nstlist=0; ns-type=simple and
rlist=rcoulomb=rvdw=0 in the .mdp file.

My questions are:

1. If I choose rcoulomb=0 and rlist=rvdw=2.0 is that a
problem? Do you always recommend to use
rlist=rcoulomb=rvdw=0 for no cut-off option?

2. How  should I choose the table-extension parameter
now? Before, with pbc=xyz (with cut-off) I used
table-extension=rvdw+1/2*length of the longest diagonal
of the box (approximately). I am also using a tabulated
potential.


Since there is effectively no box once pbc=no, your tables need
only be as long as the longest possible interaction, plus some
margin for when the structure rearranges.

Mark








--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/
