Re: [gmx-users] Continuous mdrun vs step-by-step mdrun

2012-11-12 Thread francesco oteri
Hi,
happy diwali to you, too.
Can you please post a link to where what you said is stated?
It seems quite strange to me!



2012/11/12 Venkat Reddy venkat...@gmail.com

 Dear gromacs users,

 I have a very basic doubt regarding mdrun. Is there any difference between
 doing final MD for 100 ns at  a stretch and doing the same with a 10 ns
 step size (*i.e., 10ns20ns30ns100ns*)  on a cluster of 256
 processors. I have read some where that continuous MD of longer simulations
 will cause spurious drifts in velocity and energy, errors in velocity
 correlationetc. Please advise me in this regard.

 Thank you and Happy DIWALI

 --
 With Best Wishes
 Venkat Reddy Chirasani
 PhD student
 Laboratory of Computational Biophysics
 Department of Biotechnology
 IIT Madras
 Chennai
 INDIA-600036
 --
 gmx-users mailing list    gmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists




-- 
Kind regards, Dr. Francesco Oteri


Re: [gmx-users] do_dssp Segmentation fault

2012-11-12 Thread João Henriques
Hello,

do_dssp (4.5.5) is broken. There are two possible answers you're gonna get
here:

1) Use the old dssp, which you are already using.
2) You're an idiot, which you are not.

What I did to solve the problem was to download gmx from git and substitute
the /src/tools/do_dssp.c of gmx 4.5.5 with the one from the git version.
Re-compile it and voila! This do_dssp version accepts both old and new dssp;
you have to specify which version with the -ver flag, if I remember
correctly.

This worked perfectly for me. I hope it helps you as well.
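A condensed sketch of that workaround (the repository URL, paths, and the -ver value are assumptions based on the description above; verify against your own tree and `do_dssp -h`):

```shell
# Workaround sketch: swap in the newer do_dssp.c and rebuild, then tell
# do_dssp where DSSP lives and which syntax it speaks. Paths are placeholders.
#   git clone git://git.gromacs.org/gromacs.git
#   cp gromacs/src/tools/do_dssp.c gromacs-4.5.5/src/tools/do_dssp.c
#   (rebuild gromacs-4.5.5 as usual)
export DSSP=/usr/local/bin/dssp   # do_dssp locates DSSP via this variable
echo "do_dssp -f md.xtc -s md.tpr -o dssp.xpm -ver 1"   # -ver selects old/new syntax
```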

All the best,
João Henriques

On Mon, Nov 12, 2012 at 8:38 AM, mshappy1986 mshappy1...@126.com wrote:

 Hi all,
I am meeting the following error in Gromacs 4.5.5 with do_dssp
Here is the command
do_dssp -f md.xtc -s md.tpr -o dssp.xpm
   give me the following error
segmentation fault
   I have downloaded the executable DSSP form
 http://swift.cmbi.ru.nl/gv/dssp/ and set the environment variable, but
 do_dssp did not work.
   How can I fix it?
   Thanks a lot





-- 
João Henriques


Re: [gmx-users] Continuous mdrun vs step-by-step mdrun

2012-11-12 Thread Venkat Reddy
Hi Francesco,
Thanks for your reply and wishes. I can't remember exactly where I read
this, but what is your opinion on discrete vs. continuous runs?
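For what it's worth: if one does split a run, the standard route is checkpoint restarts, which continue the same trajectory rather than starting ten independent ones. A dry-run sketch (file names and flags in the style of 4.x mdrun, quoted from memory; verify with `mdrun -h`):

```shell
# Dry-run sketch: a 100 ns run executed as 10 checkpointed chunks.
# With -cpi (and -append), each invocation continues from the previous
# checkpoint, so the pieces form one continuous trajectory.
for chunk in 1 2 3 4 5 6 7 8 9 10; do
  echo "mdrun -s md_100ns.tpr -cpi md.cpt -deffnm md -append -maxh 24"
done
```

The echoed command would be run once per chunk; the .tpr already encodes the full 100 ns.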


On Mon, Nov 12, 2012 at 2:10 PM, francesco oteri
francesco.ot...@gmail.comwrote:

 Hi,
 happy diwali to you, too.
 Can you please post a link where what you said is stated?
 it seems quite strange to me!



 2012/11/12 Venkat Reddy venkat...@gmail.com

  Dear gromacs users,
 
  I have a very basic doubt regarding mdrun. Is there any difference between
  doing the final MD for 100 ns at a stretch and doing the same in 10 ns
  steps (*i.e., 10 ns, 20 ns, 30 ns, ..., 100 ns*) on a cluster of 256
  processors? I have read somewhere that continuous MD of longer simulations
  will cause spurious drifts in velocity and energy, errors in velocity
  correlation, etc. Please advise me in this regard.
 
  Thank you and Happy DIWALI
 
  --
  With Best Wishes
  Venkat Reddy Chirasani
  PhD student
  Laboratory of Computational Biophysics
  Department of Biotechnology
  IIT Madras
  Chennai
  INDIA-600036
 



 --
 Kind regards, Dr. Francesco Oteri




-- 
With Best Wishes
Venkat Reddy Chirasani
PhD student
Laboratory of Computational Biophysics
Department of Biotechnology
IIT Madras
Chennai
INDIA-600036


Re: [gmx-users] About periodic image of system.......

2012-11-12 Thread rama david
Thank you for your reply.


On Sun, Nov 11, 2012 at 8:30 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/11/12 4:51 AM, rama david wrote:

 Hi justin ,

 Thank you a lot for your explaination.

 My opinion on the working of g_mindist -pi is that when it shows the
 distance between two atoms of the protein to be less than the vdW cut-off
 (1.4 nm), then the protein sees its periodic image, and that is a
 violation of PBC.
 Is this right? (That is, the shortest periodic distance should be
 larger than the vdW cut-off of 1.4 nm.)


 This is correct, when considering a single molecule, i.e. it can't see
 itself. If you have two proteins, and you choose the blanket Protein
 group, you haven't determined anything, because now the calculation
 involves multiple molecules.


  If it is right, g_mindist says that The shortest periodic distance is
 0.154938 (nm) at time 16162 (ps), between atoms 223 and 3270.
 This is less than 1.4 nm, so why is it not a problem?


 Because they're in separate molecules.  Did you ever do as I suggested and
 visualize this frame?  It will be immediately apparent that there is no
 problem.  Please refer to textbooks or even simple Google searching for
 explanations of the minimum image convention.  As I said, it is described
 in almost every reference text.
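The convention itself is short enough to state in code; a minimal sketch of the textbook rule for a rectangular box (not the GROMACS implementation):

```python
# Minimum image convention in a rectangular box: the distance that matters
# for the "molecule sees its own image" check is the shortest distance over
# all periodic copies of the second atom.
def min_image_dist(a, b, box):
    """Shortest periodic distance between points a and b (nm)."""
    d2 = 0.0
    for ai, bi, length in zip(a, b, box):
        dx = ai - bi
        dx -= length * round(dx / length)   # wrap into [-length/2, length/2]
        d2 += dx * dx
    return d2 ** 0.5

# Two atoms 3.8 nm apart in a 4 nm box are only 0.2 nm apart via the image:
print(min_image_dist((0.1, 0.0, 0.0), (3.9, 0.0, 0.0), (4.0, 4.0, 4.0)))
```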


 -Justin

 --
 ========================================

 Justin A. Lemkul, Ph.D.
 Research Scientist
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 ========================================



[gmx-users] About Presence of Water in Hydrophobic core of lipids in NPT equlibration

2012-11-12 Thread vidhya sankar
Dear Justin,
Thank you for your previous reply.

I am doing a protein-lipid simulation. After EM, when I visualize the
structure in VMD, the water molecules are present at the lipid (DPPC) head
groups. There are no water molecules in the hydrophobic part of the bilayer,
while the protein is in the center of the box.

Then I have done NVT equilibration. When I visualize the .gro file in VMD,
around 40 water molecules (out of 2699) are present in the hydrophobic part
of the lipid (that is, they have moved inside the box, nearer to the protein,
which is at the center). How can I avoid this?

May I simply delete these 40 water molecules? Or
may I freeze these molecules during NVT equilibration?
Can I leave it as such and then proceed further to NPT?

I hope for your valuable suggestion.

Thanks in advance



Re: [gmx-users] do_dssp Segmentation fault

2012-11-12 Thread Erik Marklund
Hi,

The explanation is that DSSP changed its syntax some time ago and do_dssp no
longer complied with it. More recent versions of do_dssp follow the new syntax
while still supporting the old one.

Erik

12 nov 2012 kl. 10.55 skrev João Henriques:

 Hello,
 
 do_dssp (4.5.5) is broken. There are two possible answers you're gonna get
 here:
 
 1) Use the old dssp, which you are already using.
 2) You're an idiot, which you are not.
 
 What I did to solve the problem was to download gmx from git and substitute
 the /src/tools/do_dssp.c of gmx 4.5.5 with the one from the git version.
 Re-compile it and voila! This do_dssp version accepts both old and new dssp;
 you have to specify which version with the -ver flag, if I remember
 correctly.
 
 This worked perfectly for me. I hope it helps you as well.
 
 All the best,
 João Henriques
 
 On Mon, Nov 12, 2012 at 8:38 AM, mshappy1986 mshappy1...@126.com wrote:
 
 Hi all,
 I am getting the following error in Gromacs 4.5.5 with do_dssp.
 Here is the command:
 do_dssp -f md.xtc -s md.tpr -o dssp.xpm
 It gives me the following error:
 segmentation fault
 I have downloaded the executable DSSP from
 http://swift.cmbi.ru.nl/gv/dssp/ and set the environment variable, but
 do_dssp did not work.
 How can I fix it?
 Thanks a lot
 
 
 
 
 -- 
 João Henriques

---
Erik Marklund, PhD
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 6688    fax: +46 18 511 755
er...@xray.bmc.uu.se
http://www2.icm.uu.se/molbio/elflab/index.html



[gmx-users] Re: Setting up a complex membrane simulation

2012-11-12 Thread jonas87
OK, what I've currently done:

I start with a pdb file of a single molecule Lipid_A and its topology and
put it in a box:

editconf -f Lipid_A.pdb -o Lipid_A-box.gro -bt triclinic -d 1.0

I want a total of 128 molecules of this lipid in my system so I use genbox
to add 127 more:

genbox -cp Lipid_A-box.gro -ci Lipid_A.pdb -p Lipid_A.top -o 128_Lipid_A.pdb
-nmol 127

This only adds 8 molecules, I'm going to guess because more don't fit in my
box. How do I deal with this? How can I know in advance how big I have to
make the box?

kind regards,



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/Setting-up-a-complex-membrane-simulation-tp5002607p5002885.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


[gmx-users] How to launch mdrun_mpi using g_tune_pme on a cluster

2012-11-12 Thread Venkat Reddy
Dear Gromacs users,

How can I launch mdrun_mpi using g_tune_pme on a cluster?
*g_tune_pme -launch* launches mdrun but not mdrun_mpi.

Thanks for your valuable time

-- 
With Best Wishes
Venkat Reddy Chirasani
PhD student
Laboratory of Computational Biophysics
Department of Biotechnology
IIT Madras
Chennai
INDIA-600036


Re: [gmx-users] How to launch mdrun_mpi using g_tune_pme on a cluster

2012-11-12 Thread Justin Lemkul



On 11/12/12 7:40 AM, Venkat Reddy wrote:

Dear Gromacs users,

How can I launch mdrun_mpi using g_tune_pme on a cluster?
*g_tune_pme -launch* launches mdrun but not mdrun_mpi.

Thanks for your valuable time



Please read g_tune_pme -h, which describes how you can set the names of mpirun 
and mdrun executables.
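A sketch of what that looks like in practice (the MPIRUN/MDRUN environment variables are what the 4.5-era help describes, quoted from memory; paths are placeholders for your cluster):

```shell
# g_tune_pme locates the MPI launcher and the parallel mdrun through the
# MPIRUN and MDRUN environment variables (per g_tune_pme -h in 4.5.x;
# verify on your installation). Paths below are placeholders.
export MPIRUN=/usr/bin/mpirun
export MDRUN=/opt/gromacs-4.5.5/bin/mdrun_mpi
# dry-run echo so the sketch is safe to paste on a machine without GROMACS:
echo "g_tune_pme -np 64 -s topol.tpr -launch"
```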


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] About Presence of Water in Hydrophobic core of lipids in NPT equlibration

2012-11-12 Thread Justin Lemkul



On 11/12/12 6:35 AM, vidhya sankar wrote:

Dear Justin,
Thank you for your previous reply.

I am doing a protein-lipid simulation. After EM, when I visualize the
structure in VMD, the water molecules are present at the lipid (DPPC) head
groups. There are no water molecules in the hydrophobic part of the bilayer,
while the protein is in the center of the box.

Then I have done NVT equilibration. When I visualize the .gro file in VMD,
around 40 water molecules (out of 2699) are present in the hydrophobic part
of the lipid (that is, they have moved inside the box, nearer to the protein,
which is at the center). How can I avoid this?



Some water will diffuse into the lipid headgroup and ester regions, which are 
largely devoid of water based on the method of increasing the van der Waals 
radius of carbon, but will normally be hydrated.  If the water molecules are 
diffusing all the way into the membrane, making contact with the lipid tails 
themselves, this would be very odd and I suspect a result of incorrect packing 
of the lipids.  That's a blind guess, and I'm not going to try to predict what's 
going on in your system, but it would be very odd for a properly equilibrated 
membrane to allow that many water molecules to diffuse deeply within it.



May I simply delete these 40 water molecules? Or


Maybe.


May I freeze these molecules during NVT equilibration?


The better approach would be a position restraint along the z-axis only, allowing
the lipids to perhaps re-orient and pack a bit better, followed by NVT
equilibration in the absence of any restraints on water.
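Expressed in topology terms, a z-only restraint is a [ position_restraints ] block with zero x/y force constants; an illustrative fragment (atom indices and force constant are made up for illustration):

```
; illustrative posre_z.itp -- restrain selected atoms along z only
[ position_restraints ]
; ai  funct  fcx  fcy  fcz   (kJ mol^-1 nm^-2)
   1      1    0    0  1000
   5      1    0    0  1000
```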



Can I leave it as such and then proceed further to NPT?



You could, but expect your equilibration to last significantly longer than 
normal for the hydrophobic effect to expel any water molecules that are in bad 
places.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Re: Setting up a complex membrane simulation

2012-11-12 Thread Justin Lemkul



On 11/12/12 7:03 AM, jonas87 wrote:

OK, what I've currently done:

I start with a pdb file of a single molecule Lipid_A and its topology and
put it in a box:

editconf -f Lipid_A.pdb -o Lipid_A-box.gro -bt triclinic -d 1.0

I want a total of 128 molecules of this lipid in my system so I use genbox
to add 127 more:

genbox -cp Lipid_A-box.gro -ci Lipid_A.pdb -p Lipid_A.top -o 128_Lipid_A.pdb
-nmol 127

This only adds 8 molecules, I'm going to guess because more don't fit in my
box. How do I deal with this? How can I know in advance how big I have to
make the box?



You would need to know the volume of an individual lipid in the configuration 
that you are supplying, plus a volume that will allow for any other species in 
the system (like water) to yield the desired composition of the system.  The 
above series of commands yields a very small box, as you have found.  Most 
128-lipid membranes are on the order of 6-8 nm cubes, depending on the specific 
properties of the lipids being studied.
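As a rough illustration of that sizing argument (the per-lipid volume and packing factor below are assumed ballpark numbers, not values from this thread):

```python
# Back-of-the-envelope box sizing for genbox insertion. Assumed numbers:
# ~1.25 nm^3 per DPPC-like lipid, plus a generous packing factor because
# genbox -ci inserts rigid copies and fails long before the box is
# geometrically full.
n_lipids = 128
v_lipid = 1.25           # nm^3 per lipid (assumed)
packing_factor = 3.0     # headroom for inefficient random insertion (assumed)
volume = n_lipids * v_lipid * packing_factor
edge = volume ** (1.0 / 3.0)
print(f"cubic box edge ~ {edge:.1f} nm")   # lands in the 6-8 nm range cited above
```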


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-12 Thread sebastian
Dear GROMACS user,

I am running into major problems trying to use gromacs 4.6 on my desktop
with two GTX 670 GPUs and one i7 CPU. On the system I installed
CUDA 4.2, which runs fine for many different test programs.
Compiling the git version of gromacs 4.6 with hybrid acceleration I get
one error message about a missing libxml2, but it compiles with no further
complaints. The tools I tested (like g_rdf or grompp, etc.) work fine as
long as I generate the tpr files with the right gromacs version.
Now, if I try to use mdrun (GMX_GPU_ID=1 mdrun -nt 1 -v -deffnm )
the preparation seems to work fine until it starts the actual run. It
stops with a segmentation fault:

Reading file pdz_cis_ex_200ns_test.tpr, VERSION
4.6-dev-20121002-20da718-dirty (single precision)

Using 1 MPI thread

Using 1 OpenMP thread


2 GPUs detected:

  #0: NVIDIA GeForce GTX 670, compute cap.: 3.0, ECC:  no, stat: compatible

  #1: NVIDIA GeForce GTX 670, compute cap.: 3.0, ECC:  no, stat: compatible


1 GPU user-selected to be used for this run: #1


Using CUDA 8x8x8 non-bonded kernels


* WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *

We have just committed the new CPU detection code in this branch,

and will commit new SSE/AVX kernels in a few days. However, this

means that currently only the NxN kernels are accelerated!

In the mean time, you might want to avoid production runs in 4.6.


Back Off! I just backed up pdz_cis_ex_200ns_test.trr to
./#pdz_cis_ex_200ns_test.trr.4#


Back Off! I just backed up pdz_cis_ex_200ns_test.xtc to
./#pdz_cis_ex_200ns_test.xtc.4#


Back Off! I just backed up pdz_cis_ex_200ns_test.edr to
./#pdz_cis_ex_200ns_test.edr.4#

starting mdrun 'Protein in water'

350 steps,   7000.0 ps.

Segmentation fault


Since I have no idea what's going wrong, any help is welcome.
Attached you find the log file.

Thanks a lot

Sebastian






[gmx-users] is it possible?

2012-11-12 Thread Albert

Hello:

 Recently, in a published paper I found that someone claimed they
observed an Asp being protonated and deprotonated from time to time during a
microsecond MD simulation with Gromacs. I am still curious about this kind
of observation. Has anybody else observed a covalent bond being broken in
normal MD simulations?


thank you very much
Albert


[gmx-users] Error of violate the Second Law of Thermodynamics in Free energy calculation with BAR

2012-11-12 Thread badamkhatan togoldor
Dear GMX users

Hi. I'm calculating some small organic molecules' desolvation free energies.
Recently I got this error from the BAR calculation. Can anyone please explain
what's wrong here?

First:
WARNING: Using the derivative data (dH/dlambda) to extrapolate delta H values.
This will only work if the Hamiltonian is linear in lambda.

and then Second:
WARNING: Some of these results violate the Second Law of Thermodynamics: 
 This is can be the result of severe undersampling, or (more likely)
 there is something wrong with the simulations.

I have two MD simulation steps; for example, the desolvation free energy of
chloroform and methanol:
The final result from MD1 something like this:
total   0.000 -  1.000,   DG  7.72 +/-  0.05

The final result from MD2 something like this:
total   0.000 -  1.000,   DG  3.96 +/-  0.06

Total Gibbs energy of desolvation: 11.7 +/- 0.1 kJ/mol (obtained along with
these two warnings).
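For what it's worth, the quoted total is consistent with summing the two legs and combining the uncertainties in quadrature (assuming the two simulations are independent):

```python
import math

# Combine the two legs of the cycle; errors are added in quadrature under
# the assumption that the two simulations are statistically independent.
dg1, err1 = 7.72, 0.05   # MD1
dg2, err2 = 3.96, 0.06   # MD2
total = dg1 + dg2
err = math.sqrt(err1**2 + err2**2)
print(f"{total:.1f} +/- {err:.1f} kJ/mol")
```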

Something wrong in MD?  
MD1.mdp
;Run control
integrator   = sd   ; Langevin dynamics
tinit    = 0
dt   = 0.002
nsteps   = 100  ; 2 ns
nstcomm  = 100
; Output control
nstxout  = 500
nstvout  = 500
nstfout  = 0
nstlog   = 500
nstenergy    = 500
nstxtcout    = 0
xtc-precision    = 1000
; Neighborsearching and short-range nonbonded interactions
nstlist  = 10
ns_type  = grid
pbc  = xyz
rlist    = 1.0
; Electrostatics
coulombtype  = PME
rcoulomb = 1.0
; van der Waals
vdw-type = switch
rvdw-switch  = 0.8
rvdw = 0.9
; Apply long range dispersion corrections for Energy and Pressure
DispCorr  = EnerPres
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; EWALD/PME/PPPM parameters
pme_order    = 6
ewald_rtol   = 1e-06
epsilon_surface  = 0
optimize_fft = no
; Temperature coupling
; tcoupl is implicitly handled by the sd integrator
tc_grps  = system
tau_t    = 0.2
ref_t    = 300
; Pressure coupling is on for NPT
Pcoupl   = Parrinello-Rahman 
tau_p    = 5
compressibility  = 4.5e-05
ref_p    = 1.0 
; Free energy control stuff
free_energy  = yes
init_lambda  = 0.0
delta_lambda = 0
foreign_lambda   = 0.05
sc-alpha = 0.5
sc-power = 1.0
sc-sigma = 0.3 
couple-lambda0   = vdw-q  ; vdW and Coulomb both on in state A
couple-lambda1   = vdw ; Coulomb off in state B; vdW still on
couple-intramol  = no
nstdhdl  = 10
; Do not generate velocities
gen_vel  = no 
; options for bonds
constraints  = all-bonds  ; 
; Type of constraint algorithm
constraint-algorithm = lincs
; Constrain the starting configuration
; since we are continuing from NPT
continuation = yes 
; Highest order in the expansion of the constraint coupling matrix
lincs-order  = 12

MD2.mdp
;Run control
integrator   = sd   ; Langevin dynamics
tinit    = 0
dt   = 0.002
nsteps   = 100  ; 2 ns
nstcomm  = 100
; Output control
nstxout  = 500
nstvout  = 500
nstfout  = 0
nstlog   = 500
nstenergy    = 500
nstxtcout    = 0
xtc-precision    = 1000
; Neighborsearching and short-range nonbonded interactions
nstlist  = 10
ns_type  = grid
pbc  = xyz
rlist    = 1.0
; Electrostatics
coulombtype  = PME
rcoulomb = 1.0
; van der Waals
vdw-type = switch
rvdw-switch  = 0.8
rvdw = 0.9
; Apply long range dispersion corrections for Energy and Pressure
DispCorr  = EnerPres
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; EWALD/PME/PPPM parameters
pme_order    = 6
ewald_rtol   = 1e-06
epsilon_surface  = 0
optimize_fft = no
; Temperature coupling
; tcoupl is implicitly handled by the sd integrator
tc_grps  = system
tau_t    = 0.2
ref_t    = 300
; Pressure coupling is on for NPT
Pcoupl   = Parrinello-Rahman 
tau_p    = 5
compressibility  = 4.5e-05
ref_p    = 1.0 
; Free energy control stuff
free_energy  = yes
init_lambda  = 0.0
delta_lambda = 0
foreign_lambda   = 0.05
sc-alpha = 0.5
sc-power = 1.0
sc-sigma 

[gmx-users] Question about scaling

2012-11-12 Thread Thomas Schlesier

Dear all,
I did some scaling tests for a cluster and I'm a little bit clueless
about the results.

So first the setup:

Cluster:
Saxonid 6100, Opteron 6272 16C 2.100GHz, Infiniband QDR
GROMACS version: 4.0.7 and 4.5.5
Compiler:   GCC 4.7.0
MPI: Intel MPI 4.0.3.008
FFT-library: ACML 5.1.0 fma4

System:
895 spce water molecules
Simulation time: 750 ps (0.002 ps timestep)
Cut-off: 1.0 nm
but with long-range correction ( DispCorr = EnerPres ; PME (standard 
settings) - but in each case no extra CPU solely for PME)

V-rescale thermostat and Parrinello-Rahman barostat

I get the following timings (in seconds), where each is expressed as the
time that would be needed on 1 CPU (so if a job on 2 CPUs took X s, the
reported time is 2 * X s).

These timings were taken from the *.log file, at the end of the
'real cycle and time accounting' - section.

Timings:
gmx-version   1 CPU   2 CPUs   4 CPUs
4.0.7          4223     3384     3540
4.5.5          3780     3255     2878
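One way to see the oddity is to convert the normalized timings into parallel efficiencies; values above 1 would mean superlinear scaling, which for a system this small usually points at a setup or measurement artifact:

```python
# Parallel efficiency from CPU-time-normalized timings: since each entry is
# already n_cpu * wallclock, efficiency is simply t(1 cpu) / t(n cpu).
timings = {
    "4.0.7": {1: 4223, 2: 3384, 4: 3540},
    "4.5.5": {1: 3780, 2: 3255, 4: 2878},
}
for version, t in timings.items():
    for n in (2, 4):
        print(f"{version}, {n} CPUs: efficiency {t[1] / t[n]:.2f}")
```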

I'm a little bit clueless about the results. I always thought that if I
have a non-interacting system and double the number of CPUs, I would get
a simulation which takes only half the time (so the times as defined
above would be equal). If the system does have interactions, I would
lose some performance due to communication. Due to node imbalance there
could be a further loss of performance.


Keeping this in mind, I can only explain the timings for version 4.0.7
going from 2 to 4 CPUs (2 CPUs a little bit faster, since going to 4 CPUs
leads to more communication and thus a loss of performance).


All the other timings, especially that 1 CPU takes longer in each case
than the other cases, I do not understand.
Probably the system is too small and/or the simulation time is too
short for a scaling test. But I would assume that the amount of time to
set up the simulation would be equal for all three cases of one
GROMACS version.
The only other explanation which comes to my mind would be that something
went wrong during the installation of the programs...


Please, can somebody enlighten me?

Greetings
Thomas


Re: [gmx-users] is it possible?

2012-11-12 Thread Justin Lemkul



On 11/12/12 10:29 AM, Albert wrote:

Hello:

  Recently, in a published paper I found that someone claimed they observed an
Asp being protonated and deprotonated from time to time during a microsecond MD
simulation with Gromacs. I am still curious about this kind of observation. Has
anybody else observed a covalent bond being broken in normal MD simulations?



A link to the paper would be helpful, otherwise the commentary is a bit vague.

In classical MD, no, bonds cannot break and form, but there are ways of changing 
protonation states (titration MD, dual-topology approaches with virtual sites, 
etc).  Titration MD is in the works for Gromacs, but at present, is not possible.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Question about scaling

2012-11-12 Thread Carsten Kutzner
Hi Thomas,

On Nov 12, 2012, at 5:18 PM, Thomas Schlesier schl...@uni-mainz.de wrote:

 Dear all,
 i did some scaling tests for a cluster and i'm a little bit clueless about 
 the results.
 So first the setup:
 
 Cluster:
 Saxonid 6100, Opteron 6272 16C 2.100GHz, Infiniband QDR
 GROMACS version: 4.0.7 and 4.5.5
 Compiler: GCC 4.7.0
 MPI: Intel MPI 4.0.3.008
 FFT-library: ACML 5.1.0 fma4
 
 System:
 895 spce water molecules
this is a somewhat small system I would say.

 Simulation time: 750 ps (0.002 ps timestep)
 Cut-off: 1.0 nm
 but with long-range correction ( DispCorr = EnerPres ; PME (standard 
 settings) - but in each case no extra CPU solely for PME)
 V-rescale thermostat and Parrinello-Rahman barostat
 
 I get the following timings (in seconds), where each is expressed as the
 time that would be needed on 1 CPU (so if a job on 2 CPUs took X s, the
 reported time is 2 * X s).
 These timings were taken from the *.log file, at the end of the
 These timings were taken from the *.log file, at the end of the
 'real cycle and time accounting' - section.
 
 Timings:
 gmx-version   1cpu   2cpu   4cpu
 4.0.7         4223   3384   3540
 4.5.5         3780   3255   2878
Do you mean CPUs or CPU cores? Are you using the IB network or are you running 
single-node?

 
 I'm a little bit clueless about the results. I always thought that if I have 
 a non-interacting system and double the number of CPUs, I
You do use PME, which means a global interaction of all charges.

 would get a simulation which takes only half the time (so the times as 
 defined above would be equal). If the system does have interactions, i would 
 lose some performance due to communication. Due to node imbalance there could 
 be a further loss of performance.
 
 Keeping this in mind, I can only explain the timings for version 4.0.7, 2cpu 
 vs. 4cpu (2cpu a little bit faster, since going to 4cpu leads to more 
 communication and hence a loss of performance).
 
 All the other timings I do not understand, especially that the 1-CPU run 
 takes longer in each case than the parallel runs.
 Probably the system is too small and/or the simulation time is too short 
 for a scaling test. But I would assume that the amount of time to set up the 
 simulation would be equal for all three cases of one GROMACS version.
 The only other explanation that comes to my mind would be that something went 
 wrong during the installation of the programs…
You might want to take a closer look at the timings in the md.log output files, 
this will 
give you a clue where the bottleneck is, and also tell you about the 
communication-computation 
ratio.
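For what it's worth, the quoted timings can be turned into speedup/efficiency figures with a few lines of Python. This is only a sketch; it assumes the run-together digits in the table above split as 4223/3384/3540 (4.0.7) and 3780/3255/2878 (4.5.5), which is consistent with the observations in the thread.

```python
# Parallel efficiency from the quoted timings. The thread reports times
# normalized to 1 CPU (t_norm = n_cores * wall-clock time), so the
# efficiency formula reduces to t_serial / t_norm.
timings = {  # assumed digit split of the quoted table
    "4.0.7": {1: 4223, 2: 3384, 4: 3540},
    "4.5.5": {1: 3780, 2: 3255, 4: 2878},
}

def efficiency(t_norm, n_cores, t_serial):
    """Parallel efficiency = t_serial / (n_cores * wall-clock time)."""
    wall_clock = t_norm / n_cores
    return t_serial / (n_cores * wall_clock)

for version, t in timings.items():
    for n in (2, 4):
        print(f"{version} on {n} cores: efficiency {efficiency(t[n], n, t[1]):.2f}")
```

An efficiency above 1.0 means the parallel run did more work per core-second than the serial run, which is exactly the puzzling behaviour described above.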

Best,
  Carsten


 
 Please, can somebody enlighten me?
 
 Greetings
 Thomas


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner



Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-12 Thread sebastian
On 11/12/2012 04:12 PM, sebastian wrote:
 Dear GROMACS user,

 I am running into major problems trying to use gromacs 4.6 on my desktop
 with two GTX 670 GPU's and one i7 cpu. On the system I installed the
 CUDA 4.2, running fine for many different test programs.
 Compiling the git version of gromacs 4.6 with hybrid acceleration I get
 one error message of a missing libxml2 but it compiles with no further
 complaints. The tools I tested (like g_rdf or grompp etc.) work fine as
 long as I generate the tpr files with the right gromacs version.
 Now, if I try to use mdrun (GMX_GPU_ID=1 mdrun -nt 1 -v -deffnm ) 
 the preparation seems to work fine until it starts the actual run. It
 stops with a segmentation fault:

 Reading file pdz_cis_ex_200ns_test.tpr, VERSION
 4.6-dev-20121002-20da718-dirty (single precision)

 Using 1 MPI thread

 Using 1 OpenMP thread


 2 GPUs detected:

   #0: NVIDIA GeForce GTX 670, compute cap.: 3.0, ECC:  no, stat: compatible

   #1: NVIDIA GeForce GTX 670, compute cap.: 3.0, ECC:  no, stat: compatible


 1 GPU user-selected to be used for this run: #1


 Using CUDA 8x8x8 non-bonded kernels


 * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *

 We have just committed the new CPU detection code in this branch,

 and will commit new SSE/AVX kernels in a few days. However, this

 means that currently only the NxN kernels are accelerated!
   

Since it does run as a pure CPU run (without the Verlet cut-off scheme),
would it maybe help to change the NxN kernels manually in the .mdp file
(and how can I do so)? Or is there something wrong with using the CUDA 4.2
version? The libxml2 warning should not be the problem, since the
pure CPU run works.

 In the mean time, you might want to avoid production runs in 4.6.


 Back Off! I just backed up pdz_cis_ex_200ns_test.trr to
 ./#pdz_cis_ex_200ns_test.trr.4#


 Back Off! I just backed up pdz_cis_ex_200ns_test.xtc to
 ./#pdz_cis_ex_200ns_test.xtc.4#


 Back Off! I just backed up pdz_cis_ex_200ns_test.edr to
 ./#pdz_cis_ex_200ns_test.edr.4#

 starting mdrun 'Protein in water'

 350 steps,   7000.0 ps.

 Segmentation fault


 Since I have no idea what's going wrong, any help is welcome.
 Attached you find the log file.
   

Help is really appreciated, since I want to use my new desktop including
the GPUs.

 Thanks a lot

 Sebastian





   



[gmx-users] compressibility water - TFE mixture

2012-11-12 Thread jojartb

Dear Users,
I would like to study water - TFE mixtures at different molar ratios.
Is it a reasonable approach to set the compressibility to the value of
water at [100/0 - water/TFE] and to that of TFE at [0/100 - water/TFE],
and to interpolate the compressibility values for the other mixtures?
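The interpolation proposed above can be sketched in a few lines. Note that the linear mole-fraction mixing rule is the questioner's own assumption (real water/TFE mixtures may deviate from linearity), and the TFE compressibility used below is a placeholder, not a reference value.

```python
def mix_compressibility(kappa_water, kappa_tfe, x_tfe):
    """Linear mole-fraction interpolation of isothermal compressibility.

    This is only the naive mixing rule proposed in the question;
    real water/TFE mixtures may deviate from linearity."""
    if not 0.0 <= x_tfe <= 1.0:
        raise ValueError("mole fraction must be in [0, 1]")
    return (1.0 - x_tfe) * kappa_water + x_tfe * kappa_tfe

# kappa_water ~ 4.5e-5 bar^-1 is the commonly used .mdp value for water;
# the TFE value below is a placeholder, not a reference number.
kappa_mix = mix_compressibility(4.5e-5, 9.0e-5, 0.25)
```

The resulting value would go into the `compressibility` field of the .mdp file for the corresponding mixture.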

Thank you for your help in advance!
Balazs




Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-12 Thread Szilárd Páll
Hi Sebastian,

That is very likely a bug, so I'd appreciate it if you could provide a bit
more information, such as:

- OS, compiler

- results of runs with the following configurations:
  - mdrun -nb cpu (to run CPU-only with Verlet scheme)
 - GMX_EMULATE_GPU=1 mdrun -nb gpu (to run GPU emulation using plain C
kernels);
  - mdrun without any arguments (which will use 2x(n/2 cores + 1 GPU))
  - mdrun -ntmpi 1 without any other arguments (which will use n cores +
the first GPU)

- please attach the log files of all failed and a successful run as well as
the mdrun.debug file from a failed runs that you can obtain with mdrun
-debug 1

Note that a backtrace would be very useful and I'd be grateful if you can
get one, but for now the above should be minimal effort; I'll provide
simple instructions for getting a backtrace later (if needed).

Thanks,

--
Szilárd




[gmx-users] Re: Setting up a complex membrane simulation

2012-11-12 Thread Christopher Neale
The simple answer is yes, you could make Lipid_A-box.gro larger in the bilayer 
plane. That probably won't address the underlying problem, though.

As far as I know, genbox doesn't take periodicity into account. That means that 
with larger species, such as lipid A
you are going to need to start with a much larger box and let pressure 
equilibration bring it down to the correct 
size in the context of PBC during mdrun. The alternative is to build a crystal 
with your own script and then set the
box boundaries yourself so that periodicity is taken into account.

Note that this is a major limitation of genbox and it would be great if 
somebody had the time to address it...

Also note that there will be a similar packing problem between lipids even if 
you discount problems with PBC
packing. There are only 2 ways to deal with this: (a) start with a crystal or 
(b) start with a sparse bilayer and have a
good equilibration technique. If you go for option A then you need to be aware 
of biases from the starting 
crystal. If you go for option B, then you'll likely have problems with lipid A 
acyl chains moving out toward bulk water 
and then not equilibrating on your achievable simulation timescale (there's at 
least one paper out there describing how to build a lipid A system and 
circumvent some setup problems; you should read it. I think it's at least 10 
years old). The main problem with setup is that lipid A equilibration times are 
orders of magnitude larger than those for
phospholipids. I have personally had good success with high-temperature 
equilibration in which I add restraints to
a number of atoms that keep their z-coordinates in certain regions (where z is 
along the bilayer normal).

In fact, a good technique is probably to build a sparse lipid A bilayer,
restrain all Z coordinates of all lipid A atoms to their original values, and 
run some high-temperature MD to get 
some preliminary packing before slowly releasing the z-value restraints. I 
haven't done this myself, but it is what I would do the next time I need to 
build a lipid A bilayer from scratch.
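The z-only restraints described above can be written with ordinary GROMACS position restraints whose x and y force constants are zero; a minimal .itp sketch (the atom index and force constant are illustrative placeholders, not values from this thread):

```
[ position_restraints ]
; atom  funct   fcx     fcy     fcz    ; force constants in kJ mol^-1 nm^-2
    1     1       0       0    1000    ; restrain z only, leave x/y free
```

One entry per restrained atom; such blocks are conventionally wrapped in an #ifdef so they can be toggled from the .mdp `define` line, and the restraints can be released gradually by lowering fcz between runs.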

Finally, there are recent papers in the literature on lipid A simulations. 
Perhaps you can get a lipid A bilayer from
one of those groups. I didn't have any luck obtaining these myself but perhaps 
they will be kinder to you if you ask 
nicely. Then you can bypass the building step entirely.

Chris.

-- original message --


Ok, what i've currently done:

I start with a pdb file of a single molecule Lipid_A and its topology and
put it in a box:

editconf -f Lipid_A.pdb -o Lipid_A-box.gro -bt triclinic -d 1.0

I want a total of 128 molecules of this lipid in my system so I use genbox
to add 127 more:

genbox -cp Lipid_A-box.gro -ci Lipid_A.pdb -p Lipid_A.top -o 128_Lipid_A.pdb
-nmol 127

This only adds 8 molecules, I'm going to guess because more don't fit in my
box. How do I deal with this? How can I know in advance how big I have to
make the box?
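A rough way to estimate the required box size in advance is to divide the total molecular volume by an assumed packing fraction for random insertion. A sketch; the per-molecule volume and packing fraction below are guesses for illustration, not measured values:

```python
def box_edge_for_insertion(n_mol, vol_per_mol_nm3, packing_fraction=0.3):
    """Rough cubic box edge (nm) so that genbox-style random insertion can
    place n_mol copies.

    packing_fraction is how densely random insertion typically manages to
    pack bulky molecules (an assumption, not a measured value)."""
    total_volume = n_mol * vol_per_mol_nm3 / packing_fraction
    return total_volume ** (1.0 / 3.0)

# 128 lipid A molecules at a guessed ~2.5 nm^3 per molecule:
edge = box_edge_for_insertion(128, 2.5)
```

The resulting edge length could be passed to editconf via -box before running genbox; after insertion, pressure equilibration during mdrun would bring the box down to a realistic density.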

kind regards,
