Re: [gmx-users] Creating POSRES option for MDP file

2015-02-27 Thread Justin Lemkul



On 2/27/15 8:28 PM, Agnivo Gosai wrote:

Dear Users

My system can be divided into 2 default groups, namely DNA and
Protein. To use the pull code, I want to apply a pulling force to the DNA
and keep the Protein fixed.

Now, I have two itp files for my protein (two chains, A and B). So do I
need to include something like this:
; Include Position restraint file
#ifdef POSRES_P
#include "posre_Protein_chain_A.itp"
#endif
and
; Include Position restraint file
#ifdef POSRES_P
#include "posre_Protein_chain_B.itp"
#endif

in the itp files for the two chains, respectively, and then use
define = -DPOSRES_P in the .mdp file for the pulling simulation?



The approach is correct, provided the restraint #include statements immediately 
follow the [moleculetype] directives to which they apply.
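
For example, a minimal sketch of where the include goes in the chain A
topology (file names follow the poster's; the same pattern applies to
chain B):

[ moleculetype ]
; name              nrexcl
Protein_chain_A     3

[ atoms ]
; ... the rest of the chain A directives ...

; Include Position restraint file
#ifdef POSRES_P
#include "posre_Protein_chain_A.itp"
#endif

and in the pulling .mdp file:

define = -DPOSRES_P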


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] GMX 5.0 compilation across different platforms

2015-02-27 Thread David McGiven
Dear All,

I would like to statically compile GROMACS 5 on an Intel Xeon X3430 machine
with gcc 4.7 (cluster front node) BUT run it on an Intel Xeon E5-2650V2
machine (cluster compute node).

Would that be possible? And if so, how should I do it?

I haven't found this on the Installation_Instructions webpage.

Thanks in advance.

BR,
D.


Re: [gmx-users] GMX 5.0 compilation across different platforms

2015-02-27 Thread Szilárd Páll
You need to cross-compile by selecting the SIMD support and possibly the
RDTSCP support explicitly. It's documented here:
http://www.gromacs.org/Documentation/Installation_Instructions#portability-aspects
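
For example, a sketch of the CMake configuration for this case (my reading
of the 5.0 options; the E5-2650V2 is Ivy Bridge, so AVX_256 is the matching
SIMD level, and both CPUs support RDTSCP):

cmake .. -DGMX_SIMD=AVX_256 -DGMX_USE_RDTSCP=ON \
         -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON

The front node only has to compile the AVX kernels, not run them, so
building on the older X3430 is fine as long as the binary is executed only
on the compute nodes.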
--
Szilárd


On Fri, Feb 27, 2015 at 2:54 PM, David McGiven davidmcgiv...@gmail.com wrote:
 Dear All,

 I would like to statically compile GROMACS 5 in an Intel Xeon X3430 machine
 with gcc4.7 (cluster front node) BUT run it on an Intel Xeon E5-2650V2
 machine (cluster compute node).

 Would that be possible ? And if so, how should I do it ?

 Haven't found it on the Installation_Instructions webpage.

 Thanks in advance.

 BR,
 D.


Re: [gmx-users] Doubt about energies in a very simple system

2015-02-27 Thread IÑIGO SAENZ
Hi,

Thank you all for your answers.
Let me explain the circumstances that surround this. I'm using a
developer's version of ACEMD, which grants me access to its internal data
structures.
When I run this version of ACEMD, it gives me the information on all the
atoms, bonds, angles, dihedrals, exclusion list, 1-4 pairs and running
parameters (cutoffs, switching mode, etc.) of the system that I'm running;
with this information I can create the .gro, .top and .mdp files.

There may be something weird in the .top file that I showed you previously
in this post. If you take a look at this:


[ moleculetype ]
; name    nrexcl
sys       0

This nrexcl = 0 may look strange, but it actually makes sense: as I
explained a few lines above, ACEMD gives me the exclusion list explicitly,
so with nrexcl = 0 GROMACS calculates the interactions for all pairs of
atoms within the cutoff radius except those in the exclusion list.

Let me show you two real cases that I've tried today.


400 WATER MOLECULES

              ACEMD         NAMD          GROMACS
LJ            2540.0544     2538.2893     2538.28
COULOMB      -3996.3431    -3993.6970    -3993.17

This first case is an SPE simulation of a box full of water molecules,
nothing more, only 400 water molecules.
As you can see, the results are pretty much the same; the energies agree
across the three programs (well, ACEMD deviates by 2 kJ/mol in LJ and
3 kJ/mol in Coulomb, but this doesn't really matter).

I want to highlight that water molecules don't have 1-4 pairs, only
exclusions. I say this because I'm sure that the problem in the following
case (actually in all cases) is caused by the 1-4 interactions.


1A1X

              ACEMD          NAMD           GROMACS
BOND          201.9381       201.4818       201.809
ANGLE         1318.6907      1317.691       1317.8
DIHEDRAL      5225.9049      5222.46        5222.41
LJ            25404.8407     25391.6948     25386.68
COULOMB      -219604.180    -219387.922    -231465.3

This second case is the SPE calculation of a PDB structure (code 1a1x)
surrounded by water.
As you can see, almost every energy term coincides between the three
programs (with the expected deviation from using distinct software),
but the Coulomb result doesn't make sense in the GROMACS case: it deviates
by more than 11000 kJ/mol with respect to ACEMD or NAMD.
I put here the LJ and Coulomb energies in the way that g_energy reports
them:

LJ(SR)   = 27393.2
LJ-14= 2006.52
Coulomb(SR)  = -216459
Coulomb-14   = 15006.3

Looking back at the water example, which has no 1-4 interactions, I
imagine that the Coulomb(SR) energy term is calculated correctly (in the
water system the Coulomb energies coincide) and that the problem comes from
the Coulomb-14 energy term; but this doesn't make much sense either,
because in that case the LJ energies should also be bad...
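
In case it helps the diagnosis: in a GROMACS topology the 1-4 scaling
factors are set in the [ defaults ] directive, and a Coulomb-14 offset of
this kind can come from a fudgeQQ that doesn't match the source force
field. A generic sketch (the numbers are placeholders, not necessarily
what this system uses):

[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          yes        1.0      1.0

fudgeQQ scales only the Coulomb 1-4 interactions, so a wrong value there
would shift Coulomb-14 while leaving LJ-14 (scaled separately by fudgeLJ)
looking reasonable.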

Thank you very much for your attention; any help or idea about what's going
wrong would be highly appreciated.



Iñigo Sáenz
Universitat Pompeu Fabra



2015-02-26 2:41 GMT+01:00 Justin Lemkul jalem...@vt.edu:



 On 2/24/15 3:23 PM, IÑIGO SAENZ wrote:

 Hi Justin,

 I always do the SPE as follows:
 grompp -f SPE.mdp -p sys.top -c sys.gro
 and after that I simply execute mdrun; I didn't know about the mdrun
 -rerun option.

 Now I have done: mdrun -s topol.tpr -rerun sys.gro

 but the energy results are exactly the same.



 Start with something simpler; an actual molecule with normal bonded
 interactions.  Equivalency of energy between GROMACS and many programs has
 been shown many times over, so it's certainly possible to prove.  Likely
 there's just something weird about what you're doing with your combinations
 of exclusions, pairs, etc.


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==


[gmx-users] Re : GROMACS 4.6.7 not running on more than 16 MPI threads

2015-02-27 Thread Agnivo Gosai
Dear Users

My problem is solved and GROMACS is running successfully on multiple nodes
now. The problem was that I accidentally modified a .ssh file in such a
way that it broke the system's ability to spread the job among cluster
nodes.
This was fixed by my system administrator.

Thanks & Regards
Agnivo Gosai
Grad Student, Iowa State University.


[gmx-users] gromacs_win32.zip CPU version: All MS_Windows version: XP/NT/95 Libraries: Static Size: 17633280 FFTW version: 2.1.3 MPI support: no Installation prefix: customizable Comments: This is an

2015-02-27 Thread Vasiliy Znamenskiy
I hoped that the package (gromacs_win32.zip,
http://www.gromacs.org/Downloads/User_contributions/Gromacs_binaries/i686%2f%2fMS_Windows
), with programs ready to run, would speed up my study of the GROMACS
program, but it turned out that I got stuck at the GROMPP step.
There was an error when executing the training example.
All my attempts to understand and correct or replace things also produce
errors.

Can anybody give me useful advice?


Re: [gmx-users] gromacs_win32.zip CPU version: All MS_Windows version: XP/NT/95 Libraries: Static Size: 17633280 FFTW version: 2.1.3 MPI support: no Installation prefix: customizable Comments: This is

2015-02-27 Thread ms

On 2/27/15 10:21 AM, Vasiliy Znamenskiy wrote:

I hoped that the package (gromacs_win32.zip,
http://www.gromacs.org/Downloads/User_contributions/Gromacs_binaries/i686%2f%2fMS_Windows
), with programs ready to run, would speed up my study of the GROMACS
program, but it turned out that I got stuck at the GROMPP step.
There was an error when executing the training example.
All my attempts to understand and correct or replace things also produce
errors.

Can anybody give me useful advice?

Not if you don't provide your exact commands and error message (copy and 
paste from your terminal).


cheers,
M.


[gmx-users] Error in generated Replica Exchange statistics in log file

2015-02-27 Thread Abhi Acharya
Hello GROMACS users,

I have a problem with the replica exchange statistics printed in the log
file. I ran a 25 ns replica exchange simulation on a cluster through a
queuing system. However, due to insufficient wall time the job stopped
abruptly around 24 ns and no stats were printed in the log file. I then
extended the run by 10000 steps using -nsteps and provided the additional
-cpi flag for continuation. The simulation ran without errors, but the
replica exchange stats printed in the log file of each replica are as
follows:

Replica exchange statistics
Repl  10 attempts, 5 odd, 5 even
Repl  average probabilities:
Repl 0123456789   10   11   12   13
  14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
  29   30   31
Repl  .23  .45  .57  .12  .28  .20  .82  .46  .44  .50  .07  .74  .53
 .46  .44  .51  .22  .56  .24  .56  .23  .82  .30  .28  .59  .54  .34  .53
 .75  .29  .62
Repl  number of exchanges:
Repl 0123456789   10   11   12   13
  14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
  29   30   31
Repl    1    2    3    0    0    2    4    2    2    2    0    4    3
    2    2    4    1    4    0    3    2    4    1    0    2    4    2    1
    4    1    2
Repl  average number of exchanges:
Repl 0123456789   10   11   12   13
  14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
  29   30   31
Repl  .20  .40  .60  .00  .00  .40  .80  .40  .40  .40  .00  .80  .60
 .40  .40  .80  .20  .80  .00  .60  .40  .80  .20  .00  .40  .80  .40  .20
 .80  .20  .40



The exchange attempt interval used was 1000 steps, which means that the
above stats somehow only take into account the last 10000 steps of the
simulation. Is this some bug, or did I do something wrong? More importantly,
is there a way to regenerate the replica exchange stats from the log file?


Regards,
Abhishek Acharya
Shasara Research Foundation


[gmx-users] Umbrella sampling Tutorial Fatal Error

2015-02-27 Thread Nima Soltani
Hi Dear Gromacs Users,
I am following the umbrella sampling tutorial provided by Dr. Justin Lemkul.
(This tutorial is not updated for GROMACS 5; however, I am using GROMACS
5.0.2 and I am trying my best to use files and commands compatible with
version 5.)

I have done all the parts up to the pulling section very well:
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/umbrella/05_pull.html
but at the stage where I want to generate the pull.tpr file it gives me a
fatal error:
Fatal error:
Group pull_group1 required by grompp was undefined.
I checked the spelling of Chain_A, which I named using the gmx make_ndx
command at the previous step.
Any advice or guidance would be greatly appreciated.
Best Regards,
Nima Soltani
--
Graduate Student of Physical Chemistry
Department of Chemistry,
Sharif University of Technology.
=


[gmx-users] Fwd: Umbrella sampling Tutorial Fatal Error

2015-02-27 Thread Nima Soltani
Excuse me, I forgot to attach the complete error page:
nima@nima-ThinkPad:~/Desktop/3$ gmx grompp -f md_pull.mdp -c npt.gro -p
topol.top -n index.ndx -t npt.cpy -o pull.tpr
GROMACS:gmx grompp, VERSION 5.0.2

GROMACS is written by:
Emile Apol Rossen Apostolov   Herman J.C. Berendsen Par
Bjelkmar
Aldert van Buuren  Rudi van DrunenAnton Feenstra Sebastian Fritsch
Gerrit GroenhofChristoph Junghans Peter Kasson   Carsten Kutzner
Per LarssonJustin A. Lemkul   Magnus LundborgPieter Meulenhoff
Erik Marklund  Teemu Murtola  Szilard Pall   Sander Pronk
Roland Schulz  Alexey ShvetsovMichael Shirts Alfons Sijbers
Peter Tieleman Christian Wennberg Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2014, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx grompp, VERSION 5.0.2
Executable:   /usr/local/gromacs/bin/gmx
Library dir:  /usr/local/gromacs/share/gromacs/top
Command line:
  gmx grompp -f md_pull.mdp -c npt.gro -p topol.top -n index.ndx -t npt.cpy
-o pull.tpr


NOTE 1 [file md_pull.mdp, line 62]:
  md_pull.mdp did not specify a value for the .mdp option cutoff-scheme.
  Probably it was first intended for use with GROMACS before 4.6. In 4.6,
  the Verlet scheme was introduced, but the group scheme was still the
  default. The default is now the Verlet scheme, so you will observe
  different behaviour.

Ignoring obsolete mdp entry 'title'
Ignoring obsolete mdp entry 'optimize_fft'
Replacing old mdp entry 'nstxtcout' by 'nstxout-compressed'
ERROR: pull-coord1-groups should have 2 components

Back Off! I just backed up mdout.mdp to ./#mdout.mdp.4#

WARNING 1 [file md_pull.mdp, line 62]:
  Unknown left-hand 'pull_group0' in parameter file



WARNING 2 [file md_pull.mdp, line 62]:
  Unknown left-hand 'pull_group1' in parameter file



WARNING 3 [file md_pull.mdp, line 62]:
  Unknown left-hand 'pull_rate1' in parameter file



WARNING 4 [file md_pull.mdp, line 62]:
  Unknown left-hand 'pull_k1' in parameter file



NOTE 2 [file md_pull.mdp]:
  With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
  that with the Verlet scheme, nstlist has no effect on the accuracy of
  your simulation.


NOTE 3 [file md_pull.mdp]:
  nstcomm > nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy


NOTE 4 [file md_pull.mdp]:
  leapfrog does not yet support Nose-Hoover chains, nhchainlength reset to 1

Setting the LD random seed to 1032180820
Generated 165 of the 1596 non-bonded parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein_chain_A'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_B'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_C'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_D'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_E'
turning all bonds into constraints...
Excluding 2 bonded neighbours molecule type 'SOL'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'NA'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'CL'
turning all bonds into constraints...
Removing all charge groups because cutoff-scheme=Verlet
The center of mass of the position restraint coord's is  3.265  2.171  2.977
The center of mass of the position restraint coord's is  3.265  2.171  2.977

---
Program gmx, VERSION 5.0.2
Source code file:
/home/nima/Documents/Gromacs-Program/gromacs-5.0.2/src/gromacs/gmxpreprocess/readpull.c,
line: 257

Fatal error:
Group pull_group1 required by grompp was undefined.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


Best Regards,
Nima Soltani
--
Graduate Student of Physical Chemistry
Department of Chemistry,
Sharif University of Technology.
=

-- Forwarded message --
From: Nima Soltani nima@gmail.com
Date: Sat, Feb 28, 2015 at 3:34 AM
Subject: Umbrella sampling Tutorial Fatal Error
To: gmx-us...@gromacs.org


Hi Dear Gromacs Users
I am following umbrella sampling tutorial provided by Dr Justin Lemkul
(This tutorial is not updated for Gromacs 5 However I am using Gromacs

Re: [gmx-users] Fwd: Umbrella sampling Tutorial Fatal Error

2015-02-27 Thread Mohsen Ramezanpour
Hi,

Did you check the parameters in the .mdp file for the pulling simulation?
I think you have to define this pull group in the .mdp file that you use
with grompp.


On Fri, Feb 27, 2015 at 7:16 PM, Nima Soltani nima@gmail.com wrote:

 [quoted grompp output, identical to the previous message, snipped]

Re: [gmx-users] Problems Running Gromacs v-5.0.4

2015-02-27 Thread Stephen P. Molnar
Thank you for your reply.

Stephen P. Molnar, Ph.D.                 Life is a fuzzy set
Foundation for Chemistry                 Stochastic and multivariate
www.FoundationForChemistry.com
(614)312-7528 (c)
Skype:  smolnar1

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
Justin Lemkul
Sent: Friday, February 27, 2015 5:50 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Problems Running Gromacs v-5.0.4



On 2/27/15 4:01 PM, Stephen P. Molnar wrote:
 The original of this message was sent on 2/18.  To date it has been
 greeted with thundering silence.  I would really appreciate an answer,
 even if no solution is readily apparent.

 I have compiled and installed Gromacs v-5.0.4 (rather than using the
 version bundled with the distribution) in RoboLinux 7.81, without
 any warning or error messages.

 However, I have encountered problems running gmxdemo.  Although I get
 the message "Display variable is set" I see a flash, but the window
 does not stay open.  However, I do get the final display of the box
 and the simulation.


Probably my first question is how are you running gmxdemo?  It was removed
from the source code in 2012...before even the release of 4.6-beta1.

 The second problem probably involves permissions.  If I run the demo
 as the superuser (RoboLinux does not allow enabling the root account
 during installation), the demo runs to completion without problems,
 other than the vanishing window.

 Running gmxdemo as the superuser generated 39 files (including
 gmxdemo and cpeptide.pdb) while running as a user generated 23 files.

Depends on where GROMACS is installed and what you've done to get gmxdemo
running.  FWIW, it was removed because it wasn't useful and no one wanted to
maintain it.  If it's not working, the solution is to find better tutorial
material, as suggested in README.tutor :)

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] about source code of the old version of GROMACS (1.0 to 2.0)

2015-02-27 Thread Mark Abraham
On Fri, Feb 27, 2015 at 7:16 AM, Man Hoang Viet mhv...@ifpan.edu.pl wrote:

 Dear GROMACS team,

 I am looking for the source code of the old versions (v1.0 to v2.0) of
 GROMACS, which are not available for download on the GROMACS home page.
 Could you please share them with me?


I didn't find them on the ftp server in a brief excursion through the
accumulated junk. However release-2-0 is tagged in the git repo, and the
initial commit is (I think) release 1.6. If you really need 1.0 for some
reason, then you should contact the original authors directly.

Mark

Thank you very much,

 Yours sincerely,

 Viet Man



Re: [gmx-users] Fwd: Umbrella sampling Tutorial Fatal Error

2015-02-27 Thread Justin Lemkul



On 2/27/15 7:16 PM, Nima Soltani wrote:

Excuse me that I forgot to attach the complete Error page:
nima@nima-ThinkPad:~/Desktop/3$ gmx grompp -f md_pull.mdp -c npt.gro -p
topol.top -n index.ndx -t npt.cpy -o pull.tpr
GROMACS:gmx grompp, VERSION 5.0.2

GROMACS:  gmx grompp, VERSION 5.0.2
Executable:   /usr/local/gromacs/bin/gmx
Library dir:  /usr/local/gromacs/share/gromacs/top
Command line:
   gmx grompp -f md_pull.mdp -c npt.gro -p topol.top -n index.ndx -t npt.cpy
-o pull.tpr


NOTE 1 [file md_pull.mdp, line 62]:
   md_pull.mdp did not specify a value for the .mdp option cutoff-scheme.
   Probably it was first intended for use with GROMACS before 4.6. In 4.6,
   the Verlet scheme was introduced, but the group scheme was still the
   default. The default is now the Verlet scheme, so you will observe
   different behaviour.

Ignoring obsolete mdp entry 'title'
Ignoring obsolete mdp entry 'optimize_fft'
Replacing old mdp entry 'nstxtcout' by 'nstxout-compressed'
ERROR: pull-coord1-groups should have 2 components

Back Off! I just backed up mdout.mdp to ./#mdout.mdp.4#

WARNING 1 [file md_pull.mdp, line 62]:
   Unknown left-hand 'pull_group0' in parameter file



WARNING 2 [file md_pull.mdp, line 62]:
   Unknown left-hand 'pull_group1' in parameter file



WARNING 3 [file md_pull.mdp, line 62]:
   Unknown left-hand 'pull_rate1' in parameter file



WARNING 4 [file md_pull.mdp, line 62]:
   Unknown left-hand 'pull_k1' in parameter file




The simple answer is you have not translated the options properly.  Refer to:

http://manual.gromacs.org/online/mdp_opt.html#pull

I have been somewhat hesitant to update the tutorial for 5.0 because of all the 
changes, and the tutorial is linked directly to one of my papers.  I have been 
considering what to do for some time.  I don't intend to update the tutorial for 
a while at least, just because I don't have the time to make the changes and 
verify its accuracy, so please just ask questions via the list.


-Justin



NOTE 2 [file md_pull.mdp]:
   With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
   that with the Verlet scheme, nstlist has no effect on the accuracy of
   your simulation.


NOTE 3 [file md_pull.mdp]:
   nstcomm > nstcalcenergy defeats the purpose of nstcalcenergy, setting
   nstcomm to nstcalcenergy


NOTE 4 [file md_pull.mdp]:
   leapfrog does not yet support Nose-Hoover chains, nhchainlength reset to 1

Setting the LD random seed to 1032180820
Generated 165 of the 1596 non-bonded parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein_chain_A'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_B'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_C'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_D'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_E'
turning all bonds into constraints...
Excluding 2 bonded neighbours molecule type 'SOL'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'NA'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'CL'
turning all bonds into constraints...
Removing all charge groups because cutoff-scheme=Verlet
The center of mass of the position restraint coord's is  3.265  2.171  2.977
The center of mass of the position restraint coord's is  3.265  2.171  2.977

---
Program gmx, VERSION 5.0.2
Source code file:
/home/nima/Documents/Gromacs-Program/gromacs-5.0.2/src/gromacs/gmxpreprocess/readpull.c,
line: 257

Fatal error:
Group pull_group1 required by grompp was undefined.
For more information and tips for troubleshooting, please check the GROMACS
website at 

[gmx-users] Creating POSRES option for MDP file

2015-02-27 Thread Agnivo Gosai
Dear Users

My system can be divided into 2 default groups, namely DNA and
Protein. To use the pull code, I want to apply a pulling force to the DNA
and keep the Protein fixed.

Now, I have two itp files for my protein (two chains, A and B). So do I
need to include something like this:
; Include Position restraint file
#ifdef POSRES_P
#include "posre_Protein_chain_A.itp"
#endif
and
; Include Position restraint file
#ifdef POSRES_P
#include "posre_Protein_chain_B.itp"
#endif

in the itp files for the two chains, respectively, and then use
define = -DPOSRES_P in the .mdp file for the pulling simulation?

Or is there some other method?

Thanks & Regards
Agnivo Gosai
Grad Student, Iowa State University.


Re: [gmx-users] GPU low performance

2015-02-27 Thread Carmen Di Giovanni

I report the changes made to improve the performance of a molecular dynamics
run on a protein of 1925 running on an NVIDIA Tesla K20 GPU:

- To limit the number of cores used in the calculation (option -pin on)
and to get better performance:

gmx_mpi mdrun ... -ntomp 16 -pin on

where -ntomp sets the number of OpenMP threads.

- The clock frequency was increased from the default 705 MHz to 758 MHz
using the NVIDIA management tool.

- To avoid the runtime cost of calculating energies every step, in the
.mdp file:

nstcalcenergy = -1

The actual performance is about 7 ns/day against 2 ns/day without these
changes.

Carmen





- Original Message - 
From: Szilárd Páll pall.szil...@gmail.com

To: Carmen Di Giovanni cdigi...@unina.it
Cc: Discussion list for GROMACS users gmx-us...@gromacs.org
Sent: Friday, February 20, 2015 1:25 AM
Subject: Re: [gmx-users] GPU low performance


Please consult the manual and wiki.


--
Szilárd


On Thu, Feb 19, 2015 at 6:44 PM, Carmen Di Giovanni cdigi...@unina.it 
wrote:


Szilard,
about:

Fatal error
1) Setting the number of thread-MPI threads is only supported with
thread-MPI
and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the 
GROMACS

website at http://www.gromacs.org/Documentation/Errors
---
The error quite clearly explains that you're trying to use mdrun's
built-in thread-MPI parallelization, but you have a binary that does
not support it. Use the MPI launching syntax instead.

Can you help me with the MPI launching syntax? What is the suitable
command?


A previous poster has already pointed you to the Acceleration and
parallelization page which, I believe, describes the matter in detail.
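
For example, something like this (a sketch, not a tuned command; rank and
thread counts depend on your node, and the -gpu_id string assigns one GPU
id per PP rank):

mpirun -np 4 gmx_mpi mdrun -deffnm nvt -ntomp 4 -pin on -gpu_id 0000

i.e. with an MPI-enabled binary the rank count is given to mpirun with
-np, which replaces the -ntmpi option of the thread-MPI build.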




2) Have you looked at the performance table at the end of the log?
You are wasting a large amount of runtime calculating energies every
step, and this overhead comes in multiple places in the code - one of
them being the non-timed code parts, which typically take 3%.


How can I avoid the runtime cost of calculating the energies every step?
Must I modify something in the .mdp file?


This is discussed thoroughly in the manual; you should be looking for
the nstcalcenergy option.



Thank you in advance

Carmen
--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


On Thu, Feb 19, 2015 at 11:32 AM, Carmen Di Giovanni cdigi...@unina.it
wrote:


Dear Szilárd,

1) the output of command nvidia-smi -ac 2600,758 is

[root@localhost test_gpu]# nvidia-smi -ac 2600,758
Applications clocks set to (MEM 2600, SM 758) for GPU :03:00.0

Warning: persistence mode is disabled on this device. This settings will
go
back to default as soon as driver unloads (e.g. last application like
nvidia-smi or cuda application terminates). Run with [--help | -h] 
switch

to
get more information on how to enable persistence mode.



run nvidia-smi -pm 1 if you want to avoid that.


Setting applications clocks is not supported for GPU :82:00.0.
Treating as warning and moving on.
All done.


2) I decreased nstlist to 20.
However, when I run the command:
 gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 
it gives me a fatal error:

GROMACS:  gmx mdrun, VERSION 5.0
Executable:   /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir:  /opt/SW/gromacs-5.0/share/top
Command line:
  gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 


Back Off! I just backed up nvt.log to ./#nvt.log.8#
Reading file nvt.tpr, VERSION 5.0 (single precision)
Changing nstlist from 10 to 40, rlist from 1 to 1.097


---
Program gmx_mpi, VERSION 5.0
Source code file: /opt/SW/gromacs-5.0/src/programs/mdrun/runner.c, line:
876

Fatal error:
Setting the number of thread-MPI threads is only supported with
thread-MPI
and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the
GROMACS
website at http://www.gromacs.org/Documentation/Errors
---



The error quite clearly explains that you're trying to use mdrun's
built-in thread-MPI parallelization, but you have a binary that does
not support it. Use the MPI launching syntax instead.


Halting program gmx_mpi

gcq#223: Jesus Not Only Saves, He Also Frequently Makes Backups. 
(Myron

Bradshaw)


--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

Re: [gmx-users] GTX980 performance

2015-02-27 Thread Carmen Di Giovanni
Szilárd, thank you for the useful advice about the configuration of the new
server machine.


I report the changes made to improve the performance of a molecular dynamics
run on a protein of 1925 running on an NVIDIA Tesla K20 GPU:

- To limit the number of cores used in the calculation (option -pin on)
and to get better performance:

gmx_mpi mdrun ... -ntomp 16 -pin on

where -ntomp sets the number of OpenMP threads.

- The clock frequency was increased from the default 705 MHz to 758 MHz
using the NVIDIA management tool.

- To avoid the runtime cost of calculating energies every step, in the
.mdp file:

nstcalcenergy = -1

The actual performance is about 7 ns/day against 2 ns/day without these
changes.

Carmen




- Original Message - 
From: Szilárd Páll pall.szil...@gmail.com

To: Carmen Di Giovanni cdigi...@unina.it
Cc: Discussion list for GROMACS users gmx-us...@gromacs.org
Sent: Thursday, February 26, 2015 2:37 PM
Subject: Re: GTX980 performance


On Wed, Feb 25, 2015 at 1:21 PM, Carmen Di Giovanni cdigi...@unina.it 
wrote:

A special thank you to Szilárd Páll for the good advice in the GPU low
performance discussion.
The performance is much improved after his suggestions.



I'm glad it helped. Could you post the changes you made to your
mdp/command line and the results these gave? It would allow others to
learn from it.


Dear GROMACS users and developers,

we are thinking of buying a new Tyan server machine with these features:

SERVER SYSTEM TYAN FT48 - Tower/Rack,Dual Xeon ,8xSATA

N. 2 CPU INTEL XEON E5-2620 2.0Ghz - 6 CORE 15MCache LGA2011

N. 4 Nvidia GTX980 video cards, 4GB GDDR5, PCIE 3.0, 2DVI, HD

N. 4 DDR3 8 GB 1600 Mhz
HARD DISK 1 TB SATA 3 WD

I know that the GTX980 offers good performance for GROMACS 5.0.
What are your views about this?


That CPU-GPU combination will give heavily CPU-bound GROMACS runs;
those GTX 980s are 1.5-2x faster than what you can use with those CPUs.
Conversely, if you get 6-core 3 GHz CPUs, you'll see a huge,
nearly 50% improvement in performance.

This will change in the future, but at least with GROMACS v5.1 and
earlier, the performance on this machine won't be much higher than
with a single fast CPU and one GTX 980.

For better performance with GROMACS, consider getting better CPUs in
this machine or for the same (or less) money get two workstations with
i7 4930K or 4960X CPUs.


--
Szilárd



Thank you in advance
Carmen




Carmen Di Giovanni, PhD
Postdoctoral Researcher
Dept. of Pharmacy
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it







Re: [gmx-users] Beyond the KALP15 in DPPC tutorial and doing analysis in GROMACS

2015-02-27 Thread Justin Lemkul



On 2/26/15 5:39 PM, Thomas Lipscomb wrote:

Dear gmx-users,
Ok, I understand.  My specific question is how do I do these two tasks:
1. What do I need to change about the KALP15 with DPPC tutorial (again
substituting maximin 3 for KALP15) so that when I repeat it I get better
data? Some parts of the tutorial are just for practice and not for better
data, e.g. the 1 nanosecond simulation time.


Increased time and probably a better force field.  I strongly recommend 
CHARMM36.


2. What is your general advice about how to simulate several maximin 3
peptides at once interacting with the membrane? E.g., if I put the maximin 3
peptides somewhat close to each other on or in the membrane in the initial
conditions, will they diffuse around enough during the simulation that they
can aggregate, or is the diffusion so low that I need to put the maximin 3
in one of the five possible aggregates in the initial conditions of the
simulation? To help you answer this question, please see the diagram below,
which shows the five possible ways that maximin 3 may aggregate.



I doubt you'll ever see dynamics on that scale in atomistic simulations 
happening spontaneously.  Consider multiple starting/assembly states and perhaps 
consider coarse graining.


-Justin


The diagram linked below visualizes all of the five possible ways maximin 3 
might be acting to disrupt the membrane and have antimicrobial activity.  The 
red part of the peptide represents a hydrophilic surface and the blue part 
represents a hydrophobic surface.  The cylindrical shape of the peptide 
represents the maximin 3 being an alpha-helix.


http://imgur.com/y4yw0Mq

Image was taken from: Tryptophan- and arginine-rich antimicrobial peptides: 
Structures and mechanisms of action
Thank you.
Sincerely,
Thomas

On 2/24/15 10:18 PM, Thomas Lipscomb wrote:

Dear gmx-users,
Ok Justin here is the information you asked for:



My questions were rhetorical.  I honestly don't have time to go through all
of this and tell you how to do a thesis project :)

If you have specific questions about using GROMACS to carry out specific tasks,
that's the main purpose of this list.

-Justin




--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Error in generated Replica Exchange statistics in log file

2015-02-27 Thread Mark Abraham
On Fri, Feb 27, 2015 at 12:07 PM, Abhi Acharya abhi117acha...@gmail.com
wrote:

 Hello GROMACS users,

 I have a problem with the replica exchange statistics printed in the log
 file. I ran a 25 ns replica exchange simulation on a cluster through a
 queuing system. However, due to insufficient wall time the job stopped
 abruptly around 24 ns and no stats were printed in the log file. I then
 extended the run by 10000 steps using -nsteps and provided the additional
 -cpi flag for continuation. The simulation ran without errors, but the
 replica exchange stats printed in the log file of each replica are as
 follows:

 Replica exchange statistics
 Repl  10 attempts, 5 odd, 5 even
 Repl  average probabilities:
 Repl 0123456789   10   11   12   13
   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
   29   30   31
 Repl  .23  .45  .57  .12  .28  .20  .82  .46  .44  .50  .07  .74  .53
  .46  .44  .51  .22  .56  .24  .56  .23  .82  .30  .28  .59  .54  .34  .53
  .75  .29  .62
 Repl  number of exchanges:
 Repl 0123456789   10   11   12   13
   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
   29   30   31
 Repl    1    2    3    0    0    2    4    2    2    2    0    4    3
     2    2    4    1    4    0    3    2    4    1    0    2    4    2    1
     4    1    2
 Repl  average number of exchanges:
 Repl 0123456789   10   11   12   13
   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
   29   30   31
 Repl  .20  .40  .60  .00  .00  .40  .80  .40  .40  .40  .00  .80  .60
  .40  .40  .80  .20  .80  .00  .60  .40  .80  .20  .00  .40  .80  .40  .20
  .80  .20  .40



 The exchange attempt interval used was 1000 steps, which means that the
 above stats somehow only take into account the last 10000 steps of the
 simulation. Is


This is normal. The subsequent mdrun will not read from the old .log file,
and the data is not in the checkpoint.


 this some bug or I did something wrong ? More importantly, is there a way
 to regenerate the replica exchange stats from the log file ?


Not without writing a custom script to parse the per-exchange information.
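
A rough sketch of such a script, assuming the per-attempt records in the
.log look like the usual 'Repl ex' lines, where an 'x' between two replica
indices marks an accepted exchange (verify against your own log first):

# count exchange attempts and acceptances in one replica's log
attempts=$(grep -c '^Repl ex' md0.log)
accepted=$(grep '^Repl ex' md0.log | grep -o ' x ' | wc -l)
echo "attempts=$attempts accepted=$accepted"

Per-pair statistics would need a slightly smarter awk pass that tracks
which two replica indices each 'x' sits between.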

Mark


 Regards,
 Abhishek Acharya
 Shasara Research Foundation


Re: [gmx-users] Regarding RDF calculations

2015-02-27 Thread Justin Lemkul



On 2/27/15 2:00 AM, soumadwip ghosh wrote:

Dear users,
   I have a query about some reviews on a paper which I
submitted recently. It deals with the binding of molecular ions to
double-stranded DNA segments. My questions are:

1. Is it customary to take into account the centre of mass of the DNA
grooves or backbone while calculating an RDF? If that is not taken into
account, what kind of artifacts are to be expected?



Dealing with RDFs or occupancies around DNA is extremely challenging.  Normal 
RDFs lack sensitivity and do not fully describe the behavior of ions around 
DNA, especially given the asymmetry between the major and minor grooves.  See, 
for instance, dx.doi.org/10.1021/ja0629460
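
As for the mechanics of the calculation, g_rdf can already take the
reference group's center of mass; a sketch (assuming an index file with the
DNA and ion groups; g_rdf is the 4.x/5.0 tool name):

g_rdf -f traj.xtc -s topol.tpr -n index.ndx -com -o rdf_com.xvg

With -com the RDF is computed around the center of mass of the first
selected group rather than around each of its atoms.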



2. Why is it necessary to show the running coordination number up to the
first solvation shell? I have calculated the excess coordination number for
each ion in DNA up to half of the box length using the Kirkwood-Buff
integrals. But I think there is some confusion between the running
coordination number and the excess coordination number. They state that if a
coordination number

3. They have asked about the CHARMM 27 force field being outdated now... How
is this true?



Yes.  There are specific improvements in CHARMM36 for DNA and RNA, with a 
reparametrization of some important backbone torsions in DNA that have 
implications for sampling substates of B-DNA and correct sugar puckering.


-Justin


Please help me out in addressing the reviews. Thanks in advance for your
help.

P.S.: I did not take the center of mass of the molecular ions into account.
I don't think that is wrong for a small molecule...


Regards
Soumadwip Ghosh
Research Fellow
IITB, Mumbai
India



--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Re : Re : GROMACS 4.6.7 not running on more than 16 MPI threads

2015-02-27 Thread Agnivo Gosai
The following problem is still there:
Number of CPUs detected (16) does not match the number reported by OpenMP
(1).
Consider setting the launch configuration manually!

The above message is always there and I am not sure how to set it.
It shows even when I set the number of OpenMP threads manually.

I ran my system using only MPI with automatic PME node selection. I found
that for nodes = 16 and ppn = 16 there was an issue with domain
decomposition.

Message from standard output and error files of job script :-
There is no domain decomposition for 224 nodes that is compatible with the
given box and a minimum cell size of 1.08875 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS
settings
Look in the log file for details on the domain decomposition

turning all bonds into constraints...
turning all bonds into constraints...
turning all bonds into constraints...
turning all bonds into constraints...
turning all bonds into constraints...
turning all bonds into constraints...
Largest charge group radii for Van der Waals: 0.039, 0.039 nm
Largest charge group radii for Coulomb:   0.078, 0.078 nm
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 64x64x128, spacing 0.117 0.117 0.117

The log file was incomplete as the simulation crashed.

Then I fiddled with the node numbers and found that for nodes = 10 and ppn
= 16 mdrun_mpi could successfully work.

This is part of the log file :
Average load imbalance: 36.5 %
 Part of the total run time spent waiting due to load imbalance: 8.7 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0
% Y 3 % Z 17 %
 Average PME mesh/force load: 0.847
 Part of the total run time spent waiting due to PP/PME imbalance: 1.4 %

NOTE: 8.7 % of the available CPU time was lost due to load imbalance
  in the domain decomposition.


               Core t (s)   Wall t (s)       (%)
       Time:   212092.050     1347.167   15743.6
                             (ns/day)    (hour/ns)
Performance:                   32.067        0.748

..

Then I set OpenMP threads = 8 and this is what happened. (I had to use the
Verlet cutoff scheme.)

Number of CPUs detected (16) does not match the number reported by OpenMP
(1).
Consider setting the launch configuration manually!
Reading file pull1.tpr, VERSION 4.6.7 (double precision)
The number of OpenMP threads was set by environment variable
OMP_NUM_THREADS to 8 (and the command-line setting agreed with that)

Will use 144 particle-particle and 16 PME only nodes
This is a guess, check the performance at the end of the log file
Using 160 MPI processes
Using 8 OpenMP threads per MPI process
..
..

NOTE: 9.9 % performance was lost because the PME nodes
  had more work to do than the PP nodes.
  You might want to increase the number of PME nodes
  or increase the cut-off and the grid spacing.


NOTE: 11 % of the run time was spent in domain decomposition,
  9 % of the run time was spent in pair search,
  you might want to increase nstlist (this has no effect on accuracy)


               Core t (s)   Wall t (s)       (%)
       Time:   319188.130     2019.349   15806.5
                             33:39
                             (ns/day)    (hour/ns)
Performance:                   21.393        1.122

So, I have bad performance.
Now I am using only MPI for running the jobs.

Any suggestions for performance improvement?



Thanks & Regards
Agnivo Gosai
Grad Student, Iowa State University.


Re: [gmx-users] Re : GROMACS 4.6.7 not running on more than 16 MPI threads

2015-02-27 Thread Szilárd Páll
Good to hear! However, are you sure the "Number of CPUs detected (16)
does not match the number reported by OpenMP (1)" message was also
solved by the same fix? You should try to make this go away by telling
your MPI what rank/thread counts you want; otherwise you *may*
experience performance loss.
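
For example, a sketch for one of your 16-core nodes (the -npernode flag is
Open MPI's; the 8x2 rank/thread split is illustrative, not tuned):

export OMP_NUM_THREADS=2
mpirun -np 8 -npernode 8 mdrun_mpi -ntomp 2 -pin on

i.e. tell both the MPI launcher (ranks per node) and mdrun (-ntomp) what
you want explicitly, rather than relying on the detected CPU count.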
--
Szilárd


On Fri, Feb 27, 2015 at 7:25 PM, Agnivo Gosai agnivogromac...@gmail.com wrote:
 Dear Users

 My problem is solved and GROMACS is running successfully on multiple nodes
 now. The problem was that I accidentally modified a .ssh file in such a
 way that it broke the system's ability to spread the job among cluster
 nodes.
 This was fixed by my system administrator.

 Thanks & Regards
 Agnivo Gosai
 Grad Student, Iowa State University.


Re: [gmx-users] Problems Running Gromacs v-5.0.4

2015-02-27 Thread Justin Lemkul



On 2/27/15 4:01 PM, Stephen P. Molnar wrote:

The original of this message was sent on 2/18.  To date it has been greeted
with thundering silence.  I would really appreciate an answer, even if no
solution is readily apparent.


I have compiled and installed Gromacs v-5.0.4 (rather than using the
version bundled with the distribution) in RoboLinux 7.81, without any
warning or error messages.

However, I have encountered problems running gmxdemo.  Although I get
the message "Display variable is set" I see a flash, but the window does
not stay open.  However, I do get the final display of the box and the
simulation.



Probably my first question is how are you running gmxdemo?  It was removed from 
the source code in 2012...before even the release of 4.6-beta1.



The second problem probably involves permissions.  If I run the demo as
the superuser (RoboLinux does not allow enabling the root account during
installation), the demo runs to completion without problems,
other than the vanishing window.

Running gmxdemo as the superuser generated 39 files (including gmxdemo
and cpeptide.pdb) while running as a user generated 23 files.



Depends on where GROMACS is installed and what you've done to get gmxdemo 
running.  FWIW, it was removed because it wasn't useful and no one wanted to 
maintain it.  If it's not working, the solution is to find better tutorial 
material, as suggested in README.tutor :)


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==