Re: [gmx-users] Energy minimisation goes to several values

2015-06-26 Thread Kevin C Chan
Thanks Justin for the reply. 

Honestly, I thought that minimizing a large and complex (usually biomolecular) system 
part by part instead of as a whole would effectively shorten the computational 
cost while having no effect on the final structure. When you say 
“impedes”, do you mean that it causes a longer calculation time in total, or that it 
gives a bad final structure?

Thanks in advance,
Kevin

 On 26 Jun, 2015, at 22:21, 
 gromacs.org_gmx-users-requ...@maillist.sys.kth.se wrote:
 
 From: Justin Lemkul jalem...@vt.edu
 Subject: Re: [gmx-users] Energy minimisation goes to several values
 Date: 26 June, 2015 21:27:32 HKT
 To: gmx-us...@gromacs.org
 Reply-To: gmx-us...@gromacs.org
 
 
 
 
 On 6/25/15 9:45 PM, Kevin C Chan wrote:
 Dear Users,
 
 I am energy minimising a quite large solvated system containing protein and
 lipids (~800,000 atoms). I used to fix components of the system in order to
 speed up energy minimisation, and sometimes it makes it easier to debug such
 runs. Here is my protocol:
 1. fix all except water, and so minimise only the water
 2. fix water, and then minimise all the remaining atoms
 3. fix nothing, and then minimise the whole system
 
 While monitoring the energy of the system throughout the minimisations, it goes
 fine for steps 1 and 2 and converges after just a few hundred steps. However,
 in step 3 the energy jumps back up to several higher values (bouncing between
 them) and then starts to increase very slowly. This makes no sense to me; has
 anyone had a similar experience?
 
 There are two unusual points:
 1. The system energy drops suddenly instead of decreasing gradually during
 step 2 and then stays at a constant value.
 2. If I use the resulting structure from step 3 to proceed with, say, a heating
 step, it simply blows up.
 
 To be clear, my system was solvated and auto-ionized using VMD tools, and
 some water molecules inside the membrane were deleted directly. The backbone of the
 protein and the phosphorus atoms of the membrane are under a
 position constraint during all the minimisations. I chose conjugate
 gradient for the minimization.
 
 
 Does a normal minimization (just one overall minimization with nothing 
 fixed) yield a stable starting point?  Fixing atoms (using freezegrps?) often 
 actually impedes minimization.
 
 -Justin
 
 -- 
 ==
 
 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow
 
 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201
 
 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul
 
 ==

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] error gcq#360

2015-06-26 Thread Justin Lemkul



On 6/26/15 9:15 AM, Urszula Uciechowska wrote:

Dear gmx users,

after running grompp -f em.mdp -c COM_ions.gro -p COM.top -o em.tpr

I obtained:

GROMACS:  gmx grompp, VERSION 5.0
Executable:
/software/local/el6/INTEL/gromacs/5.0.0/intel-ompi-fftw-blas-lapack/bin/gmx
Library dir:
/software/local/el6/INTEL/gromacs/5.0.0/intel-ompi-fftw-blas-lapack/share/gromacs/top
Command line:
   grompp -f em.mdp -c COM_ions.gro -p COM.top -o em.tpr


Back Off! I just backed up mdout.mdp to ./#mdout.mdp.3#
Setting the LD random seed to 2993272762
Generated 2211 of the 2211 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 2211 of the 2211 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'DNA'
Excluding 3 bonded neighbours molecule type 'DNA2'
Excluding 3 bonded neighbours molecule type 'Protein3'
Excluding 3 bonded neighbours molecule type 'Protein4'
Excluding 3 bonded neighbours molecule type 'Protein5'
Excluding 3 bonded neighbours molecule type 'Protein6'
Excluding 2 bonded neighbours molecule type 'SOL'
Excluding 1 bonded neighbours molecule type 'NA'
Excluding 1 bonded neighbours molecule type 'CL'
Removing all charge groups because cutoff-scheme=Verlet
Analysing residue names:
There are:   136DNA residues
There are:  1140Protein residues
There are: 557197  Water residues
There are:  2152Ion residues
Analysing residues not classified as Protein/DNA/RNA/Water and splitting
into groups...
Analysing Protein...
Analysing residues not classified as Protein/DNA/RNA/Water and splitting
into groups...
Number of degrees of freedom in T-Coupling group rest is 5090256.00
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 216x216x216, spacing 0.119 0.119 0.119
Estimate for the relative computational load of the PME mesh part: 0.27

NOTE 1 [file em.mdp]:
   This run will generate roughly 586540 Mb of data


There was 1 note

gcq#360: error: too many template-parameter-lists (g++)


At the end I have an em.tpr file, but I am not sure if everything is OK.

Any suggestions?



gcq are GROMACS cool quotes and serve no functional purpose except the 
amusement of the user.  There is no error here.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Configuration bug preventing GROMACS 5.0 build with Intel compiler?

2015-06-26 Thread Mark Abraham
Hi,

I haven't tried to build with IntelMPI for a while, but we might consider
patches to work around such issues. It depends on how ugly they make the things
that are not broken :-) Perhaps Roland has experience here?

Mark

On Fri, Jun 26, 2015 at 3:37 PM Åke Sandgren ake.sandg...@hpc2n.umu.se
wrote:

 Just noticed this question from Sep 2014.

 The problem was that configuring with the Intel compiler and IntelMPI
 causes incorrect detection of _finite.

 This is caused by bugs in the IntelMPI wrappers for both gcc and intel
 compilers, including the MIC wrappers.

 I have patches available if someone wants to have them.
 I have not had time to report this to Intel yet...

 --
 Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
 Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
 Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se


Re: [gmx-users] The pullf.xvg and pullx.xvg files

2015-06-26 Thread Laura Tociu
Dear Justin,

I have done two parallel short pullings, one with pull_geometry = cylinder
and one with pull_geometry = direction.

When pull_geometry = direction, everything works perfectly: the grompp
output says that the distance at start, as well as the reference at t=0 are
both equal to 3.019 nm, and the pullx.xvg file reflects this: at t=0 the
distance is also 3.019 nm. This distance is reasonable based on my
geometry. The force at t=0 is 10^-6 and the COM pull energy in the md.log
file is on the order of 10^-10.

When pull_geometry = cylinder, the grompp output says the distance at start
and the reference distance are 1.215 nm (which they aren't) but the
distance at t=0 in the pullx.xvg file shows up as 2.8 nm (which it is
roughly -  it's hard to tell where the COM as computed by the cylinder
approach would be but it is likely to be close to the COM as usually
computed). This huge discrepancy leads to huge forces as well as energies
at t=0.

Is this some kind of bug that has been fixed since Gromacs 5.0.2, which is
the version I'm using, or should I report it?  I think since the distance
computed by grompp and that in the pullx.xvg file at t=0 are different, the
method will likely give inaccurate forces and thus PMF's... What do you
think?

Laura



On Thu, Jun 25, 2015 at 11:07 AM, Laura Tociu lto...@princeton.edu wrote:

 OK, thanks! I will analyze this more deeply and maybe also try different
 geometries. And sorry for the stupid question; I was reading force but was
 thinking potential instead...

 Laura
 --
 From: Justin Lemkul jalem...@vt.edu
 Sent: ‎25/‎06/‎2015 02:23
 To: gmx-us...@gromacs.org
 Subject: Re: [gmx-users] The pullf.xvg and pullx.xvg files



 On 6/24/15 9:20 AM, Laura Tociu wrote:
  Dear Justin,
 
  Thanks for the reply! Yeah, I understand how the pulling works now. The
  forces at time t=0 are not zero, though. There are huge forces such as
 -270
  or even -1700 kj/mol/nm acting on my pull group at time t=0.  What do you
  believe could be the cause of that? And why is there a /nm in that
 force? I
  mean, isn't the force per nm (force constant) of the spring always 1000
  kj/mol/nm, but the actual force adopts various different values as time
  goes by?
 

 The force constant is in kJ/mol/nm^2 (same as all bonded force constants).
 Force is kJ/mol/nm because force is the negative derivative of potential
 with
 respect to position.  You can also convert this rather easily to pN or
 some more
 familiar unit.
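 
 As a quick arithmetic check of the unit conversion (nothing here is from your
 run, just Avogadro's number):
 
   1 kJ/mol/nm = 1000 J / (6.022e23 x 1e-9 m) ~= 1.66e-12 N = 1.66 pN
 
 so, for example, a pull force of 60 kJ/mol/nm corresponds to roughly 100 pN.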

  I ran a short simulation with pull_coord1_rate = 0, and when I did that I
  got reasonable forces such as 40-60 kj/mol/nm (??) but still not a zero
  force at time t=0. Please let me know if this is normal behavior or not.

 Sorry, can't tell without fully analyzing all of your files (not something
 I
 have time for).  Check what grompp reports as the reference distance at
 t=0 vs.
 what you calculate in the input coordinate file/first frame of the
 trajectory.
 I've never used the cylinder geometry; could be something specific to that.

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==


Re: [gmx-users] problem to restart REMD

2015-06-26 Thread Mark Abraham
Hi,

I can't tell what you've done so that md0.log doesn't match, but that's why
I suggested you make a backup. You also don't have to have appending,
that's just for convenience. The advice about node count mismatch doesn't
matter here... Use your judgement!
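
For example, something along these lines (untested, and the replica count, file
names, exchange interval and target step are placeholders for whatever your setup
actually uses) would first bring the lagging replica up to the common step and
then restart the whole set without appending:

  mdrun_mpi -s remd6.tpr -cpi state6.cpt -nsteps <step-the-others-are-at> -noappend
  mpirun -np 128 mdrun_mpi -multi 64 -replex 500 -s remd.tpr -cpi state.cpt -noappend

If I remember the -multi naming right, mdrun appends the replica index to the
file names (remd0.tpr, state0.cpt, ...). Running with -noappend writes new
numbered output files instead of appending, which also avoids the md0.log
checksum complaint.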

Mark

On Thu, 25 Jun 2015 16:23 leila salimi leilasal...@gmail.com wrote:

 Thanks very much. OK, I will check again; it seems that they are at the same
 step!
 The only thing that comes to my mind is that I used a different number of
 CPUs when I tried to advance a few steps for some replicas, and then I went
 back to the original number of CPUs.

 Also, I got this error when I updated some of the state.cpt files:
 Fatal error:
 Checksum wrong for 'md0.log'. The file has been replaced or its contents
 have been modified. Cannot do appending because of this condition.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors

 and also this!

  #nodes mismatch,
 current program: 2
 checkpoint file: 128

   #PME-nodes mismatch,
 current program: -1
 checkpoint file: 32

 I hope to figure out this problem, otherwise I have to run it from
 beginning!
 Thanks!

 Leila



 On Thu, Jun 25, 2015 at 4:15 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  I can't tell either. Please run gmxcheck on all your input files, to
 check
  the simulation part, time and step number are all what you think they are
  (and that they match across the simulations) and try again.
 
  Mark
 
  On Thu, Jun 25, 2015 at 4:12 PM leila salimi leilasal...@gmail.com
  wrote:
 
   Dear Mark,
  
   When I tried with new update of the state.cpt files, I got this error.
  
   Abort(1) on node 896 (rank 896 in comm 1140850688): Fatal error in
   MPI_Allreduce: Message truncated, error stack:
   MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
   rbuf=0x7ffc783af760, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
  failed
   MPIR_Allreduce_impl(769).:
   MPIR_Allreduce_intra(419):
   MPIC_Sendrecv(467)...:
   MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
  buffer
   size is 32
   Abort(1) on node 768 (rank 768 in comm 1140850688): Fatal error in
   MPI_Allreduce: Message truncated, error stack:
   MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
   rbuf=0x7ffdba5176a0, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
  failed
   MPIR_Allreduce_impl(769).:
   MPIR_Allreduce_intra(419):
   MPIC_Sendrecv(467)...:
   MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
  buffer
   size is 32
   ERROR: 0031-300  Forcing all remote tasks to exit due to exit code 1 in
   task 896
   job.err.1011016.out 399L, 17608C
  
   Actually I don't know what is the problem!
  
   Regards,
   Leila
  
  
   On Thu, Jun 18, 2015 at 12:00 AM, leila salimi leilasal...@gmail.com
   wrote:
  
I understand what you meant, I run only few steps for the other
  replicas
and then continue with the whole replicas.
I hope every thing is going well.
   
Thanks very much.
   
On Wed, Jun 17, 2015 at 11:43 PM, leila salimi 
 leilasal...@gmail.com
wrote:
   
Thanks Mark for your suggestion.
Actually I don't understand the new two state6.cpt and state7,cpt
  files,
because the time that it shows is  127670.062  !
That is strange! because my time step is 2 fs and I saved the output
every 250 steps, means every 500 fs. I expect the time should be
 like
127670.000 or 127670.500 .
   
By the way you mean with mdrun_mpi ... -nsteps ... , I can get the
  steps
that I need for the old state.cpt files?
   
Regards,
Leila
   
On Wed, Jun 17, 2015 at 11:22 PM, Mark Abraham 
   mark.j.abra...@gmail.com
wrote:
   
Hi,
   
That's all extremely strange. Given that you aren't going to
 exchange
   in
that short period of time, you can probably do some arithmetic and
  work
out
how many steps you'd need to advance whichever set of files is
 behind
   the
other. Then mdrun_mpi ... -nsteps y can write a set of checkpoint
  files
that will be all at the same time!
   
Mark
   
On Wed, Jun 17, 2015 at 10:18 PM leila salimi 
 leilasal...@gmail.com
  
wrote:
   
 Hi Mark,

 Thanks very much. Unfortunately both the state6.cpt,
  state6_prev,cpt
and
 state7.cpt and state7_prev.cpt updated and their time are
 different
from
 other replicas file (also with *_prev.cpt )!

 I am thinking maybe I can use init-step in mdp file, and start
 from
   the
 time that I have, because all trr files have the same time! I
  checked
with
 gmxcheck. But I am not sure that I will get correct results!
 Actually I got confused that with the mentioned Note, only two
   replicas
 were running and the state file is changed and the others not!

 ​regards,
 Leila

Re: [gmx-users] problem to restart REMD

2015-06-26 Thread leila salimi
Actually, I have checked several times: the steps in all the state.cpt files
are the same.
When I try to restart, it runs for only a few steps (it took only 3 minutes)
and then stops with these lines in the error file:

Abort(1) on node 12 (rank 12 in comm 1140850688): Fatal error in
MPI_Allreduce: Other MPI error, error stack:
MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
rbuf=0x7fff8606aa00, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8401) failed
MPIR_Allreduce_impl(769).:
MPIR_Allreduce_intra(270):
MPIR_Bcast_impl(1462):
MPIR_Bcast(1486).:
MPIR_Bcast_intra(1295)...:
MPIR_Bcast_binomial(252).: message sizes do not match across processes in
the collective routine: Received 64 but expected 32
ERROR: 0031-300  Forcing all remote tasks to exit due to exit code 1 in
task 12

So I guess the problem is related to MPI, but I don't get why, because my
other simulation is running well.

Thanks for your suggestion.
Leila

On Fri, Jun 26, 2015 at 7:10 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 Hi,

 I can't tell what you've done so that md0.log doesn't match, but that's why
 I suggested you make a backup. You also don't have to have appending,
 that's just for convenience. The advice about node count mismatch doesn't
 matter here... Use your judgement!

 Mark

 On Thu, 25 Jun 2015 16:23 leila salimi leilasal...@gmail.com wrote:

  Thanks very much. Ok I will check again, it seems that they are at the
 same
  step!
  only the thing that comes to my mind is that I used different number of
  cpus when I tried to update few steps for some replicas, and then I used
  the primary numbers of cpu that I used.
 
  Also I got this error when I update it the  some state.cpt
  Fatal error:
  Checksum wrong for 'md0.log'. The file has been replaced or its contents
  have been modified. Cannot do appending because of this condition.
  For more information and tips for troubleshooting, please check the
 GROMACS
  website at http://www.gromacs.org/Documentation/Errors
 
  and also this!
 
   #nodes mismatch,
  current program: 2
  checkpoint file: 128
 
#PME-nodes mismatch,
  current program: -1
  checkpoint file: 32
 
  I hope to figure out this problem, otherwise I have to run it from
  beginning!
  Thanks!
 
  Leila
 
 
 
  On Thu, Jun 25, 2015 at 4:15 PM, Mark Abraham mark.j.abra...@gmail.com
  wrote:
 
   Hi,
  
   I can't tell either. Please run gmxcheck on all your input files, to
  check
   the simulation part, time and step number are all what you think they
 are
   (and that they match across the simulations) and try again.
  
   Mark
  
   On Thu, Jun 25, 2015 at 4:12 PM leila salimi leilasal...@gmail.com
   wrote:
  
Dear Mark,
   
When I tried with new update of the state.cpt files, I got this
 error.
   
Abort(1) on node 896 (rank 896 in comm 1140850688): Fatal error in
MPI_Allreduce: Message truncated, error stack:
MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
rbuf=0x7ffc783af760, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
   failed
MPIR_Allreduce_impl(769).:
MPIR_Allreduce_intra(419):
MPIC_Sendrecv(467)...:
MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
   buffer
size is 32
Abort(1) on node 768 (rank 768 in comm 1140850688): Fatal error in
MPI_Allreduce: Message truncated, error stack:
MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
rbuf=0x7ffdba5176a0, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
   failed
MPIR_Allreduce_impl(769).:
MPIR_Allreduce_intra(419):
MPIC_Sendrecv(467)...:
MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
   buffer
size is 32
ERROR: 0031-300  Forcing all remote tasks to exit due to exit code 1
 in
task 896
job.err.1011016.out 399L, 17608C
   
Actually I don't know what is the problem!
   
Regards,
Leila
   
   
On Thu, Jun 18, 2015 at 12:00 AM, leila salimi 
 leilasal...@gmail.com
wrote:
   
 I understand what you meant, I run only few steps for the other
   replicas
 and then continue with the whole replicas.
 I hope every thing is going well.

 Thanks very much.

 On Wed, Jun 17, 2015 at 11:43 PM, leila salimi 
  leilasal...@gmail.com
 wrote:

 Thanks Mark for your suggestion.
 Actually I don't understand the new two state6.cpt and state7,cpt
   files,
 because the time that it shows is  127670.062  !
 That is strange! because my time step is 2 fs and I saved the
 output
 every 250 steps, means every 500 fs. I expect the time should be
  like
 127670.000 or 127670.500 .

 By the way you mean with mdrun_mpi ... -nsteps ... , I can get the
   steps
 that I need for the old state.cpt files?

 Regards,
 Leila

 On Wed, Jun 17, 2015 at 11:22 PM, Mark Abraham 
mark.j.abra...@gmail.com
 wrote:

 Hi,

 That's all 

[gmx-users] pretty large coulomb contribution to free energy

2015-06-26 Thread Ahmet Yıldırım
Dear users,

The free energy of decoupling the ligand from the complex that I get is about
457 kcal/mol. For the ligand in solution I get -5 kcal/mol, and for the
restraints -8 kcal/mol, so the binding free energy is 457 - 5 - 8 = 444 kcal/mol.
Is a value of over 400-500 kcal/mol to be expected for decoupling the ligand
from the complex? Has anyone experienced a similar problem? Could the problem be
the intramolecular interactions of the ligand? What about this GROMACS version:
https://github.com/gromacs/gromacs/tree/022ad08b2fd0c1de085e88ac81a61841c4daea9c
?

The ligand I used is neutral. The GROMACS version is 4.6.5.

Free energy control parts of my input files are below.

 For decoupling the ligand from the complex #
; Free energy control stuff
init-lambda-state= 5
free_energy  = yes
sc-alpha = 0.5
sc-power = 1.0
sc-sigma = 0.3
restraint-lambdas= 0.0 0.01 0.025 0.05 0.075 0.1 0.2 0.35 0.5 0.75
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0
1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
coul-lambdas = 0.0 0.00 0.000 0.00 0.000 0.0 0.0 0.00 0.0 0.00
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0
1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
vdw-lambdas  = 0.0 0.00 0.000 0.00 0.000 0.0 0.0 0.00 0.0 0.00
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6
0.65 0.7 0.75 0.8 0.85 0.9 0.95 1.0
nstdhdl  = 10

pull   = umbrella
pull_geometry  = distance
pull_dim   = Y Y Y
pull_start = no
pull_init1 = 0.2980769

pull_ngroups   = 1
pull_group0= atom-p
pull_group1= atom-l
pull_k1= 0.0   ; kJ*mol^(-1)*nm^(-2)
pull_kB1   = 4184  ; kJ*mol^(-1)*nm^(-2)


 For solvation free energy ###
; Free energy control stuff
init-lambda-state= 5
free_energy  = yes
sc-alpha = 0.5
sc-power = 1.0
sc-sigma = 0.3
coul-lambdas = 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.00
1.0 1.0 1.0 1.0 1.0 1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
vdw-lambdas  = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.05
0.1 0.2 0.3 0.4 0.5 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1.0
nstdhdl  = 10
couple-intramol  = yes
couple-moltype   = MOL
couple-lambda0   = vdw-q
couple-lambda1   = none



-- 
Ahmet Yıldırım

[gmx-users] counting the number of water molecules surrounding proteins

2015-06-26 Thread sang eun jee
Dear Gromacs Users

Hello.

I have a question about how to count the number of water molecules
surrounding proteins.
I have tried two methods, and the results from the two methods were different.

The first one is to use g_trjorder -f structure.xtc -s structure.tpr -n
index.ndx -nshell water_in_0.22.xvg -r 0.22
I chose the first reference group as protein and the second group as water.
Using this method, I could calculate the number of water molecules within 0.22
nm of the protein as a function of time.

The second method uses mindist.
First I calculated the minimum distance from water to protein within
0.5 nm using g_mindist:
g_mindist -s structure.tpr -f structure.xtc -n index.ndx -d 0.50
-respertime -od od.xvg -or mindrest.xvg
Here I chose the first reference group as water and the second reference group
as protein. As far as I understand, mindrest.xvg includes the minimum
water-protein distance for each water molecule.

Then, using grep and awk, I extracted the number of water molecules in the
range r < 0.22 nm:

grep -v '[#|@|S]' mindistres.xvg | awk '{a1=0;for(i=2;i<=NF;i++)
if($i<'0.22') a1++;print $1,a1}' > water_count.xvg
This gives me the number of water molecules within 0.22 nm as a function
of time.

When I take the time-averaged number of water molecules within
0.22 nm, the value from the first method differs from the second: I got
162 water molecules from the first method and 222 from the second.
Does anyone have experience with these methods?

Thanks,

Sang Eun Jee

Post Doctoral Researcher
School of Materials Science and Engineering
Georgia Institute of Technology


Re: [gmx-users] problem to restart REMD

2015-06-26 Thread leila salimi
Dear Micholas,
I agree with you! I am trying to find out what is wrong with restarting this
system!
I am sure that if I start from the beginning it will stop and get stuck at this
step!

I checked everything and it seems fine, but REMD is not working!
Now I am trying to run only the first 5 replicas, to see whether it passes that
step or not.

I will let you know what I find.

Leila

On Fri, Jun 26, 2015 at 9:16 PM, Smith, Micholas D. smit...@ornl.gov
wrote:

 Leila, your error is interesting, as I have had a very similar
 MPI_Allreduce error when trying to restart a large-scale REMD. The first few
 times the system restarted just fine, but at some point it fails.

 Out of curiosity, if you re-run from the beginning, does it work?

 -Micholas


 ===
 Micholas Dean Smith, PhD.
 Post-doctoral Research Associate
 University of Tennessee/Oak Ridge National Laboratory
 Center for Molecular Biophysics

 
 From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of leila
 salimi leilasal...@gmail.com
 Sent: Friday, June 26, 2015 1:30 PM
 To: gmx-us...@gromacs.org
 Subject: Re: [gmx-users] problem to restart REMD

 Actually when I check for several times I checked the steps for all
 state.cpt files and they are the same.
 I try to restart it, it is run only for few steps ( It took only 3 minutes
 ) and then it stopped with this lines in the error file :

 Abort(1) on node 12 (rank 12 in comm 1140850688): Fatal error in
 MPI_Allreduce: Other MPI error, error stack:
 MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
 rbuf=0x7fff8606aa00, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8401) failed
 MPIR_Allreduce_impl(769).:
 MPIR_Allreduce_intra(270):
 MPIR_Bcast_impl(1462):
 MPIR_Bcast(1486).:
 MPIR_Bcast_intra(1295)...:
 MPIR_Bcast_binomial(252).: message sizes do not match across processes in
 the collective routine: Received 64 but expected 32
 ERROR: 0031-300  Forcing all remote tasks to exit due to exit code 1 in
 task 12

 That I guess the problem is related to MPI, and I don't get why, because my
 other simulation is running well.

 Thanks for your suggestion.
 Leila

 On Fri, Jun 26, 2015 at 7:10 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  I can't tell what you've done so that md0.log doesn't match, but that's
 why
  I suggested you make a backup. You also don't have to have appending,
  that's just for convenience. The advice about node count mismatch doesn't
  matter here... Use your judgement!
 
  Mark
 
  On Thu, 25 Jun 2015 16:23 leila salimi leilasal...@gmail.com wrote:
 
   Thanks very much. Ok I will check again, it seems that they are at the
  same
   step!
   only the thing that comes to my mind is that I used different number of
   cpus when I tried to update few steps for some replicas, and then I
 used
   the primary numbers of cpu that I used.
  
   Also I got this error when I update it the  some state.cpt
   Fatal error:
   Checksum wrong for 'md0.log'. The file has been replaced or its
 contents
   have been modified. Cannot do appending because of this condition.
   For more information and tips for troubleshooting, please check the
  GROMACS
   website at http://www.gromacs.org/Documentation/Errors
  
   and also this!
  
#nodes mismatch,
   current program: 2
   checkpoint file: 128
  
 #PME-nodes mismatch,
   current program: -1
   checkpoint file: 32
  
   I hope to figure out this problem, otherwise I have to run it from
   beginning!
   Thanks!
  
   Leila
  
  
  
   On Thu, Jun 25, 2015 at 4:15 PM, Mark Abraham 
 mark.j.abra...@gmail.com
   wrote:
  
Hi,
   
I can't tell either. Please run gmxcheck on all your input files, to
   check
the simulation part, time and step number are all what you think they
  are
(and that they match across the simulations) and try again.
   
Mark
   
On Thu, Jun 25, 2015 at 4:12 PM leila salimi leilasal...@gmail.com
wrote:
   
 Dear Mark,

 When I tried with new update of the state.cpt files, I got this
  error.

 Abort(1) on node 896 (rank 896 in comm 1140850688): Fatal error in
 MPI_Allreduce: Message truncated, error stack:
 MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
 rbuf=0x7ffc783af760, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
failed
 MPIR_Allreduce_impl(769).:
 MPIR_Allreduce_intra(419):
 MPIC_Sendrecv(467)...:
 MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
buffer
 size is 32
 Abort(1) on node 768 (rank 768 in comm 1140850688): Fatal error in
 MPI_Allreduce: Message truncated, error stack:
 MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
 rbuf=0x7ffdba5176a0, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
failed
 MPIR_Allreduce_impl(769).:
 MPIR_Allreduce_intra(419):
 MPIC_Sendrecv(467)...:
 

Re: [gmx-users] problem to restart REMD

2015-06-26 Thread Smith, Micholas D.
Leila, your error is interesting, as I have had a very similar MPI_Allreduce
error when trying to restart a large-scale REMD. The first few times the system
restarted just fine, but at some point it fails.

Out of curiosity, if you re-run from the beginning, does it work?

-Micholas


===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of leila salimi 
leilasal...@gmail.com
Sent: Friday, June 26, 2015 1:30 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] problem to restart REMD

Actually when I check for several times I checked the steps for all
state.cpt files and they are the same.
I try to restart it, it is run only for few steps ( It took only 3 minutes
) and then it stopped with this lines in the error file :

Abort(1) on node 12 (rank 12 in comm 1140850688): Fatal error in
MPI_Allreduce: Other MPI error, error stack:
MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
rbuf=0x7fff8606aa00, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8401) failed
MPIR_Allreduce_impl(769).:
MPIR_Allreduce_intra(270):
MPIR_Bcast_impl(1462):
MPIR_Bcast(1486).:
MPIR_Bcast_intra(1295)...:
MPIR_Bcast_binomial(252).: message sizes do not match across processes in
the collective routine: Received 64 but expected 32
ERROR: 0031-300  Forcing all remote tasks to exit due to exit code 1 in
task 12

That I guess the problem is related to MPI, and I don't get why, because my
other simulation is running well.

Thanks for your suggestion.
Leila

On Fri, Jun 26, 2015 at 7:10 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 Hi,

 I can't tell what you've done so that md0.log doesn't match, but that's why
 I suggested you make a backup. You also don't have to have appending,
 that's just for convenience. The advice about node count mismatch doesn't
 matter here... Use your judgement!

 Mark

 On Thu, 25 Jun 2015 16:23 leila salimi leilasal...@gmail.com wrote:

  Thanks very much. Ok I will check again, it seems that they are at the
 same
  step!
  only the thing that comes to my mind is that I used different number of
  cpus when I tried to update few steps for some replicas, and then I used
  the primary numbers of cpu that I used.
 
  Also I got this error when I update it the  some state.cpt
  Fatal error:
  Checksum wrong for 'md0.log'. The file has been replaced or its contents
  have been modified. Cannot do appending because of this condition.
  For more information and tips for troubleshooting, please check the
 GROMACS
  website at http://www.gromacs.org/Documentation/Errors
 
  and also this!
 
   #nodes mismatch,
  current program: 2
  checkpoint file: 128
 
#PME-nodes mismatch,
  current program: -1
  checkpoint file: 32
 
  I hope to figure out this problem, otherwise I have to run it from
  beginning!
  Thanks!
 
  Leila
 
 
 
  On Thu, Jun 25, 2015 at 4:15 PM, Mark Abraham mark.j.abra...@gmail.com
  wrote:
 
   Hi,
  
   I can't tell either. Please run gmxcheck on all your input files, to
  check
   the simulation part, time and step number are all what you think they
 are
   (and that they match across the simulations) and try again.
  
   Mark
  
   On Thu, Jun 25, 2015 at 4:12 PM leila salimi leilasal...@gmail.com
   wrote:
  
Dear Mark,
   
When I tried with new update of the state.cpt files, I got this
 error.
   
Abort(1) on node 896 (rank 896 in comm 1140850688): Fatal error in
MPI_Allreduce: Message truncated, error stack:
MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
rbuf=0x7ffc783af760, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
   failed
MPIR_Allreduce_impl(769).:
MPIR_Allreduce_intra(419):
MPIC_Sendrecv(467)...:
MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
   buffer
size is 32
Abort(1) on node 768 (rank 768 in comm 1140850688): Fatal error in
MPI_Allreduce: Message truncated, error stack:
MPI_Allreduce(912)...: MPI_Allreduce(sbuf=MPI_IN_PLACE,
rbuf=0x7ffdba5176a0, count=4, MPI_DOUBLE, MPI_SUM, comm=0x8402)
   failed
MPIR_Allreduce_impl(769).:
MPIR_Allreduce_intra(419):
MPIC_Sendrecv(467)...:
MPIDI_Buffer_copy(73): Message truncated; 64 bytes received but
   buffer
size is 32
ERROR: 0031-300  Forcing all remote tasks to exit due to exit code 1
 in
task 896
job.err.1011016.out 399L, 17608C
   
Actually I don't know what is the problem!
   
Regards,
Leila
   
   
On Thu, Jun 18, 2015 at 12:00 AM, leila salimi 
 leilasal...@gmail.com
wrote:
   
 I understand what you meant, I run only few steps for the other
   replicas
 and then continue with the whole replicas.
 I hope every thing is going well.

 Thanks 

Re: [gmx-users] pretty large coulomb contribution to free energy

2015-06-26 Thread Ahmet Yıldırım
To test that patched GROMACS version: there is no information in the manual or
your tutorials about how [ intermolecular-interactions ] must be added to the
.top file. Maybe you can paste here an example .top file you have recently
used with that version :)  Is that possible?

2015-06-27 1:56 GMT+02:00 Justin Lemkul jalem...@vt.edu:



 On 6/26/15 4:44 PM, Ahmet Yıldırım wrote:

 Dear users,

 The free energy of decoupling the ligand from the complex I get is about
 457 kcal/mol. If for ligand in solution I get -5 kcal/mol, and for the
 restraints -8 kcal/mol. Binding free energy is 457-5-8=444 kcal/mol. A
 value over 400-500 kcal/mol is not unexpected for decoupling the ligand
 from the complex? Anyone experienced similar problem? The problem is the
 intramolecular interaction of the ligand? How about this Gromacs version

 https://github.com/gromacs/gromacs/tree/022ad08b2fd0c1de085e88ac81a61841c4daea9c
 ?


 For proper absolute binding free energies, you need these restraints.  I'm
 actually using that patched version now and it's essential for getting
 reasonable energies in protein-ligand complexes.  Otherwise you sample a
 ton of totally nonphysical configurations in (nearly) decoupled states.
 Even just the pull code is not enough; that provides a translational
 restraint, but the orientation can vary.  See, e.g.
 dx.doi.org/10.1021/ci300505n for a very robust method that can easily be
 done in GROMACS now using the intermolecular bondeds.
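 
 I can't vouch for the exact directive layout in that patched branch, but in the
 form that later went into mainline GROMACS the section sits at the very end of
 the .top, uses global atom numbers, and just holds ordinary bonded entries with
 A- and B-state parameters. A rough sketch only (the atom indices, reference
 values and force constants below are made-up placeholders):
 
 [ intermolecular_interactions ]
 [ bonds ]
 ; ai    aj    type   bA     kA     bB     kB
   1234  5678  6      0.55   0.0    0.55   4184.0
 [ angles ]
 ; ai    aj    ak    type   thA    kA    thB    kB
   1230  1234  5678  1      95.0   0.0   95.0   41.84
 [ dihedrals ]
 ; ai    aj    ak    al    type   phiA    kA    phiB    kB
   1226  1230  1234  5678  2      -50.0   0.0   -50.0   41.84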

 -Justin


  Ligand I used is neutral. Gromacs version is 4.6.5

 Free energy control parts of my input files are below.

  For decoupling the ligand from the complex #
 ; Free energy control stuff
 init-lambda-state= 5
 free_energy  = yes
 sc-alpha = 0.5
 sc-power = 1.0
 sc-sigma = 0.3
 restraint-lambdas= 0.0 0.01 0.025 0.05 0.075 0.1 0.2 0.35 0.5 0.75
 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0
 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
 coul-lambdas = 0.0 0.00 0.000 0.00 0.000 0.0 0.0 0.00 0.0 0.00
 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0
 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
 vdw-lambdas  = 0.0 0.00 0.000 0.00 0.000 0.0 0.0 0.00 0.0 0.00
 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6
 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1.0
 nstdhdl  = 10

 pull   = umbrella
 pull_geometry  = distance
 pull_dim   = Y Y Y
 pull_start = no
 pull_init1 = 0.2980769

 pull_ngroups   = 1
 pull_group0= atom-p
 pull_group1= atom-l
 pull_k1= 0.0   ; kJ*mol^(-1)*nm^(-2)
 pull_kB1   = 4184  ; kJ*mol^(-1)*nm^(-2)


  For solvation free energy ###
 ; Free energy control stuff
 init-lambda-state= 5
 free_energy  = yes
 sc-alpha = 0.5
 sc-power = 1.0
 sc-sigma = 0.3
 coul-lambdas = 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
 1.00
 1.0 1.0 1.0 1.0 1.0 1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
 vdw-lambdas  = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
 0.05
 0.1 0.2 0.3 0.4 0.5 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1.0
 nstdhdl  = 10
 couple-intramol  = yes
 couple-moltype   = MOL
 couple-lambda0   = vdw-q
 couple-lambda1   = none




 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==




-- 
Ahmet Yıldırım

[gmx-users] Re: counting the number of water molecules surrounding proteins

2015-06-26 Thread sangeunjee
Dear Justin

Thanks for your suggestion. I will try with g_select too.

Best,
Sang Eun Jee

-------- Original message --------
From: Justin Lemkul jalem...@vt.edu
Date: 26/06/2015 19:56 (GMT-05:00)
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] counting the number of water molecules surrounding proteins

On 6/26/15 4:50 PM, sang eun jee wrote:
 Dear Gromacs Users

 Hello.

 I have a question about how to count the number of water molecules
 surrounding protiens.
 I have tried two methods and the results from two methods were different.

 The first one is use g_trjorder -f structure.xtc -s structure.tpr -n
 index.ndx -nshell water_in_0.22.xvg -r 0.22
 I have chose first reference group as protein and second group water.
 Using this method, I could calculate the number of water molecules in 0.22
 nm as time from protein.

 The second method is using mindist.
 At first I have calcualted minimum distance from water to protien within
 0.5 nm using g_mindist
 g_mindist -s structure.tpr -f structure.xtc -n index.ndx -d 0.50
 -respertime -od od.xvg -or mindrest.xvg
 Here I chose the first reference group water and second reference group
 protein. As long as I understand, mindrest.xvg include minimum distance
 data of the water-protein per each water molecule.

 And then using grep, I have extracted the number of water molecules in the
 range r < 0.22 nm

 grep -v '[#|@|S]' mindistres.xvg | awk '{a1=0;for(i=2;i<=NF;i++)
 if($i<'0.22') a1++;print $1,a1}' > water_count.xvg
 Then I could obtain the number of water molecules in 0.22 nm as a function
 of time.

 When I got time-averaged values of the number of water molecules within
 0.22 nm, the value from first method is different from second method. I got
 162 water molecules from first method, while I got 222 water molecules from
 second one.
 Does anyone have experience in this method?


I would just use gmx select to calculate the number of water oxygens within 
some 
distance of protein atoms of interest.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

Re: [gmx-users] The pullf.xvg and pullx.xvg files

2015-06-26 Thread Justin Lemkul



On 6/26/15 10:34 AM, Laura Tociu wrote:

Dear Justin,

I have done two parallel short pullings, one with pull_geometry = cylinder
and one with pull_geometry = direction.

When pull_geometry = direction, everything works perfectly: the grompp
output says that the distance at start, as well as the reference at t=0 are
both equal to 3.019 nm, and the pullx.xvg file reflects this: at t=0 the
distance is also 3.019 nm. This distance is reasonable based on my
geometry. The force at t=0 is 10^-6 and the COM pull energy in the md.log
file is on the order of 10^-10.

When pull_geometry = cylinder, the grompp output says the distance at start
and the reference distance are 1.215 nm (which they aren't) but the
distance at t=0 in the pullx.xvg file shows up as 2.8 nm (which it is
roughly -  it's hard to tell where the COM as computed by the cylinder
approach would be but it is likely to be close to the COM as usually
computed). This huge discrepancy leads to huge forces as well as energies
at t=0.

Is this some kind of bug that has been fixed since Gromacs 5.0.2, which is
the version I'm using, or should I report it?  I think since the distance


Upgrading to 5.0.5 and trying again is the best way to answer that.  I don't 
know whether or not this is a bug.



computed by grompp and that in the pullx.xvg file at t=0 are different, the
method will likely give inaccurate forces and thus PMF's... What do you
think?



Looking back over your setup again, I really don't think you need the cylinder 
geometry.  That's really intended for layers in which the reference atoms will 
change over time, as in the example shown in the manual.  Here, you have an ion 
and a protein; the reference atoms don't change, so some simpler geometry is 
more appropriate.
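
For an ion pulled relative to a protein, the pull section can be as simple as
something like this (a sketch from memory of the 5.0-style option names; the
group names, force constant and rate are placeholders, so check everything
against the manual for your exact version):

pull                 = umbrella
pull-geometry        = distance
pull-start           = yes        ; add the COM distance at t=0 to pull-coord1-init
pull-ngroups         = 2
pull-ncoords         = 1
pull-group1-name     = Protein
pull-group2-name     = ION
pull-coord1-groups   = 1 2
pull-coord1-init     = 0
pull-coord1-rate     = 0.0        ; nm/ps; 0 keeps the reference distance fixed
pull-coord1-k        = 1000       ; kJ mol^-1 nm^-2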


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Energy minimisation goes to several values

2015-06-26 Thread Justin Lemkul



On 6/26/15 11:54 AM, Kevin C Chan wrote:

Thanks Justin for the reply.

Honestly, I thought that minimizing a large and complex (usually biomolecular) system
part by part instead of as a whole would effectively shorten the computational
cost while having no effect on the final structure. When you say
“impedes”, do you mean that it causes a longer calculation time in total, or that it
gives a bad final structure?



Well, you're running three minimizations instead of one, and you're achieving an 
unstable result.  I'd say the three-step approach is not worth doing :)


Consider something really simple - a polar, surface residue on a protein 
surrounded by water.  Let's say you freeze the protein and let the water relax. 
 The local waters respond to the fixed geometry of the side chain, which is 
(maybe) from a crystal and therefore perhaps not the correct conformation in 
solution.  So the waters reorganize a bit.  Then you let the protein relax but 
the waters are fixed.  The side chain responds to a fixed clathrate of water 
that have been minimized around the wrong side chain conformation.  What have 
you achieved?  Nothing.  Sure, you then minimize the whole system, but your 
starting point is potentially less plausible than it was to start with!  At 
minimum, it's just a waste of time.  Occasional use of restraints can be 
beneficial in some cases.  Any time you talk about absolutely fixing large 
groups of atoms (like immobilizing water), I think it's really a waste of time.


If a single, unrestrained minimization still leads to an unstable system, then 
it's not your minimization protocol that's to blame, rather an unresolvable 
starting structure or a bad topology.
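
For reference, a single-pass em.mdp does not need much; a minimal sketch (the
cutoffs and tolerances here are generic values, not settings tuned to your
membrane system) would be something like:

integrator    = steep      ; steepest descent is more forgiving than CG far from a minimum
emtol         = 1000.0     ; stop when Fmax < 1000 kJ mol^-1 nm^-1
emstep        = 0.01
nsteps        = 50000
cutoff-scheme = Verlet
coulombtype   = PME
rcoulomb      = 1.2
rvdw          = 1.2
pbc           = xyz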


-Justin


Thanks in advance,
Kevin


On 26 Jun, 2015, at 22:21, gromacs.org_gmx-users-requ...@maillist.sys.kth.se wrote:

From: Justin Lemkul jalem...@vt.edu
Subject: Re: [gmx-users] Energy minimisation goes to several values
Date: 26 June, 2015 21:27:32 HKT
To: gmx-us...@gromacs.org
Reply-To: gmx-us...@gromacs.org




On 6/25/15 9:45 PM, Kevin C Chan wrote:

Dear Users,

I am energy minimising a quite large solvated system containing protein and
lipids (~800,000 atoms). I used to fix components of the system in order to
speed up energy minimisation, and sometimes it makes it easier to debug such
runs. Here is my protocol:
1. fix all except water, and so minimise only the water
2. fix water, and then minimise all the remaining atoms
3. fix nothing, and then minimise the whole system

While monitoring the energy of the system throughout the minimisations, it goes
fine for steps 1 and 2 and converges after just a few hundred steps. However,
in step 3 the energy jumps back up to several higher values (bouncing between
them) and then starts to increase very slowly. This makes no sense to me; has
anyone had a similar experience?

There are two unusual points:
1. The system energy drops suddenly instead of decreasing gradually during
step 2 and then stays at a constant value.
2. If I use the resulting structure from step 3 to proceed with, say, a heating
step, it simply blows up.

To be clear, my system was solvated and auto-ionized using VMD tools, and
some water molecules inside the membrane were deleted directly. The backbone of the
protein and the phosphorus atoms of the membrane are under a
position constraint during all the minimisations. I chose conjugate
gradient for the minimization.



Does a normal minimization (just one overall minimization with nothing
fixed) yield a stable starting point?  Fixing atoms (using freezegrps?) often
actually impedes minimization.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==




--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

Re: [gmx-users] counting the number of water molecules surrounding proteins

2015-06-26 Thread Justin Lemkul



On 6/26/15 4:50 PM, sang eun jee wrote:

Dear Gromacs Users

Hello.

I have a question about how to count the number of water molecules
surrounding protiens.
I have tried two methods and the results from two methods were different.

The first one is use g_trjorder -f structure.xtc -s structure.tpr -n
index.ndx -nshell water_in_0.22.xvg -r 0.22
I have chose first reference group as protein and second group water.
Using this method, I could calculate the number of water molecules in 0.22
nm as time from protein.

The second method is using mindist.
At first I have calcualted minimum distance from water to protien within
0.5 nm using g_mindist
g_mindist -s structure.tpr -f structure.xtc -n index.ndx -d 0.50
-respertime -od od.xvg -or mindrest.xvg
Here I chose the first reference group water and second reference group
protein. As long as I understand, mindrest.xvg include minimum distance
data of the water-protein per each water molecule.

And then using grep, I have extracted the number of water molecules in the
range r < 0.22 nm

grep -v '[#|@|S]' mindistres.xvg | awk '{a1=0;for(i=2;i<=NF;i++)
if($i<'0.22') a1++;print $1,a1}' > water_count.xvg
Then I could obtain the number of water molecules in 0.22 nm as a function
of time.

When I got time-averaged values of the number of water molecules within
0.22 nm, the value from first method is different from second method. I got
162 water molecules from first method, while I got 222 water molecules from
second one.
Does anyone have experience in this method?



I would just use gmx select to calculate the number of water oxygens within some 
distance of protein atoms of interest.
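
Something along these lines, for example (a sketch; the 0.22 nm cutoff is taken
from your description, and the atom/group names are assumptions you should check
against your own index groups and water model):

gmx select -s structure.tpr -f structure.xtc \
    -select 'name OW and within 0.22 of group "Protein"' \
    -os nwater.xvg

The -os output gives the number of selected positions (here, water oxygens) per
frame, which you can then average over time.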


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] energy minimization error

2015-06-26 Thread Justin Lemkul



On 6/26/15 7:19 PM, James Lord wrote:

Hi Justin,
I have asked this before, but this time I started from scratch and used
input files that I know are fine, yet I am getting this error again. Would you
please tell me what is wrong in my topology files, and whereabouts in the .top
files the bond lengths are defined?

https://drive.google.com/file/d/0B0YMTXH1gmQsbkpjTU9tWGFkSDA/view?usp=sharing



There's nothing I can do with a topology that's a series of #include statements.

Please also remind me what this is all about so I don't have to go digging 
through the archives for something from a month ago.  Lots of things have 
happened since then :)


-Justin


Cheers
James

On Mon, May 18, 2015 at 1:13 AM, Justin Lemkul jalem...@vt.edu wrote:




On 5/16/15 11:35 PM, James Lord wrote:


Hi Justin
Thanks for this. Can you tell me at which step(s) this bond length is
defined?
What should I do (redo) to resolve this issue?



The bonds are defined in the topology.  The DD algorithm reads what's in
the coordinate file and builds cells based on the geometry it finds there.

Without a full description of what's in your system, what steps you've
taken to prepare it (full commands, in order), there's little I can suggest
because it would all be guesswork.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==



--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] energy minimization error

2015-06-26 Thread James Lord
Hi Justin,
I have asked this before, but this time I started from scratch and used
input files that I know are fine, yet I am getting this error again. Would you
please tell me what is wrong in my topology files, and whereabouts in the .top
files the bond lengths are defined?

https://drive.google.com/file/d/0B0YMTXH1gmQsbkpjTU9tWGFkSDA/view?usp=sharing

Cheers
James

On Mon, May 18, 2015 at 1:13 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 5/16/15 11:35 PM, James Lord wrote:

 Hi Justin
 Thanks for this. Can you tell me at which step(s) this bond length is
 defined?
 What should I do (redo) to resolve this issue?


 The bonds are defined in the topology.  The DD algorithm reads what's in
 the coordinate file and builds cells based on the geometry it finds there.

 Without a full description of what's in your system, what steps you've
 taken to prepare it (full commands, in order), there's little I can suggest
 because it would all be guesswork.


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==



Re: [gmx-users] Configuration bug preventing GROMACS 5.0 build with Intel compiler?

2015-06-26 Thread Åke Sandgren

Just noticed this question from Sep 2014.

The problem was that configuring with the Intel compiler and IntelMPI 
causes incorrect detection of _finite.


This is caused by bugs in the IntelMPI wrappers for both gcc and intel 
compilers, including the MIC wrappers.
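
For anyone wanting to reproduce it, a configuration roughly like the
following (purely illustrative; paths and options will differ per site) is
the kind of setup where the misdetection shows up:

  CC=mpiicc CXX=mpiicpc cmake .. -DGMX_MPI=ON \
      -DCMAKE_INSTALL_PREFIX=/opt/gromacs-5.0

The _finite check is a small CMake compile test, and with the MPI wrappers
in the compiler seat it records the wrong answer.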


I have patches available if someone wants them.
I have not had time to report this to Intel yet...

--
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se


[gmx-users] error gcq#360

2015-06-26 Thread Urszula Uciechowska
Dear gmx users,

after running grompp -f em.mdp -c COM_ions.gro -p COM.top -o em.tpr

I obtained:

GROMACS:  gmx grompp, VERSION 5.0
Executable:  
/software/local/el6/INTEL/gromacs/5.0.0/intel-ompi-fftw-blas-lapack/bin/gmx
Library dir: 
/software/local/el6/INTEL/gromacs/5.0.0/intel-ompi-fftw-blas-lapack/share/gromacs/top
Command line:
  grompp -f em.mdp -c COM_ions.gro -p COM.top -o em.tpr


Back Off! I just backed up mdout.mdp to ./#mdout.mdp.3#
Setting the LD random seed to 2993272762
Generated 2211 of the 2211 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 2211 of the 2211 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'DNA'
Excluding 3 bonded neighbours molecule type 'DNA2'
Excluding 3 bonded neighbours molecule type 'Protein3'
Excluding 3 bonded neighbours molecule type 'Protein4'
Excluding 3 bonded neighbours molecule type 'Protein5'
Excluding 3 bonded neighbours molecule type 'Protein6'
Excluding 2 bonded neighbours molecule type 'SOL'
Excluding 1 bonded neighbours molecule type 'NA'
Excluding 1 bonded neighbours molecule type 'CL'
Removing all charge groups because cutoff-scheme=Verlet
Analysing residue names:
There are:    136 DNA residues
There are:   1140 Protein residues
There are: 557197 Water residues
There are:   2152 Ion residues
Analysing residues not classified as Protein/DNA/RNA/Water and splitting
into groups...
Analysing Protein...
Analysing residues not classified as Protein/DNA/RNA/Water and splitting
into groups...
Number of degrees of freedom in T-Coupling group rest is 5090256.00
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 216x216x216, spacing 0.119 0.119 0.119
Estimate for the relative computational load of the PME mesh part: 0.27

NOTE 1 [file em.mdp]:
  This run will generate roughly 586540 Mb of data


There was 1 note

gcq#360: error: too many template-parameter-lists (g++)


At the end I have the em.tpr file, but I am not sure if everything is OK.

Any suggestions?

best regards
Urszula


-
Ta wiadomość została wysłana z serwera Uniwersytetu Gdańskiego
http://www.ug.edu.pl/


Re: [gmx-users] error gcq#360

2015-06-26 Thread Mark Abraham
Hi,

There's no problem - that's just a fun GROMACS Cool Quote (i.e. gcq), of the
kind the tools often print at the end of their run. This one is perhaps not
the best choice of wording! (There are ways to turn them off, if they annoy
people.)
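
(If I remember correctly, setting the environment variable GMX_NO_QUOTES
suppresses them, e.g.

  export GMX_NO_QUOTES=1

in your shell before running the tools.)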

Mark

On Fri, Jun 26, 2015 at 3:24 PM Urszula Uciechowska 
urszula.uciechow...@biotech.ug.edu.pl wrote:

 Dear gmx users,

 after running grompp -f em.mdp -c COM_ions.gro -p COM.top -o em.tpr

 I obtained:

 GROMACS:  gmx grompp, VERSION 5.0
 Executable:
 /software/local/el6/INTEL/gromacs/5.0.0/intel-ompi-fftw-blas-lapack/bin/gmx
 Library dir:

 /software/local/el6/INTEL/gromacs/5.0.0/intel-ompi-fftw-blas-lapack/share/gromacs/top
 Command line:
   grompp -f em.mdp -c COM_ions.gro -p COM.top -o em.tpr


 Back Off! I just backed up mdout.mdp to ./#mdout.mdp.3#
 Setting the LD random seed to 2993272762
 Generated 2211 of the 2211 non-bonded parameter combinations
 Generating 1-4 interactions: fudge = 0.5
 Generated 2211 of the 2211 1-4 parameter combinations
 Excluding 3 bonded neighbours molecule type 'DNA'
 Excluding 3 bonded neighbours molecule type 'DNA2'
 Excluding 3 bonded neighbours molecule type 'Protein3'
 Excluding 3 bonded neighbours molecule type 'Protein4'
 Excluding 3 bonded neighbours molecule type 'Protein5'
 Excluding 3 bonded neighbours molecule type 'Protein6'
 Excluding 2 bonded neighbours molecule type 'SOL'
 Excluding 1 bonded neighbours molecule type 'NA'
 Excluding 1 bonded neighbours molecule type 'CL'
 Removing all charge groups because cutoff-scheme=Verlet
 Analysing residue names:
 There are:    136 DNA residues
 There are:   1140 Protein residues
 There are: 557197 Water residues
 There are:   2152 Ion residues
 Analysing residues not classified as Protein/DNA/RNA/Water and splitting
 into groups...
 Analysing Protein...
 Analysing residues not classified as Protein/DNA/RNA/Water and splitting
 into groups...
 Number of degrees of freedom in T-Coupling group rest is 5090256.00
 Calculating fourier grid dimensions for X Y Z
 Using a fourier grid of 216x216x216, spacing 0.119 0.119 0.119
 Estimate for the relative computational load of the PME mesh part: 0.27

 NOTE 1 [file em.mdp]:
   This run will generate roughly 586540 Mb of data


 There was 1 note

 gcq#360: error: too many template-parameter-lists (g++)


 At the end I have em.tpr file but I am not sure if everything is ok.

 Any suggestions.

 best regards
 Urszula


 -
 Ta wiadomość została wysłana z serwera Uniwersytetu Gdańskiego
 http://www.ug.edu.pl/


Re: [gmx-users] Energy minimisation goes to several values

2015-06-26 Thread Justin Lemkul



On 6/25/15 9:45 PM, Kevin C Chan wrote:

Dear Users,

I am energy-minimising quite a large solvated system containing protein and
lipids (~800,000 atoms). I have been fixing components of the system in order
to speed up energy minimisation, and it sometimes makes such runs easier to
debug. Here is my protocol:
1. fix everything except water, so as to minimise the water
2. fix the water, then minimise all the remaining atoms
3. fix nothing, then minimise the whole system

While monitoring the energy of the system throughout the minimisations, it
looks fine for steps 1 and 2 and converges after just a few hundred steps.
However, in step 3 the energy jumps back up to several higher values
(bouncing between them) and then starts to increase very slowly. This makes
no sense to me - has anyone had a similar experience?

There are two unusual points:
1. During step 2 the system energy drops suddenly, instead of decreasing
gradually, and then stays at a constant value.
2. If I use the resulting structure from step 3 to proceed with, say, a
heating process, it simply blows up.

To be clear, my system was solvated and auto-ionized using VMD tools, and
some water inside the membrane was deleted directly. The protein backbone and
the phosphorus atoms of the membrane are under position restraints during all
the minimisations. I chose conjugate gradient for the minimisation.



Does a normal minimization (just one overall minimization with nothing fixed) 
yield a stable starting point?  Fixing atoms (using freezegrps?) often actually 
impedes minimization.
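
To be concrete about what I mean by fixing: freeze groups are set in the
.mdp, along these lines (the group name here is only an example and has to
match one of your index groups):

  freezegrps = Protein_Lipids
  freezedim  = Y Y Y

Frozen atoms keep exactly their input coordinates, so bad contacts involving
them cannot relax, which is one reason a single unrestricted minimization of
the whole system is usually the more robust approach.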


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==