[gmx-users] increasing md run speed

2018-06-22 Thread neelam wafa
Dear gmx users!

I am running MD simulations of a protein with different ligands, but the
speed is decreasing with every simulation. In the first one it was 25 hrs/ns,
for the second one it became 35 hrs/ns, then 36 hrs/ns. What can be the reason?
I am using this command for the run.
How do I select the value of x if I use this command to increase the speed?
Below are the details of the cores and the hardware I am using.

Running on 1 node with total 2 cores, 4 logical cores
Hardware detected:
  CPU info:
Vendor: GenuineIntel
Brand:  Intel(R) Core(TM) i3-2370M CPU @ 2.40GHz
SIMD instructions most likely to fit this hardware: AVX_256
SIMD instructions selected at GROMACS compile time: AVX_256

Reading file em.tpr, VERSION 5.1.5 (single precision)
Using 1 MPI thread
Using 4 OpenMP threads
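
As an illustrative sketch only (the -deffnm name md is hypothetical, not the
command from this run), thread counts like those reported above can be set
explicitly on the mdrun command line:

gmx mdrun -deffnm md -ntmpi 1 -ntomp 4 -pin on
# 1 thread-MPI rank x 4 OpenMP threads, with thread pinning, on a 2-core/4-thread CPU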


Looking forward to your help and cooperation.
Regards


Re: [gmx-users] different PMF results using MPI

2018-06-22 Thread Srinivasa Ramisetti

Hi,

Thank you Mark. Previously, I did run similar simulations on the same 
number of cores (72 cores), where I noticed only a small variation in the PMF, 
unlike the one I am observing now with 36 cores. I can try running a few more 
cases to show you the comparison.


Please follow the link for the figure with PMF curves.
https://www.dropbox.com/s/mqaot2olg1hqcj1/pmf-error.png?dl=0

Thank you
Srinivasa
On 22/06/2018 14:57, Mark Abraham wrote:

Hi,

What range of variations do you see in replicates run on the same number of
cores? Maybe you've just observed that your sampling is not yet sufficient.
(The list does not accept attachments - share a link to a file on a sharing
service if you need to.)

Mark

On Fri, Jun 22, 2018, 15:46 Srinivasa Ramisetti 
wrote:


Dear GMX-USERS,


I have doubts about trusting the PMF curve of a single molecule pulled from an
immobile calcite surface using the pull code with umbrella sampling in GROMACS.
I extracted two PMFs for the same system (all inputs are the same) by
running simulations on 36 and 72 cores with MPI to check the consistency of
the results. I find a significant change in the magnitude of the two curves
(please see the attached image). The figure also shows the error of the PMF
obtained using the bootstrap method. The histograms for both cases overlap
well.


Is it okay to rely on these results? Can anyone suggest how I can reproduce
similar PMF results when running simulations on any number of cores?


These results were produced using GROMACS version 2016.4 with the
following pull code:


pull = yes

pull_ngroups = 2

pull_ncoords = 1

pull_group1_name = SUR ; calcite surface

pull_group2_name = MDX ; organic compound

pull_coord1_type = umbrella ; harmonic biasing force

pull_coord1_geometry = distance ; simple distance increase

pull_coord1_groups = 1 2

pull_coord1_dim = N N Y

pull_coord1_k = 500 ; kJ mol^-1 nm^-2

pull_coord1_rate = 0.0

pull_coord1_init = XXX ; XXX is the COM distance between 2 groups

pull_coord1_start = no


Thank you,

Srinivasa


Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-22 Thread Own 12121325
Thanks Mark!


assuming that I am interested in obtaining separate files for each step, I
need just one command:

mdrun -v -deffnm step7_1 -cpi step7_1.cpt -noappend
which each time should create step7_1part002, etc.

but in case I want to set the names of each piece manually (it sounds crazy,
but in fact I need to do it this way!), does the method with gmx
convert-tpr in principle produce the separate pieces correctly?

gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
mdrun -v -deffnm step7_2 -cpi step7_1.cpt

in earlier versions I did it the same way but without the cpt file and it
worked well:

gmx convert-tpr -s step7_1.tpr -trr step7_1 -edr step7_1 -o step7_2.tpr
-extend 5
mdrun -v -deffnm step7_2



2018-06-22 15:02 GMT+02:00 Mark Abraham :

> Hi,
>
> There are some differences in recent GROMACS versions here (because the old
> implementations were not robust enough), but the checkpoint restart will
> not work with appending unless it finds the output files named in the .cpt
> match those on the command line (here, from -deffnm). You're making extra
> work for yourself in several ways.
>
> I encourage you to not use -deffnm with a new name that merely signifies
> that the extension happened. There's no physical and no real organizational
> reason to do this.
>
> If you want numbered output files for each step, then start your
> simulations with -noappend and let mdrun number them automatically. But IMO
> all that does is make work for you later, concatenating the files again.
>
> If you want appending to work after extending to the number of steps, use
> -s new.tpr -deffnm old rather than -deffnm new, because the former doesn't
> create name mismatches between those output files that the checkpoint
> remembers and those you've instructed mdrun to use now.
>
> And if your reason for using -deffnm is that you want to have multiple
> simulation steps in the same directory, bear in mind that using a single
> directory to contain a single step is much more robust (you are using the
> standard way of grouping related files, called a directory, and using cd is
> not any more difficult than -deffnm), and you can just use the default file
> naming:
>
> (cd step7; mpirun -np whatever gmx_mpi mdrun -s extended)
>
> Mark
>
> On Fri, Jun 22, 2018 at 11:07 AM Own 12121325 
> wrote:
>
> > thanks Mark!
> >
> > could you please also confirm that my method of the prolongation of the
> > simulation would be correct
> >
> > #entend simulation for 50 ns and save these pieces as the separate files
> > with the name step7_2*
> > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> > mdrun -v -deffnm step7_2 -cpi step7_1.cpt
> >
> > 2018-06-22 10:57 GMT+02:00 Mark Abraham :
> >
> > > Hi,
> > >
> > > The previous checkpoint has the _prev suffix, in case there is a
> problem
> > > that might require you to go further back in time.
> > >
> > > Mark
> > >
> > > On Fri, Jun 22, 2018, 10:46 Own 12121325 
> wrote:
> > >
> > > > P.S. what the difference between name.cpt and name_prev.cpt produced
> by
> > > > mdrun? What check-point should correspond to the last snapshot in trr
> > > file
> > > > ?
> > > >
> > > > 2018-06-22 10:17 GMT+02:00 Own 12121325 :
> > > >
> > > > > In fact there is an alternative trick :-)
> > > > > If I rename a tpr file via gmx convert-tpr  and then run mdrun
> using
> > > this
> > > > > new tpr as well as previous checkpoint, it will produce all pieces
> of
> > > the
> > > > > trajectory in the separate file:
> > > > >
> > > > > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> > > > > mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt
> > > > >
> > > > > If I add -noappend flag to the mdrun, its also do the same job but
> > also
> > > > > will add suffix pat002 to each of the new file (that is not
> necessary
> > > for
> > > > > me since I have already renamed tpr).
> > > > >
> > > > > Gleb
> > > > >
> > > > >
> > > > > 2018-06-21 14:17 GMT+02:00 Justin Lemkul :
> > > > >
> > > > >>
> > > > >>
> > > > >> On 6/21/18 2:35 AM, Own 12121325 wrote:
> > > > >>
> > > > >>> and without append flag it will produce an output in the separate
> > > file,
> > > > >>> won't it?
> > > > >>>
> > > > >>
> > > > >> No, because appending is the default behavior. Specifying -append
> > just
> > > > >> invokes what mdrun does on its own. If you want a separate file,
> add
> > > > >> -noappend to your mdrun command.
> > > > >>
> > > > >> -Justin
> > > > >>
> > > > >>
> > > > >> gmx convert-tpr -s init.tpr -o next.tpr -extend 50
> > > > >>> gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt
> > > > >>>
> > > > >>> 2018-06-21 1:12 GMT+02:00 Justin Lemkul :
> > > > >>>
> > > > >>>
> > > >  On 6/19/18 4:45 AM, Own 12121325 wrote:
> > > > 
> > > >  Hello Justin,
> > > > >
> > > > > could you specify please a bit more. Following your method, if
> > the
> > > > > simulation has been terminated by crash without producing gro
> > file
> > > so
> > > > > to
> > 

Re: [gmx-users] fatal error with mpi

2018-06-22 Thread Stefano Guglielmo
Actually this is not a cluster but a single machine with two CPUs and 16
cores/CPU, and in fact the non-MPI (tMPI) version works fine; I needed to
switch to the MPI version because I patched PLUMED, which does not recognize
tMPI and needs GROMACS to be compiled with the same MPI used for it.

Stefano

2018-06-22 16:01 GMT+02:00 Mark Abraham :

> Hi,
>
> Just exiting without a simulation crash suggests different problems, eg
> that your cluster lost the network for MPI to use, etc. Talk to your system
> administrators about their experience and thoughts there.
>
> Mark
>
> On Fri, Jun 22, 2018, 15:57 Stefano Guglielmo 
> wrote:
>
> > Thanks Mark. I must say that I tried reducing timestep (as low as 0.5
> fs-1)
> > and temperature as well (5K both in NvT and NpT) but the simulation
> crashed
> > in any case with warning about cutoff, lincs or bad water. The very
> > previous day I had run the same simulation on the same machine but with
> the
> > non-MPI version of gromacs 2016.5 and everything went smoothly; as I
> > switched to the MPI version I encountered the issue, and in the end I was
> > afraid to have missed something during compilation, so I decided to try
> and
> > recompile and now it is running.
> > Anyway I will check the trajectory as you said.
> > Thanks
> > Stefano
> >
> > 2018-06-22 15:36 GMT+02:00 Mark Abraham :
> >
> > > Hi,
> > >
> > > Recompiling won't fix anything relevant. Molecular frustration isn't
> just
> > > about long bonds. Rearrangment of side chain groupings can do similar
> > > things despite looking happy. The sudden injection of KE means
> collisions
> > > can be more violent than normal, and the timestep is now too large. But
> > you
> > > need to look at the trajectory to understand if this might be the case.
> > >
> > > Mark
> > >
> > > On Fri, Jun 22, 2018 at 3:19 PM Stefano Guglielmo <
> > > stefano.guglie...@unito.it> wrote:
> > >
> > > > Dear Mark,
> > > >
> > > > thanks for your answer. The version is 2016.5, but I apparently
> solved
> > > the
> > > > problem recompiling gromacs: now the simulation is running quite
> > stable.
> > > I
> > > > had minimized and gradually equilibrated the system and I could not
> see
> > > any
> > > > weird bonds or contacts. So in the end, as extrema ratio, I decided
> to
> > > > recompile.
> > > >
> > > > Thanks again
> > > > Stefano
> > > >
> > > > 2018-06-22 15:07 GMT+02:00 Mark Abraham :
> > > >
> > > > > Hi,
> > > > >
> > > > > This could easily be that your system is actually not yet well
> > > > equilibrated
> > > > > (e.g. something was trapped in a high energy state that eventually
> > > > relaxed
> > > > > sharply). Or it could be a code bug. What version were you using?
> > > > >
> > > > > Mark
> > > > >
> > > > > On Thu, Jun 21, 2018 at 2:36 PM Stefano Guglielmo <
> > > > > stefano.guglie...@unito.it> wrote:
> > > > >
> > > > > > Dear users,
> > > > > >
> > > > > > I have installed gromacs with MPI instead of its native tMPI and
> I
> > am
> > > > > > encountering the following error:
> > > > > > "Fatal error:
> > > > > > 4 particles communicated to PME rank 5 are more than 2/3 times
> the
> > > > > cut-off
> > > > > > out
> > > > > > of the domain decomposition cell of their charge group in
> dimension
> > > y.
> > > > > > This usually means that your system is not well equilibrated."
> > > > > >
> > > > > > I am using 8 MPI ranks with 4 omp threads per rank (it was the
> > > > > > configuration used on the same machine by tMPI), but I also
> tried 4
> > > > ranks
> > > > > > with 8 threads, but it did not solve the problem.
> > > > > >
> > > > > > I don't think this is an issue related to my system because the
> > same
> > > > > system
> > > > > > run with the native tMPI works properly (it has already been
> > > termalized
> > > > > and
> > > > > > gradually equilibrated); neverthless I tried to reduce dt and
> > > > temperature
> > > > > > but without any benefit. Does anybody have any suggestions?
> > > > > >
> > > > > > Thanks in advance
> > > > > > Stefano
> > > > > >
> > > > > > --
> > > > > > Stefano GUGLIELMO PhD
> > > > > > Assistant Professor of Medicinal Chemistry
> > > > > > Department of Drug Science and Technology
> > > > > > Via P. Giuria 9
> > > > > > 10125 Turin, ITALY
> > > > > > ph. +39 (0)11 6707178
> > > > > > --
> > > > > > Gromacs Users mailing list
> > > > > >
> > > > > > * Please search the archive at
> > > > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List
> before
> > > > > > posting!
> > > > > >
> > > > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > > > >
> > > > > > * For (un)subscribe requests visit
> > > > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_
> gmx-users
> > > or
> > > > > > send a mail to gmx-users-requ...@gromacs.org.
> > > > > >
> > > > > --
> > > > > Gromacs Users mailing list
> > > > >
> > > > > * Please search the archive at http://www.gromacs.org/
> > > > > 

Re: [gmx-users] Regarding -nmol option

2018-06-22 Thread Justin Lemkul




On 6/22/18 3:19 AM, Apramita Chand wrote:

Dear All,
In computing interaction energies between groups from g_energy, the -nmol
option needs an integer argument that (as mentioned) should be equal to the
total number of particles in the system.


The -nmol option should be set to the number of *molecules* (not 
particles) in a pure liquid when computing the thermodynamic properties 
listed in the table to which that statement refers.



So if I have a protein-urea-water solution with a single protein, 15 urea and
984 water molecules, then for the protein-urea interaction energy, what should
-nmol be?
The number of protein + urea molecules?
Or the total number of molecules, including water?

Again, for protein-water interaction energies, should -nmol be the number of
protein + water molecules or the total number of molecules?


The option is not really relevant here. If you're computing urea-protein 
interaction energy, you could convert the quantity to a per-molecule 
interaction energy simply by dividing by 15.
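For example, if the hypothetical Coul-SR:Protein-UREA term (the group names
depend on your energygrps) averaged -300 kJ/mol, dividing by 15 would give
-20 kJ/mol per urea molecule.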


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] fatal error with mpi

2018-06-22 Thread Mark Abraham
Hi,

Just exiting without a simulation crash suggests different problems, e.g.
that your cluster lost the network MPI was using, etc. Talk to your system
administrators about their experience and thoughts there.

Mark

On Fri, Jun 22, 2018, 15:57 Stefano Guglielmo 
wrote:

> Thanks Mark. I must say that I tried reducing timestep (as low as 0.5 fs-1)
> and temperature as well (5K both in NvT and NpT) but the simulation crashed
> in any case with warning about cutoff, lincs or bad water. The very
> previous day I had run the same simulation on the same machine but with the
> non-MPI version of gromacs 2016.5 and everything went smoothly; as I
> switched to the MPI version I encountered the issue, and in the end I was
> afraid to have missed something during compilation, so I decided to try and
> recompile and now it is running.
> Anyway I will check the trajectory as you said.
> Thanks
> Stefano
>
> 2018-06-22 15:36 GMT+02:00 Mark Abraham :
>
> > Hi,
> >
> > Recompiling won't fix anything relevant. Molecular frustration isn't just
> > about long bonds. Rearrangment of side chain groupings can do similar
> > things despite looking happy. The sudden injection of KE means collisions
> > can be more violent than normal, and the timestep is now too large. But
> you
> > need to look at the trajectory to understand if this might be the case.
> >
> > Mark
> >
> > On Fri, Jun 22, 2018 at 3:19 PM Stefano Guglielmo <
> > stefano.guglie...@unito.it> wrote:
> >
> > > Dear Mark,
> > >
> > > thanks for your answer. The version is 2016.5, but I apparently solved
> > the
> > > problem recompiling gromacs: now the simulation is running quite
> stable.
> > I
> > > had minimized and gradually equilibrated the system and I could not see
> > any
> > > weird bonds or contacts. So in the end, as extrema ratio, I decided to
> > > recompile.
> > >
> > > Thanks again
> > > Stefano
> > >
> > > 2018-06-22 15:07 GMT+02:00 Mark Abraham :
> > >
> > > > Hi,
> > > >
> > > > This could easily be that your system is actually not yet well
> > > equilibrated
> > > > (e.g. something was trapped in a high energy state that eventually
> > > relaxed
> > > > sharply). Or it could be a code bug. What version were you using?
> > > >
> > > > Mark
> > > >
> > > > On Thu, Jun 21, 2018 at 2:36 PM Stefano Guglielmo <
> > > > stefano.guglie...@unito.it> wrote:
> > > >
> > > > > Dear users,
> > > > >
> > > > > I have installed gromacs with MPI instead of its native tMPI and I
> am
> > > > > encountering the following error:
> > > > > "Fatal error:
> > > > > 4 particles communicated to PME rank 5 are more than 2/3 times the
> > > > cut-off
> > > > > out
> > > > > of the domain decomposition cell of their charge group in dimension
> > y.
> > > > > This usually means that your system is not well equilibrated."
> > > > >
> > > > > I am using 8 MPI ranks with 4 omp threads per rank (it was the
> > > > > configuration used on the same machine by tMPI), but I also tried 4
> > > ranks
> > > > > with 8 threads, but it did not solve the problem.
> > > > >
> > > > > I don't think this is an issue related to my system because the
> same
> > > > system
> > > > > run with the native tMPI works properly (it has already been
> > termalized
> > > > and
> > > > > gradually equilibrated); neverthless I tried to reduce dt and
> > > temperature
> > > > > but without any benefit. Does anybody have any suggestions?
> > > > >
> > > > > Thanks in advance
> > > > > Stefano
> > > > >
> > > > > --
> > > > > Stefano GUGLIELMO PhD
> > > > > Assistant Professor of Medicinal Chemistry
> > > > > Department of Drug Science and Technology
> > > > > Via P. Giuria 9
> > > > > 10125 Turin, ITALY
> > > > > ph. +39 (0)11 6707178
> > > > > --
> > > > > Gromacs Users mailing list
> > > > >
> > > > > * Please search the archive at
> > > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > > > posting!
> > > > >
> > > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > > >
> > > > > * For (un)subscribe requests visit
> > > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> > or
> > > > > send a mail to gmx-users-requ...@gromacs.org.
> > > > >
> > > > --
> > > > Gromacs Users mailing list
> > > >
> > > > * Please search the archive at http://www.gromacs.org/
> > > > Support/Mailing_Lists/GMX-Users_List before posting!
> > > >
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > > * For (un)subscribe requests visit
> > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> > > > send a mail to gmx-users-requ...@gromacs.org.
> > > >
> > >
> > >
> > >
> > > --
> > > Stefano GUGLIELMO PhD
> > > Assistant Professor of Medicinal Chemistry
> > > Department of Drug Science and Technology
> > > Via P. Giuria 9
> > > 10125 Turin, ITALY
> > > ph. +39 (0)11 6707178
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search 

Re: [gmx-users] different PMF results using MPI

2018-06-22 Thread Mark Abraham
Hi,

What range of variations do you see in replicates run on the same number of
cores? Maybe you've just observed that your sampling is not yet sufficient.
(The list does not accept attachments - share a link to a file on a sharing
service if you need to.)

Mark

On Fri, Jun 22, 2018, 15:46 Srinivasa Ramisetti 
wrote:

> Dear GMX-USERS,
>
>
> I have a doubt trusting the PMF curve of a single molecule pulled from an
> immobile calcite surface using pull code with umbrella sampling in GROMACS.
> I extracted two PMF’s for the same system (all inputs are the same) by
> running simulations on 36 and 72 cores with MPI to check the consistency of
> the results. I find a significant change in the magnitude of the two curves
> (please see attached the image). The figure also shows the error of PMF
> obtained using bootstrap method. The histogram for both the cases are well
> overlapped.
>
>
> Is it okay to rely on this results? Can anyone suggest how I can reproduce
> similar PMF results by running simulations on any number of cores?
>
>
> These results were produced using gromacs 2016.4 version with the
> following pull code:
>
>
> pull = yes
>
> pull_ngroups = 2
>
> pull_ncoords = 1
>
> pull_group1_name = SUR ; calcite surface
>
> pull_group2_name = MDX ; organic compound
>
> pull_coord1_type = umbrella ; harmonic biasing force
>
> pull_coord1_geometry = distance ; simple distance increase
>
> pull_coord1_groups = 1 2
>
> pull_coord1_dim = N N Y
>
> pull_coord1_k = 500 ; kJ mol^-1 nm^-2
>
> pull_coord1_rate = 0.0
>
> pull_coord1_init = XXX ; XXX is the COM distance between 2 groups
>
> pull_coord1_start = no
>
>
> Thank you,
>
> Srinivasa
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] fatal error with mpi

2018-06-22 Thread Stefano Guglielmo
Thanks Mark. I must say that I tried reducing the timestep (as low as 0.5 fs)
and the temperature as well (5 K in both NVT and NPT), but the simulation crashed
in every case with warnings about the cut-off, LINCS or bad water. Just the
previous day I had run the same simulation on the same machine but with the
non-MPI version of GROMACS 2016.5, and everything went smoothly; as I
switched to the MPI version I encountered the issue, and in the end I was
afraid I had missed something during compilation, so I decided to try and
recompile, and now it is running.
Anyway, I will check the trajectory as you said.
Thanks
Stefano

2018-06-22 15:36 GMT+02:00 Mark Abraham :

> Hi,
>
> Recompiling won't fix anything relevant. Molecular frustration isn't just
> about long bonds. Rearrangment of side chain groupings can do similar
> things despite looking happy. The sudden injection of KE means collisions
> can be more violent than normal, and the timestep is now too large. But you
> need to look at the trajectory to understand if this might be the case.
>
> Mark
>
> On Fri, Jun 22, 2018 at 3:19 PM Stefano Guglielmo <
> stefano.guglie...@unito.it> wrote:
>
> > Dear Mark,
> >
> > thanks for your answer. The version is 2016.5, but I apparently solved
> the
> > problem recompiling gromacs: now the simulation is running quite stable.
> I
> > had minimized and gradually equilibrated the system and I could not see
> any
> > weird bonds or contacts. So in the end, as extrema ratio, I decided to
> > recompile.
> >
> > Thanks again
> > Stefano
> >
> > 2018-06-22 15:07 GMT+02:00 Mark Abraham :
> >
> > > Hi,
> > >
> > > This could easily be that your system is actually not yet well
> > equilibrated
> > > (e.g. something was trapped in a high energy state that eventually
> > relaxed
> > > sharply). Or it could be a code bug. What version were you using?
> > >
> > > Mark
> > >
> > > On Thu, Jun 21, 2018 at 2:36 PM Stefano Guglielmo <
> > > stefano.guglie...@unito.it> wrote:
> > >
> > > > Dear users,
> > > >
> > > > I have installed gromacs with MPI instead of its native tMPI and I am
> > > > encountering the following error:
> > > > "Fatal error:
> > > > 4 particles communicated to PME rank 5 are more than 2/3 times the
> > > cut-off
> > > > out
> > > > of the domain decomposition cell of their charge group in dimension
> y.
> > > > This usually means that your system is not well equilibrated."
> > > >
> > > > I am using 8 MPI ranks with 4 omp threads per rank (it was the
> > > > configuration used on the same machine by tMPI), but I also tried 4
> > ranks
> > > > with 8 threads, but it did not solve the problem.
> > > >
> > > > I don't think this is an issue related to my system because the same
> > > system
> > > > run with the native tMPI works properly (it has already been
> termalized
> > > and
> > > > gradually equilibrated); neverthless I tried to reduce dt and
> > temperature
> > > > but without any benefit. Does anybody have any suggestions?
> > > >
> > > > Thanks in advance
> > > > Stefano
> > > >
> > > > --
> > > > Stefano GUGLIELMO PhD
> > > > Assistant Professor of Medicinal Chemistry
> > > > Department of Drug Science and Technology
> > > > Via P. Giuria 9
> > > > 10125 Turin, ITALY
> > > > ph. +39 (0)11 6707178
> > > > --
> > > > Gromacs Users mailing list
> > > >
> > > > * Please search the archive at
> > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > > posting!
> > > >
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > > * For (un)subscribe requests visit
> > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> > > > send a mail to gmx-users-requ...@gromacs.org.
> > > >
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at http://www.gromacs.org/
> > > Support/Mailing_Lists/GMX-Users_List before posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> >
> >
> >
> > --
> > Stefano GUGLIELMO PhD
> > Assistant Professor of Medicinal Chemistry
> > Department of Drug Science and Technology
> > Via P. Giuria 9
> > 10125 Turin, ITALY
> > ph. +39 (0)11 6707178
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read 

[gmx-users] different PMF results using MPI

2018-06-22 Thread Srinivasa Ramisetti
Dear GMX-USERS,


I have doubts about trusting the PMF curve of a single molecule pulled from an
immobile calcite surface using the pull code with umbrella sampling in GROMACS. I
extracted two PMFs for the same system (all inputs are the same) by running
simulations on 36 and 72 cores with MPI to check the consistency of the
results. I find a significant change in the magnitude of the two curves (please
see the attached image). The figure also shows the error of the PMF obtained
using the bootstrap method. The histograms for both cases overlap well.


Is it okay to rely on these results? Can anyone suggest how I can reproduce
similar PMF results when running simulations on any number of cores?


These results were produced using GROMACS version 2016.4 with the following
pull code:


pull = yes

pull_ngroups = 2

pull_ncoords = 1

pull_group1_name = SUR ; calcite surface

pull_group2_name = MDX ; organic compound

pull_coord1_type = umbrella ; harmonic biasing force

pull_coord1_geometry = distance ; simple distance increase

pull_coord1_groups = 1 2

pull_coord1_dim = N N Y

pull_coord1_k = 500 ; kJ mol^-1 nm^-2

pull_coord1_rate = 0.0

pull_coord1_init = XXX ; XXX is the COM distance between 2 groups

pull_coord1_start = no
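
The PMF and the bootstrap error mentioned above are typically extracted with
gmx wham; a minimal sketch, with placeholder file names and an arbitrary
number of bootstrap samples (not the exact command used here):

gmx wham -it tpr-files.dat -if pullf-files.dat -o pmf.xvg -hist histo.xvg \
         -bsres bsResult.xvg -nBootstrap 200 -bs-method b-hist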


Thank you,

Srinivasa


Re: [gmx-users] fatal error with mpi

2018-06-22 Thread Mark Abraham
Hi,

Recompiling won't fix anything relevant. Molecular frustration isn't just
about long bonds. Rearrangement of side-chain groupings can do similar
things despite looking happy. The sudden injection of KE means collisions
can be more violent than normal, and the timestep is now too large. But you
need to look at the trajectory to understand whether this might be the case.

Mark

On Fri, Jun 22, 2018 at 3:19 PM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:

> Dear Mark,
>
> thanks for your answer. The version is 2016.5, but I apparently solved the
> problem recompiling gromacs: now the simulation is running quite stable. I
> had minimized and gradually equilibrated the system and I could not see any
> weird bonds or contacts. So in the end, as extrema ratio, I decided to
> recompile.
>
> Thanks again
> Stefano
>
> 2018-06-22 15:07 GMT+02:00 Mark Abraham :
>
> > Hi,
> >
> > This could easily be that your system is actually not yet well
> equilibrated
> > (e.g. something was trapped in a high energy state that eventually
> relaxed
> > sharply). Or it could be a code bug. What version were you using?
> >
> > Mark
> >
> > On Thu, Jun 21, 2018 at 2:36 PM Stefano Guglielmo <
> > stefano.guglie...@unito.it> wrote:
> >
> > > Dear users,
> > >
> > > I have installed gromacs with MPI instead of its native tMPI and I am
> > > encountering the following error:
> > > "Fatal error:
> > > 4 particles communicated to PME rank 5 are more than 2/3 times the
> > cut-off
> > > out
> > > of the domain decomposition cell of their charge group in dimension y.
> > > This usually means that your system is not well equilibrated."
> > >
> > > I am using 8 MPI ranks with 4 omp threads per rank (it was the
> > > configuration used on the same machine by tMPI), but I also tried 4
> ranks
> > > with 8 threads, but it did not solve the problem.
> > >
> > > I don't think this is an issue related to my system because the same
> > system
> > > run with the native tMPI works properly (it has already been termalized
> > and
> > > gradually equilibrated); neverthless I tried to reduce dt and
> temperature
> > > but without any benefit. Does anybody have any suggestions?
> > >
> > > Thanks in advance
> > > Stefano
> > >
> > > --
> > > Stefano GUGLIELMO PhD
> > > Assistant Professor of Medicinal Chemistry
> > > Department of Drug Science and Technology
> > > Via P. Giuria 9
> > > 10125 Turin, ITALY
> > > ph. +39 (0)11 6707178
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at http://www.gromacs.org/
> > Support/Mailing_Lists/GMX-Users_List before posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
>
>
>
> --
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] fatal error with mpi

2018-06-22 Thread Stefano Guglielmo
Dear Mark,

thanks for your answer. The version is 2016.5, but I apparently solved the
problem by recompiling GROMACS: now the simulation is running quite stably. I
had minimized and gradually equilibrated the system and I could not see any
weird bonds or contacts. So in the end, as an extrema ratio, I decided to
recompile.

Thanks again
Stefano

2018-06-22 15:07 GMT+02:00 Mark Abraham :

> Hi,
>
> This could easily be that your system is actually not yet well equilibrated
> (e.g. something was trapped in a high energy state that eventually relaxed
> sharply). Or it could be a code bug. What version were you using?
>
> Mark
>
> On Thu, Jun 21, 2018 at 2:36 PM Stefano Guglielmo <
> stefano.guglie...@unito.it> wrote:
>
> > Dear users,
> >
> > I have installed gromacs with MPI instead of its native tMPI and I am
> > encountering the following error:
> > "Fatal error:
> > 4 particles communicated to PME rank 5 are more than 2/3 times the
> cut-off
> > out
> > of the domain decomposition cell of their charge group in dimension y.
> > This usually means that your system is not well equilibrated."
> >
> > I am using 8 MPI ranks with 4 omp threads per rank (it was the
> > configuration used on the same machine by tMPI), but I also tried 4 ranks
> > with 8 threads, but it did not solve the problem.
> >
> > I don't think this is an issue related to my system because the same
> system
> > run with the native tMPI works properly (it has already been termalized
> and
> > gradually equilibrated); neverthless I tried to reduce dt and temperature
> > but without any benefit. Does anybody have any suggestions?
> >
> > Thanks in advance
> > Stefano
> >
> > --
> > Stefano GUGLIELMO PhD
> > Assistant Professor of Medicinal Chemistry
> > Department of Drug Science and Technology
> > Via P. Giuria 9
> > 10125 Turin, ITALY
> > ph. +39 (0)11 6707178
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>



-- 
Stefano GUGLIELMO PhD
Assistant Professor of Medicinal Chemistry
Department of Drug Science and Technology
Via P. Giuria 9
10125 Turin, ITALY
ph. +39 (0)11 6707178


Re: [gmx-users] openmpi

2018-06-22 Thread Mark Abraham
Hi,

Check that you were running the gmx binary that you thought you were
running, which for an MPI build will by default actually be gmx_mpi ;-)
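
A minimal check of which binaries are actually on your PATH, and of how an MPI
build is typically launched (the rank count and -deffnm name are placeholders):

which gmx gmx_mpi
mpirun -np 4 gmx_mpi mdrun -deffnm md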

Mark

On Thu, Jun 21, 2018 at 1:03 AM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:

> Dear gromacs users,
>
> I am trying to compile gromacs 2016.5 with openmpi compilers installed on
> my machine; here is the configuration command:
>
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
> -DGMX_MPI=on -DMPI_C_COMPILER=/usr/lib64/openmpi/bin/mpicc
> -DMPI_CXX_COMPILER=/usr/lib64/openmpi/bin/mpicxx
>
> compilation and installation end up correctly, but when trying to run
> mdrun, gromacs still uses its own tMPI; how can I avoid tMPI and "force" to
> MPI?
>
> Thanks in advance
> Stefano
>
> --
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] fatal error with mpi

2018-06-22 Thread Mark Abraham
Hi,

This could easily be that your system is actually not yet well equilibrated
(e.g. something was trapped in a high energy state that eventually relaxed
sharply). Or it could be a code bug. What version were you using?

Mark

On Thu, Jun 21, 2018 at 2:36 PM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:

> Dear users,
>
> I have installed gromacs with MPI instead of its native tMPI and I am
> encountering the following error:
> "Fatal error:
> 4 particles communicated to PME rank 5 are more than 2/3 times the cut-off
> out
> of the domain decomposition cell of their charge group in dimension y.
> This usually means that your system is not well equilibrated."
>
> I am using 8 MPI ranks with 4 omp threads per rank (it was the
> configuration used on the same machine by tMPI), but I also tried 4 ranks
> with 8 threads, but it did not solve the problem.
>
> I don't think this is an issue related to my system because the same system
> run with the native tMPI works properly (it has already been termalized and
> gradually equilibrated); neverthless I tried to reduce dt and temperature
> but without any benefit. Does anybody have any suggestions?
>
> Thanks in advance
> Stefano
>
> --
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-22 Thread Mark Abraham
Hi,

There are some differences in recent GROMACS versions here (because the old
implementations were not robust enough), but the checkpoint restart will
not work with appending unless it finds that the output files named in the .cpt
match those on the command line (here, from -deffnm). You're making extra
work for yourself in several ways.

I encourage you to not use -deffnm with a new name that merely signifies
that the extension happened. There's no physical and no real organizational
reason to do this.

If you want numbered output files for each step, then start your
simulations with -noappend and let mdrun number them automatically. But IMO
all that does is make work for you later, concatenating the files again.

If you want appending to work after extending the number of steps, use
-s new.tpr -deffnm old rather than -deffnm new, because the former doesn't
create name mismatches between those output files that the checkpoint
remembers and those you've instructed mdrun to use now.
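
A minimal sketch of that combination, assuming the run otherwise used -deffnm
step7_1 as elsewhere in this thread (the _ext name is only an illustration, and
note that -extend takes picoseconds):

gmx convert-tpr -s step7_1.tpr -o step7_1_ext.tpr -extend 50000
gmx mdrun -s step7_1_ext.tpr -deffnm step7_1 -cpi step7_1.cpt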

And if your reason for using -deffnm is that you want to have multiple
simulation steps in the same directory, bear in mind that using a single
directory to contain a single step is much more robust (you are using the
standard way of grouping related files, called a directory, and using cd is
not any more difficult than -deffnm), and you can just use the default file
naming:

(cd step7; mpirun -np whatever gmx_mpi mdrun -s extended)

Mark

On Fri, Jun 22, 2018 at 11:07 AM Own 12121325  wrote:

> thanks Mark!
>
> could you please also confirm that my method of the prolongation of the
> simulation would be correct
>
> #entend simulation for 50 ns and save these pieces as the separate files
> with the name step7_2*
> gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> mdrun -v -deffnm step7_2 -cpi step7_1.cpt
>
> 2018-06-22 10:57 GMT+02:00 Mark Abraham :
>
> > Hi,
> >
> > The previous checkpoint has the _prev suffix, in case there is a problem
> > that might require you to go further back in time.
> >
> > Mark
> >
> > On Fri, Jun 22, 2018, 10:46 Own 12121325  wrote:
> >
> > > P.S. what the difference between name.cpt and name_prev.cpt produced by
> > > mdrun? What check-point should correspond to the last snapshot in trr
> > file
> > > ?
> > >
> > > 2018-06-22 10:17 GMT+02:00 Own 12121325 :
> > >
> > > > In fact there is an alternative trick :-)
> > > > If I rename a tpr file via gmx convert-tpr  and then run mdrun using
> > this
> > > > new tpr as well as previous checkpoint, it will produce all pieces of
> > the
> > > > trajectory in the separate file:
> > > >
> > > > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> > > > mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt
> > > >
> > > > If I add -noappend flag to the mdrun, its also do the same job but
> also
> > > > will add suffix pat002 to each of the new file (that is not necessary
> > for
> > > > me since I have already renamed tpr).
> > > >
> > > > Gleb
> > > >
> > > >
> > > > 2018-06-21 14:17 GMT+02:00 Justin Lemkul :
> > > >
> > > >>
> > > >>
> > > >> On 6/21/18 2:35 AM, Own 12121325 wrote:
> > > >>
> > > >>> and without append flag it will produce an output in the separate
> > file,
> > > >>> won't it?
> > > >>>
> > > >>
> > > >> No, because appending is the default behavior. Specifying -append
> just
> > > >> invokes what mdrun does on its own. If you want a separate file, add
> > > >> -noappend to your mdrun command.
> > > >>
> > > >> -Justin
> > > >>
> > > >>
> > > >> gmx convert-tpr -s init.tpr -o next.tpr -extend 50
> > > >>> gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt
> > > >>>
> > > >>> 2018-06-21 1:12 GMT+02:00 Justin Lemkul :
> > > >>>
> > > >>>
> > >  On 6/19/18 4:45 AM, Own 12121325 wrote:
> > > 
> > >  Hello Justin,
> > > >
> > > > could you specify please a bit more. Following your method, if
> the
> > > > simulation has been terminated by crash without producing gro
> file
> > so
> > > > to
> > > > re-initiate it I only need one command:
> > > >
> > > > gmx mdrun -s initial.tpr -cpi the_last_chekpoint.cpt -append
> > > >
> > > > where the last_checkpoint should be something like initial.cpt or
> > > > initial_prev.cpt
> > > >
> > > > Right.
> > > 
> > >  but In case if my simulation has been finished correctly e.g. for
> 50
> > > ns
> > >  and
> > > 
> > > > I now need to extend it for another 50 ns,  should I do the
> > following
> > > > trick
> > > > with 2 GMX programs:
> > > >
> > > > gmx convert-tpr -s init.tpr -o next.tpr -extend 50
> > > > gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt -append
> > > >
> > > > it will produce the second part of the trajectory
> as
> the new file
> > > > (next.trr) or merge together the first and the second part ?
> > > >
> > > > You're specifying -append, so the 

[gmx-users] Error energy minimization: No such moleculetype NA

2018-06-22 Thread Gonzalez Fernandez, Cristina
Dear Gromacs users,



I am trying to perform the energy minimization after adding sodium ions. When I
run grompp to generate the .tpr file that will be used for the energy
minimization, I get this error:

Fatal error:
No such moleculetype NA

I have checked the .top file and it's true that there is no NA moleculetype,
but I have simulated other molecules with ions and I have never defined any
moleculetype for the ions.

I have introduced a moleculetype with NA, but then I get this error:

Moleculetype "NA" contains no atoms

How can I solve it? What atoms, exclusions, constraints... do I have to define?
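
For illustration only, a minimal ion moleculetype looks like the sketch below.
It assumes an atom type for sodium (here called Na) is already defined in the
force field being used; the actual type and residue names vary between force
fields, and many force fields already provide such definitions in an ions.itp
included by the main force-field .itp:

[ moleculetype ]
; name  nrexcl
NA       1

[ atoms ]
;  nr  type  resnr  residue  atom  cgnr  charge      mass
    1  Na        1       NA    NA     1   1.000  22.98977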





Could anyone help me?

Any help would be appreciated.

Best,

C.



Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-22 Thread Own 12121325
thanks Mark!

could you please also confirm that my method of prolonging the
simulation is correct:

#extend the simulation for 50 ns and save these pieces as separate files
with the name step7_2*
gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
mdrun -v -deffnm step7_2 -cpi step7_1.cpt

2018-06-22 10:57 GMT+02:00 Mark Abraham :

> Hi,
>
> The previous checkpoint has the _prev suffix, in case there is a problem
> that might require you to go further back in time.
>
> Mark
>
> On Fri, Jun 22, 2018, 10:46 Own 12121325  wrote:
>
> > P.S. what the difference between name.cpt and name_prev.cpt produced by
> > mdrun? What check-point should correspond to the last snapshot in trr
> file
> > ?
> >
> > 2018-06-22 10:17 GMT+02:00 Own 12121325 :
> >
> > > In fact there is an alternative trick :-)
> > > If I rename a tpr file via gmx convert-tpr  and then run mdrun using
> this
> > > new tpr as well as previous checkpoint, it will produce all pieces of
> the
> > > trajectory in the separate file:
> > >
> > > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> > > mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt
> > >
> > > If I add -noappend flag to the mdrun, its also do the same job but also
> > > will add suffix pat002 to each of the new file (that is not necessary
> for
> > > me since I have already renamed tpr).
> > >
> > > Gleb
> > >
> > >
> > > 2018-06-21 14:17 GMT+02:00 Justin Lemkul :
> > >
> > >>
> > >>
> > >> On 6/21/18 2:35 AM, Own 12121325 wrote:
> > >>
> > >>> and without append flag it will produce an output in the separate
> file,
> > >>> won't it?
> > >>>
> > >>
> > >> No, because appending is the default behavior. Specifying -append just
> > >> invokes what mdrun does on its own. If you want a separate file, add
> > >> -noappend to your mdrun command.
> > >>
> > >> -Justin
> > >>
> > >>
> > >> gmx convert-tpr -s init.tpr -o next.tpr -extend 50
> > >>> gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt
> > >>>
> > >>> 2018-06-21 1:12 GMT+02:00 Justin Lemkul :
> > >>>
> > >>>
> >  On 6/19/18 4:45 AM, Own 12121325 wrote:
> > 
> >  Hello Justin,
> > >
> > > could you specify please a bit more. Following your method, if the
> > > simulation has been terminated by crash without producing gro file
> so
> > > to
> > > re-initiate it I only need one command:
> > >
> > > gmx mdrun -s initial.tpr -cpi the_last_chekpoint.cpt -append
> > >
> > > where the last_checkpoint should be something like initial.cpt or
> > > initial_prev.cpt
> > >
> > > Right.
> > 
> >  but In case if my simulation has been finished correctly e.g. for 50
> > ns
> >  and
> > 
> > > I now need to extend it for another 50 ns,  should I do the
> following
> > > trick
> > > with 2 GMX programs:
> > >
> > > gmx convert-tpr -s init.tpr -o next.tpr -extend 50
> > > gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt -append
> > >
> > > it will produce the second part of the trajectory as the new file
> > > (next.trr) or merge together the first and the second part ?
> > >
> > > You're specifying -append, so the output will be concatenated by
> > mdrun.
> >  That's also been default behavior for as long as I can remember, too
> > :)
> > 
> >  -Justin
> > 
> >  --
> >  ==
> > 
> >  Justin A. Lemkul, Ph.D.
> >  Assistant Professor
> >  Virginia Tech Department of Biochemistry
> > 
> >  303 Engel Hall
> >  340 West Campus Dr.
> >  Blacksburg, VA 24061
> > 
> >  jalem...@vt.edu | (540) 231-3129
> >  http://www.thelemkullab.com
> > 
> >  ==
> > 
> >  --
> >  Gromacs Users mailing list
> > 
> >  * Please search the archive at http://www.gromacs.org/Support
> >  /Mailing_Lists/GMX-Users_List before posting!
> > 
> >  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > 
> >  * For (un)subscribe requests visit
> >  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> >  send a mail to gmx-users-requ...@gromacs.org.
> > 
> > 
> > >> --
> > >> ==
> > >>
> > >> Justin A. Lemkul, Ph.D.
> > >> Assistant Professor
> > >> Virginia Tech Department of Biochemistry
> > >>
> > >> 303 Engel Hall
> > >> 340 West Campus Dr.
> > >> Blacksburg, VA 24061
> > >>
> > >> jalem...@vt.edu | (540) 231-3129
> > >> http://www.thelemkullab.com
> > >>
> > >> ==
> > >>
> > >> --
> > >> Gromacs Users mailing list
> > >>
> > >> * Please search the archive at http://www.gromacs.org/Support
> > >> /Mailing_Lists/GMX-Users_List before posting!
> > >>
> > >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >>
> > >> * For (un)subscribe 

Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-22 Thread Mark Abraham
Hi,

The previous checkpoint has the _prev suffix, in case there is a problem
that might require you to go further back in time.

Mark

On Fri, Jun 22, 2018, 10:46 Own 12121325  wrote:

> P.S. what the difference between name.cpt and name_prev.cpt produced by
> mdrun? What check-point should correspond to the last snapshot in trr file
> ?
>
> 2018-06-22 10:17 GMT+02:00 Own 12121325 :
>
> > In fact there is an alternative trick :-)
> > If I rename a tpr file via gmx convert-tpr  and then run mdrun using this
> > new tpr as well as previous checkpoint, it will produce all pieces of the
> > trajectory in the separate file:
> >
> > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> > mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt
> >
> > If I add -noappend flag to the mdrun, its also do the same job but also
> > will add suffix pat002 to each of the new file (that is not necessary for
> > me since I have already renamed tpr).
> >
> > Gleb
> >
> >
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-22 Thread Own 12121325
P.S. What is the difference between name.cpt and name_prev.cpt produced by
mdrun? Which checkpoint should correspond to the last snapshot in the trr file?
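
For reference, name.cpt should normally be the most recent checkpoint, while
name_prev.cpt is the previous one, kept as a backup in case the last write was
interrupted. A minimal sketch for checking this yourself (the md.* file names
below are only placeholders for whatever -deffnm you use):

# step and time stored in each checkpoint
gmx dump -cp md.cpt      | head -n 20
gmx dump -cp md_prev.cpt | head -n 20

# last frame actually written to the trajectory, for comparison
gmx check -f md.trr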

2018-06-22 10:17 GMT+02:00 Own 12121325 :

> In fact there is an alternative trick :-)
> If I rename the tpr file via gmx convert-tpr and then run mdrun using this
> new tpr together with the previous checkpoint, it will write the new piece of
> the trajectory to a separate file:
>
> gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt
>
> If I add the -noappend flag to mdrun, it also does the same job, but it will
> add a part0002 suffix to each of the new files (which is not necessary for
> me since I have already renamed the tpr).
>
> Gleb
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-22 Thread Own 12121325
In fact there is an alternative trick :-)
If I rename the tpr file via gmx convert-tpr and then run mdrun using this
new tpr together with the previous checkpoint, it will write the new piece of
the trajectory to a separate file:

gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt

If I add the -noappend flag to mdrun, it also does the same job, but it will
add a part0002 suffix to each of the new files (which is not necessary for
me since I have already renamed the tpr).
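
For completeness, a minimal sketch of the -noappend variant under similar
assumptions (plain gmx mdrun instead of the MPI wrapper, and a first part
written as step7_1.* without -noappend); the part files can be stitched
together afterwards with gmx trjcat and gmx eneconv:

# extend by 50 ns; gmx convert-tpr -extend takes the time in ps
gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 50000

# continue from the checkpoint; -noappend writes new output as *.part0002.*
gmx mdrun -v -deffnm step7_1 -s step7_2.tpr -cpi step7_1.cpt -noappend

# concatenate the trajectory and energy pieces once the run is done
gmx trjcat  -f step7_1.xtc step7_1.part0002.xtc -o step7_full.xtc
gmx eneconv -f step7_1.edr step7_1.part0002.edr -o step7_full.edr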

Gleb


2018-06-21 14:17 GMT+02:00 Justin Lemkul :

>
>
> On 6/21/18 2:35 AM, Own 12121325 wrote:
>
>> and without the -append flag it will produce the output in a separate file,
>> won't it?
>>
>
> No, because appending is the default behavior. Specifying -append just
> invokes what mdrun does on its own. If you want a separate file, add
> -noappend to your mdrun command.
>
> -Justin
>
>
> gmx convert-tpr -s init.tpr -o next.tpr -extend 50
>> gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt
>>
>> 2018-06-21 1:12 GMT+02:00 Justin Lemkul :
>>
>>
>>> On 6/19/18 4:45 AM, Own 12121325 wrote:
>>>
>>> Hello Justin,

 could you please specify a bit more. Following your method, if the
 simulation has been terminated by a crash without producing a gro file, then
 to re-initiate it I only need one command:

 gmx mdrun -s initial.tpr -cpi the_last_chekpoint.cpt -append

 where the last_checkpoint should be something like initial.cpt or
 initial_prev.cpt

 Right.
>>>
>>> but in case my simulation has finished correctly, e.g. for 50 ns,
>>> and
>>>
 I now need to extend it for another 50 ns, should I do the following trick
 with 2 GMX programs:

 gmx convert-tpr -s init.tpr -o next.tpr -extend 50
 gmx mdrun -s next.tpr -cpi the_last_chekpoint.cpt -append

 will it produce the second part of the trajectory as a new file
 (next.trr), or will it merge the first and the second part together?

 You're specifying -append, so the output will be concatenated by mdrun.
>>> That's also been default behavior for as long as I can remember, too :)
>>>
>>> -Justin
>>>
>>> --
>>> ==
>>>
>>> Justin A. Lemkul, Ph.D.
>>> Assistant Professor
>>> Virginia Tech Department of Biochemistry
>>>
>>> 303 Engel Hall
>>> 340 West Campus Dr.
>>> Blacksburg, VA 24061
>>>
>>> jalem...@vt.edu | (540) 231-3129
>>> http://www.thelemkullab.com
>>>
>>> ==
>>>
>>> --
>>> Gromacs Users mailing list
>>>
>>> * Please search the archive at http://www.gromacs.org/Support
>>> /Mailing_Lists/GMX-Users_List before posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>> send a mail to gmx-users-requ...@gromacs.org.
>>>
>>>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support
> /Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Regarding -nmol option

2018-06-22 Thread Apramita Chand
Dear All,
When computing interaction energies between groups with g_energy, the -nmol
option needs an integer argument that (as mentioned) should be equal to the
total number of particles in the system.

So if I have a protein-urea-water solution with a single protein, 15 urea, and
984 water molecules, what should -nmol be for the protein-urea interaction
energy?
The number of protein + urea molecules?
Or the total number of molecules, including water?

Again, for protein-water interaction energies, should -nmol be the number of
protein + water molecules, or the total number of molecules?
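
As background, the gmx energy help text states that the energies are simply
divided by the number given to -nmol, i.e. the option only rescales the output
to per-molecule values and does not change which interactions are summed. A
minimal sketch of extracting such a term (group and file names here are
hypothetical, assuming energygrps = Protein UREA Water was set in the
production .mdp so the pairwise terms exist in the .edr):

# short-range protein-urea terms; -nmol N just divides the reported energies by N
echo "Coul-SR:Protein-UREA LJ-SR:Protein-UREA" | gmx energy -f prod.edr -nmol 1 -o prot_urea.xvg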


Regards
Yours sincerely
Apramita
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.