[gmx-users] REMD

2018-05-26 Thread Eric Smoll
Hello Gromacs Users,

I am interested in calculating the equilibrium distribution of molecular
structures at the vacuum-liquid interface of several different low vapor
pressure liquids. All of these liquids are very viscous at or near
room temperature, and I suspect that conformational barriers may inhibit
sampling at the vacuum-liquid interface. However, in NVT MD simulations,
these liquids become more fluid at higher temperatures (400-500 K) while
maintaining a reasonably well-defined vacuum-liquid interface.

Can I use NVT REMD to efficiently overcome any kinetic trapping that might
be going on and obtain a true equilibrium distribution of molecular
structures at the vacuum-liquid interface? A superficial literature search
does not yield examples of NVT REMD on a liquid interface. I am curious if
there are issues or complications with this approach. Is there a better
alternative?

The manual states that "all possible pairs are tested for exchange" in
Gibbs REMD. Looking through the mdrun help output, it seems this
option is selected with the "-nex" flag. However, the comment for
this flag suggests using N^3. Isn't something like N*(N-1)/2 more
appropriate (where N is the number of replicas)?
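
(A quick count for reference: with N replicas there are

    N*(N-1)/2

distinct pairs, e.g. 4950 for N = 100. But -nex counts attempted swaps
rather than distinct pairs; the ~N^3 suggestion comes from treating
exchange as Gibbs sampling over replica permutations, where many repeated
random pair attempts per exchange interval are needed for the permutation
to mix, not just a single test of each pair.)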

Thanks for the guidance!

Best,
Eric
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] REMD

2018-06-08 Thread Eric Smoll
Hello GROMACS users,

As far as I understand, increasing the number of random exchanges to a
large number (mdrun suggests N^3 where N is the number of replicas) moves a
REMD simulation from a neighbor exchange procedure to a Gibbs exchange
procedure.  Can anyone provide some practical advice or references useful
in deciding which to use?  Naively, I would guess that a Gibbs exchange
procedure would converge faster for a REMD equilibration with a large
number of replicas (~100). Is this usually true?
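
For concreteness, a minimal invocation sketch (replica count and directory
names are placeholders, assuming an MPI build):

mpirun -np 100 gmx_mpi mdrun -multidir sim_{000..099} -replex 1000 -nex 1000000

where 1000000 ~ N^3 random pair swaps are attempted at each exchange
interval; with -nex 0 (the default) only neighbor exchanges are attempted.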

Best,
Eric


[gmx-users] REMD

2019-07-30 Thread Bratin Kumar Das
Hi,
I have some doubts regarding REMD simulation.
1. In the .mdp file of each replica, is it necessary to keep the
gen-temp constant?
For example: 300 K is the lowest temperature of the REMD simulation. Is it
necessary to keep gen-temp=300 in each replica?
2. Is it necessary to provide the -replex flag during the equilibration
phase of the REMD simulation?


[gmx-users] REMD

2019-09-08 Thread Omkar Singh
Hello gmx users,
I am getting a "load imbalance" error in the REMD NVT equilibration step. Can
anyone help me with this issue?
Thanks


[gmx-users] REMD

2013-12-03 Thread Shine A
Sir,
   I want to do an REMD simulation with 16 replicas, but I have only
8 processors. Is it possible to run 16 replicas on 8 processors? How can I
do this?


[gmx-users] REMD error

2016-05-13 Thread YanhuaOuyang
Hi,
I am running a REMD of a protein; when I submit "gmx mdrun -s md_0_${i}.tpr 
-multi 46 -replex 1000 -reseed -1", it fails with the error below:
Fatal error:
mdrun -multi or -multidir are not supported with the thread-MPI library. Please 
compile GROMACS with a proper external MPI library.
I have installed OpenMPI and GROMACS 5.1.
Does anyone know the problem?

Yours sincerely,
Ouyang


[gmx-users] REMD analysis

2016-11-21 Thread Kalyanashis Jana
Dear all,

I have performed an REMD simulation of a protein-drug system (8350 + 32500
sol) using the gromacs-4.4.5 package, but I could not understand how to
analyse the REMD results. I used a set of 10 replicas (298 K to 308.31 K with
r = 1.0038, the common ratio of the geometric progression) and carried out a
5 ns simulation. I would like to compare the thermodynamics of two drug
molecules using REMD. Can you please suggest how I can plot potential energy
vs probability, and how I can get a free energy profile? What types of
analysis do I need to understand REMD?

Looking forward to hearing from you.

Thanks in advance,

Kalyanashis Jana


[gmx-users] REMD Simulation

2018-04-16 Thread ISHRAT JAHAN
Dear all,
I am trying to do REMD simulations in different cosolvents. I have generated
temperatures using a temperature-generating tool, but it gives a different
number of temperatures in different solvents for an exchange probability of
0.25. Is it fair to do REMD with different numbers of replicas? In what way
will it affect the results?
Thank you
-- 
Ishrat Jahan
Research Scholar
Department Of Chemistry
A.M.U Aligarh


[gmx-users] REMD temperature_space

2018-05-03 Thread Sundari
Dear gromacs users,

Can anyone please suggest how to get the time evolution of a
replica (say replica_1) in temperature space, and the time course of the
potential energy of each replica (is the md.edr file one way?)
Following the GROMACS tutorial, I used the demux.pl script and got two files,
replica_index.xvg and replica_temp.xvg. But I want to analyse a single
replica trajectory across all temperatures (temperature on the y-axis).
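
For a quick look, assuming Grace is available (.xvg files are Grace format):

xmgrace -nxy replica_temp.xvg    # temperature index of each replica vs time

and each .edr file gives the potential-energy time course at one fixed
temperature via gmx energy, since mdrun writes the energy files per
temperature, not per replica.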


Thank you in advance..

Sundari


[gmx-users] REMD tutorial

2014-08-20 Thread shahab shariati
Dear Mark

Earlier, at the following address, you said that Google knows about two
GROMACS REMD tutorials:

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-January/086563.html

Unfortunately, I could not find the tutorials you mentioned.



Also, at the following address you said: I've added a section on
replica-exchange to
http://wiki.gromacs.org/index.php/Steps_to_Perform_a_Simulation

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2007-December/031188.html

Is this link still active? I cannot access it.
-

I want to know: is there a tutorial for REMD like the ones at
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/.

Any help will be highly appreciated.


[gmx-users] REMD Plots

2019-01-08 Thread Shan Jayasinghe
Dear Gromacs users,

How do we plot a graph of temperature vs swap step number for a REMD
simulation with 30 systems? I have already generated the replica_temp.xvg and
replica_index.xvg files using the demux.pl script.

Thank you.

Best Regards
Shan Jayasinghe


[gmx-users] remd error

2019-07-17 Thread Bratin Kumar Das
Hi,
   I am running an REMD simulation in GROMACS 2016.5. After generating the
multiple .tpr files, one per directory, with the following command

for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

I run

mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
-reseed 175320 -deffnm remd_equil
It is giving the following error
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
--
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
I do not understand the error. Any suggestion will be highly
appreciated. The .mdp file and the qsub.sh file are attached below.

qsub.sh...
#! /bin/bash
#PBS -V
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
#PBS -N mdrun-serial
#PBS -j oe
#PBS -o output.log
#PBS -e error.log
#cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
NP='cat $PBS_NODEFILE | wc -1'
# mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
-s nvt.tpr -deffnm nvt
#/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
8 -replex 1000 -deffnm remd_out
for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
remd.tpr -multi 8 -replex 1000 -deffnm remd$i_out ; cd ..; done

Re: [gmx-users] REMD

2019-08-01 Thread Justin Lemkul




On 7/31/19 1:44 AM, Bratin Kumar Das wrote:

Hi,
 I have some doubts regarding REMD simulation.
 1. In the .mdp file of each replica, is it necessary to keep the
gen-temp constant?
For example: 300 K is the lowest temperature of the REMD simulation. Is it
necessary to keep gen-temp=300 in each replica?


No, because each subsystem needs to be equilibrated independently at the 
desired temperature.
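
For instance, the equilibration .mdp for a replica meant to run at 310 K
would carry (a sketch; the temperature is illustrative):

ref_t    = 310 310    ; this replica's target temperature
gen_vel  = yes
gen_temp = 310        ; matches this replica's ref_t, not the lowest T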



 2. Is it necessary to provide the -replex flag during the equilibration
phase of the REMD simulation?


No, because these simulations are independent of one another. Only 
during the actual REMD do you need -replex.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] REMD

2019-08-01 Thread Bratin Kumar Das
Thanks for clarification.

On Thu, Aug 1, 2019 at 7:43 PM Justin Lemkul  wrote:

>
>
> On 7/31/19 1:44 AM, Bratin Kumar Das wrote:
> > Hi,
> >  I have some doubt regarding REMD simulation.
> >  1. In the .mdp file of each replica is it necessary to keep the
> > gen-temp constant?
> > as example: 300 k is the lowest temp of REMD simulation. Is it necessary
> to
> > keep the gen-temp=300 in each replica.
>
> No, because each subsystem needs to be equilibrated independently at the
> desired temperature.
>
> >  2. Is it necessary to provide -replex flag during the equilbration
> > phase of REMD simulation
>
> No, because these simulations are independent of one another. Only
> during the actual REMD do you need -replex.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


[gmx-users] REMD-error

2019-09-03 Thread Bratin Kumar Das
Dear all,
I am running one REMD simulation with 65 replicas. I am using
130 cores for the simulation. I am getting the following error.

Fatal error:
Your choice of number of MPI ranks and amount of resources results in using 16
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 1 and 6 threads per rank. If you want to run with this setup,
specify the -ntomp option. But we suggest to change the number of MPI ranks.

When I use the -ntomp option, it throws another error:

Fatal error:
Setting the number of thread-MPI ranks is only supported with thread-MPI and
GROMACS was compiled without thread-MPI


while GROMACS is compiled with thread-MPI...

Please help me in this regard.
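
(For what it's worth: the second message is what mdrun prints when a
thread-MPI rank count, i.e. -ntmpi or -nt, is passed to an MPI-enabled
build; -ntomp itself is accepted there. A sketch of an invocation consistent
with the first message, directory names assumed:

mpirun -np 130 gmx_mpi mdrun -multidir rep_{00..64} -replex 1000 -ntomp 1

with 130 ranks / 65 replicas = 2 ranks per replica, and -ntomp 1 so that
130 ranks x 1 thread matches the 130 cores.)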


Re: [gmx-users] remd

2013-11-18 Thread Justin Lemkul



On 11/14/13 1:05 AM, Shine A wrote:

sir,

  I have a basic doubt about remd simulation. In remd is it possible to
run 16 replicas in 8 processors?



No.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] remd

2013-11-18 Thread Mark Abraham
Yes, just tell your MPI setup to do that. Performance will degrade, and
mdrun will complain that it can't set processor affinities, which is fine
for your purpose.

Mark
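
With a recent OpenMPI, for example, something like (a sketch; file names
assumed, and --oversubscribe is OpenMPI-specific):

mpirun -np 16 --oversubscribe mdrun_mpi -s remd.tpr -multi 16 -replex 500

Two ranks then share each core, so wall-clock time roughly doubles.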
On Nov 14, 2013 7:06 AM, "Shine A"  wrote:

> sir,
>
>  I have a basic doubt about remd simulation. In remd is it possible to
> run 16 replicas in 8 processors?


[gmx-users] REMD exchange probabilities

2015-03-08 Thread Neha Gandhi
Dear list,

Using an exchange probability of 0.25 and a temperature range of 293-370 K, I
calculated the number of replicas using the temperature-generator server.
However, when I did the first run and tried exchanging replicas every 500
steps (1 ps), I don't think the exchange probabilities make sense, in
particular for replicas 15 and 16. Replica 15 has a low exchange ratio of
0.12 while replica 16 has a high exchange ratio of 0.55.

Repl  average probabilities:
Repl 0123456789   10   11   12
13   14   15   16   17   18   19   20   21   22   23   24   25   26   27
28   29   30   31   32   33   34   35   36   37   38   39   40   41   42
43   44   45   46   47
Repl  .28  .28  .28  .28  .29  .28  .29  .29  .28  .29  .28  .28  .29
.29  .29  .12  .55  .29  .29  .30  .30  .29  .29  .26  .32  .31  .30  .30
.30  .30  .30  .31  .31  .31  .31  .31  .31  .31  .31  .31  .31  .31  .32
.32  .32  .32  .33
Repl  number of exchanges:
Repl 0123456789   10   11   12
13   14   15   16   17   18   19   20   21   22   23   24   25   26   27
28   29   30   31   32   33   34   35   36   37   38   39   40   41   42
43   44   45   46   47
Repl 2901 2954 2873 3017 3038 2910 3009 2993 2934 3002 2981 2999 2927
3038 3059 1229 5757 3056 3100 3136 3054 3053 3109 2743  3166 3097 3185
3161 3189 3133 3226 3261 3242 3229 3205 3249 3227 3221 3222 3326 3303 3309
3320 3373 3346 3474
Repl  average number of exchanges:
Repl 0123456789   10   11   12
13   14   15   16   17   18   19   20   21   22   23   24   25   26   27
28   29   30   31   32   33   34   35   36   37   38   39   40   41   42
43   44   45   46   47
Repl  .28  .28  .27  .29  .29  .28  .29  .29  .28  .29  .29  .29  .28
.29  .29  .12  .55  .29  .30  .30  .29  .29  .30  .26  .32  .30  .30  .30
.30  .31  .30  .31  .31  .31  .31  .31  .31  .31  .31  .31  .32  .32  .32
.32  .32  .32  .33


Below are the temperatures I have used. How do I manually edit temperatures
to get average exchange probabilities between 0.2-0.3?
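
(A pointer for where to look, using the standard acceptance expression with
beta = 1/kT:

    p_acc = min{ 1, exp[ (beta_i - beta_j) * (U_i - U_j) ] }

so acceptance between neighbors falls as the temperature gap widens. In the
list below, the step 316.40 -> 318.63 K (2.23 K) is much wider than the
surrounding ~1.6 K steps, while 318.63 -> 319.63 K (1.00 K) is much
narrower; that matches the .12 and .55 outliers at replicas 15 and 16.
Replacing the 318.63 entry with roughly 318.0 K restores the ~1.6 K
geometric spacing on both sides and should pull both pairs back toward the
~0.3 seen elsewhere.)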

ref_t = 293    293       ; reference temperature, one for each group, in K
ref_t = 294.51 294.51    ; reference temperature, one for each group, in K
ref_t = 296.03 296.03    ; reference temperature, one for each group, in K
ref_t = 297.56 297.56    ; reference temperature, one for each group, in K
ref_t = 299.09 299.09    ; reference temperature, one for each group, in K
ref_t = 300.63 300.63    ; reference temperature, one for each group, in K
ref_t = 302.18 302.18    ; reference temperature, one for each group, in K
ref_t = 303.73 303.73    ; reference temperature, one for each group, in K
ref_t = 305.29 305.29    ; reference temperature, one for each group, in K
ref_t = 306.86 306.86    ; reference temperature, one for each group, in K
ref_t = 308.43 308.43    ; reference temperature, one for each group, in K
ref_t = 310.01 310.01    ; reference temperature, one for each group, in K
ref_t = 311.60 311.60    ; reference temperature, one for each group, in K
ref_t = 313.19 313.19    ; reference temperature, one for each group, in K
ref_t = 314.79 314.79    ; reference temperature, one for each group, in K
ref_t = 316.40 316.40    ; reference temperature, one for each group, in K
ref_t = 318.63 318.63    ; reference temperature, one for each group, in K
ref_t = 319.63 319.63    ; reference temperature, one for each group, in K
ref_t = 321.26 321.26    ; reference temperature, one for each group, in K
ref_t = 322.89 322.89    ; reference temperature, one for each group, in K
ref_t = 324.52 324.52    ; reference temperature, one for each group, in K
ref_t = 326.17 326.17    ; reference temperature, one for each group, in K
ref_t = 327.82 327.82    ; reference temperature, one for each group, in K
ref_t = 329.49 329.49    ; reference temperature, one for each group, in K
ref_t = 331.26 331.26    ; reference temperature, one for each group, in K
ref_t = 332.86 332.86    ; reference temperature, one for each group, in K
ref_t = 334.51 334.51    ; reference temperature, one for each group, in K
ref_t = 336.20 336.20    ; reference temperature, one for each group, in K
ref_t = 337.90 337.90    ; reference temperature, one for each group, in K
ref_t = 339.61 339.61    ; reference temperature, one for each group, in K
ref_t = 341.32 341.32    ; reference temperature, one for each group, in K
ref_t = 343.04 343.04    ; reference temperature, one for each group, in K
ref_t = 344.76 344.76    ; reference temperature, one for each group, in K
ref_t = 346.49 346.49    ; reference temperature, one for each group, in K
re

[gmx-users] REMD mdrun_mpi error

2015-06-22 Thread Nawel Mele
Dear gromacs users,

I am trying to simulate a ligand using the REMD method in explicit solvent
with the CHARMM force field. When I try to equilibrate my system I get this
error:

Double sids (0, 1) for atom 26
Double sids (0, 1) for atom 27
Double sids (0, 1) for atom 28
Double sids (0, 1) for atom 29
Double sids (0, 1) for atom 30
Double sids (0, 1) for atom 31
Double sids (0, 1) for atom 32
Double sids (0, 1) for atom 33
Double sids (0, 1) for atom 34
Double sids (0, 1) for atom 35
Double sids (0, 1) for atom 36
Double sids (0, 1) for atom 37
Double sids (0, 1) for atom 38
Double sids (0, 1) for atom 39
Double sids (0, 1) for atom 40

---
Program mdrun_mpi, VERSION 4.6.5
Source code file:
/local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
line: 99

Fatal error:
Double entries in block structure. Item 53 is in blocks 1 and 0
 Cannot make an unambiguous inverse block.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



My mdp input file looks like this:

title                = CHARMM compound NVT equilibration
define               = -DPOSRES   ; position restrain the protein
; Run parameters
integrator           = sd         ; leap-frog stochastic dynamics integrator
nsteps               = 100        ; 2 * 100 = 100 ps
dt                   = 0.002      ; 2 fs
; Output control
nstxout              = 500        ; save coordinates every 0.2 ps
nstvout              = 10         ; save velocities every 0.2 ps
nstenergy            = 500        ; save energies every 0.2 ps
nstlog               = 500        ; update log file every 0.2 ps
; Bond parameters
continuation         = no         ; first dynamics run
constraint_algorithm = SHAKE      ; holonomic constraints
constraints          = h-bonds    ; all bonds (even heavy atom-H bonds) constrained
shake-tol            = 0.1        ; relative tolerance for SHAKE
; Neighborsearching
ns_type              = grid       ; search neighboring grid cells
nstlist              = 5          ; 10 fs
rlist                = 1.0        ; short-range neighborlist cutoff (in nm)
rcoulomb             = 1.0        ; short-range electrostatic cutoff (in nm)
rvdw                 = 1.0        ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype          = PME        ; Particle Mesh Ewald for long-range electrostatics
pme_order            = 4          ; Interpolation order for PME. 4 equals cubic interpolation
fourierspacing       = 0.16       ; grid spacing for FFT
; Temperature coupling is on
tcoupl               = V-rescale  ; modified Berendsen thermostat
tc-grps              = LIG SOL    ; two coupling groups - more accurate
tau_t                = 1.0 1.0    ; time constant, in ps
ref_t                = X X        ; reference temperature, one for each group, in K
; Langevin dynamics
bd-fric              = 0          ; Brownian dynamics friction coefficient
ld-seed              = -1         ; pseudo random seed is used
; Pressure coupling is off
pcoupl               = no         ; no pressure coupling in NVT
; Periodic boundary conditions
pbc                  = xyz        ; 3-D PBC
; Dispersion correction
DispCorr             = EnerPres   ; account for cut-off vdW scheme
; Velocity generation
gen_vel              = yes        ; assign velocities from Maxwell distribution
gen_temp             = 0.0        ; temperature for Maxwell distribution
gen_seed             = -1         ; generate a random seed


And my input file to run it in parallel looks like this:

#!/bin/bash
#PBS -l nodes=3:ppn=16
#PBS -l walltime=00:10:00
#PBS -o zzz.qsub.out
#PBS -e zzz.qsub.err
module load openmpi
module load gromacs/4.6.5
mpirun -np 48 mdrun_mpi -s eq_.tpr -multi 48 -replex 10 >& faillog-X.log


Has anyone seen this issue before?

Many thanks,
-- 

Nawel Mele, PhD Research Student

Jonathan Essex Group, School of Chemistry

University of Southampton,  Highfield

Southampton, SO17 1BJ


[gmx-users] REMD temperature trajectory

2015-08-28 Thread Nawel Mele
Dear Gromacs user,

I performed a REMD simulation and I want to analyse my results per
temperature. I am interested in looking at the trajectories for the lowest
and the highest temperatures.
I am used to performing REMD with Amber, and I realised that Amber
exchanges temperatures during the simulation, whereas Gromacs returns
discontinuous trajectories for each temperature.
So my question is: do I need to use the demux.pl script to get a
"temperature trajectory", or can I just create a trajectory at the
temperature of interest from the log output file?
For example, if I am interested in the lowest temperature, should I
just analyse the prod0.log file?

Another question, the output replica_temp.xvg from the demux.pl looks
like this :

0   0123456789
  10   11   12   13   14   15   16   17   18   19   20   21   22   23
 24   25   26   27   28   29   30   31   32   33   34   35   36   37
38   39   40   41   42   43   44   45   46   47
2   1023546789
  10   11   13   12   14   15   16   17   18   19   21   20   23   22
 24   25   27   26   28   29   31   30   33   32   34   35   37   36
39   38   41   40   43   42   44   45   46   47
4   201364587   10
   9   11   13   12   14   15   16   17   18   20   22   19   24   21
 23   26   28   25   27   30   32   29   33   31   34   35   37   36
40   38   41   39   44   42   43   46   45   47
6   310264597   11
   8   10   12   13   15   14   16   17   19   20   22   18   24   21
 23   27   29   25   26   30   32   28   33   31   34   35   37   36
41   39   40   38   45   43   42   47   44   46


Does that mean that, except for the first column, each column
corresponds to a temperature? And so from that we can follow the
trajectory of the replicas for a temperature of interest?
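
(For reference, the usual pipeline, file names assumed: mdrun's
per-simulation output files are already continuous in temperature, and
demuxing is only needed to recover replica-continuous trajectories:

demux.pl prod0.log                               # writes replica_index.xvg and replica_temp.xvg
trjcat -f prod*.xtc -demux replica_index.xvg     # one temperature-hopping trajectory per replica

so for analysis strictly at the lowest temperature, the unmodified prod0
trajectory and energy files are the ones to use.)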

Many thanks in advance

Nawel


-- 

Nawel Mele, PhD Research Student

Jonathan Essex Group, School of Chemistry

University of Southampton,  Highfield

Southampton, SO17 1BJ


[gmx-users] REMD of IDPs

2016-04-07 Thread YanhuaOuyang
Hi, I have the sequence of an intrinsically disordered protein, and I have no
idea how to start my REMD with GROMACS, e.g. how to convert my sequence into
a PDB file.


Re: [gmx-users] REMD error

2016-05-13 Thread Mark Abraham
Hi,

Yes. Exactly as the error message says, you need to compile GROMACS
differently, with real MPI support. See
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations

Mark
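
A configuration sketch consistent with that page (install path assumed):

cmake .. -DGMX_MPI=on -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-5.1-mpi
make -j 4 && make install

which installs a gmx_mpi binary to be launched under mpirun.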

On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I am running a REMD of a protein, when I submit "gmx mdrun -s
> md_0_${i}.tpr -multi 46 -replex 1000 -reseed -1", it fails as the below
> Fatal error:
> mdrun -multi or -multidir are not supported with the thread-MPI library.
> Please compile GROMACS with a proper external MPI library.
> I have installed the openmpi  and gromacs 5.1.
> Do anyone know the problem.
>
> yours sincerelly,
> Ouyang


Re: [gmx-users] REMD error

2016-05-13 Thread YanhuaOuyang
Hi,
I have installed OpenMPI 1.10, and I can run mpirun. When I installed 
GROMACS 5.1, I configured -DGMX_MPI=on.
And the error still happens.
> On 13 May 2016, at 15:59, Mark Abraham wrote:
> 
> Hi,
> 
> Yes. Exactly as the error message says, you need to compile GROMACS
> differently, with real MPI support. See
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
> 
> Mark
> 
> On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com> wrote:
> 
>> Hi,
>> I am running a REMD of a protein, when I submit "gmx mdrun -s
>> md_0_${i}.tpr -multi 46 -replex 1000 -reseed -1", it fails as the below
>> Fatal error:
>> mdrun -multi or -multidir are not supported with the thread-MPI library.
>> Please compile GROMACS with a proper external MPI library.
>> I have installed the openmpi  and gromacs 5.1.
>> Do anyone know the problem.
>> 
>> yours sincerelly,
>> Ouyang

Re: [gmx-users] REMD error

2016-05-13 Thread Mark Abraham
Hi,

If you've configured with GMX_MPI, then the resulting GROMACS binary is
called gmx_mpi, so mpirun -np X gmx_mpi mdrun -multi ...

Mark

On Fri, May 13, 2016 at 10:09 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I have installed OpenMPI 1.10, and I can run mpirun. When I installed
> GROMACS 5.1, I configured -DGMX_MPI=on.
> And the error still happens.
> > On 13 May 2016, at 15:59, Mark Abraham wrote:
> >
> > Hi,
> >
> > Yes. Exactly as the error message says, you need to compile GROMACS
> > differently, with real MPI support. See
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
> >
> > Mark
> >
> > On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com>
> wrote:
> >
> >> Hi,
> >> I am running a REMD of a protein, when I submit "gmx mdrun -s
> >> md_0_${i}.tpr -multi 46 -replex 1000 -reseed -1", it fails as the below
> >> Fatal error:
> >> mdrun -multi or -multidir are not supported with the thread-MPI library.
> >> Please compile GROMACS with a proper external MPI library.
> >> I have installed the openmpi  and gromacs 5.1.
> >> Do anyone know the problem.
> >>
> >> yours sincerelly,
> >> Ouyang

[gmx-users] REMD implicit solvent

2018-01-05 Thread Urszula Uciechowska


Dear gromacs users,

I am trying to run REMD simulations using 4.5.5 version (implicit
solvent). The MD procedure:

pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.

The input for minimization step:

; Run control parameters
integrator   = cg
nsteps   = 800
vdwtype  = cut-off
coulombtype  = cut-off
;cutoff-scheme= group
pbc  = no
periodic_molecules   = no
nstlist  = 10
ns_type  = grid
rlist= 1.0
rcoulomb = 1.6
rvdw = 1.6
comm-mode= Angular
nstcomm  = 10
;
;Energy minimizing stuff
;
emtol= 100.0
nstcgsteep   = 2
emstep   = 0.01
;
;Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1
;
; Implicit solvent
;
implicit_solvent = GBSA
gb_algorithm = OBC  ;Still  HCT   OBC
nstgbradii   = 1.0
rgbradii = 1.0  ; [nm] Cut-off for the calculation of
the Born radii. Currently must be equal to rlist
gb_epsilon_solvent   = 80   ; Dielectric constant for the implicit
solvent
gb_saltconc  = 0; Salt concentration for implicit
solvent models, currently not used
sa_algorithm = Ace-approximation
sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2) for
the SA (nonpolar surface) part of GBSA. The value -1 will set default
value for Still/HCT/OBC GB-models.

and it finished without errors.

The problem is with equilibration step. The input file that I used is:

; MD CONTROL OPTIONS
integrator  = md
dt  = 0.002
nsteps  = 5 ; 10 ns
init_step   = 0; For exact run continuation or
redoing part of a run
comm-mode   = Angular  ; mode for center of mass motion
removal
nstcomm = 10   ; number of steps for center of
mass motion removal

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout  = 1000
nstvout  = 1000
nstfout  = 1000

; Output frequency for energies to log file and energy file
nstlog   = 1000
nstcalcenergy= 10
nstenergy= 1000

; Neighbor searching and Electrostatitcs
vdwtype  = cut-off
coulombtype  = cut-off
;cutoff-scheme= group
pbc  = no
periodic_molecules   = no
nstlist  = 5
ns_type  = grid
rlist= 1.0
rcoulomb = 1.6
rvdw = 1.0
; Selection of energy groups
energygrps   = fixed not_fixed
freezegrps   = fixed not_fixed
freezedim= Y Y Y N N N

;Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1

; Temperature coupling
tcoupl   = v-rescale
tc_grps  = fixed not_fixed
tau_t= 0.01 0.01
;nst_couple   = 5
ref_t= 300.00 300.00

; Pressure coupling
pcoupl   = no
;pcoupletype  = isotropic
tau_p= 1.0
;compressiblity   = 4.5e-5
ref_p= 1.0
gen_vel  = yes
gen_temp = 300.00 300.00
gen_seed = -1
constraints  = h-bonds


; Implicit solvent
implicit_solvent = GBSA
gb_algorithm = Still ; HCT  ; OBC
nstgbradii   = 1.0
rgbradii = 1.0  ; [nm] Cut-off for the calculation
of the Born radii. Currently must be equal to rlist
gb_epsilon_solvent   = 80   ; Dielectric constant for the
implicit solvent
gb_saltconc  = 0; Salt concentration for implicit
solvent models, currently not used
sa_algorithm = Ace-approximation
sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2)
for the SA (nonpolar surface) part of GBSA. The value -1 will set default
value for Still/HCT/OBC GB-models.


mdrun -v -multidir eq_[12345678]

The error that I obtained is:

Fatal error:
A charge group moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I do not know what is wrong. I checked the fatal error at
www.gromacs.org/Documentation/Errors. My system is OK, and I tried to increase
the number of minimization steps, but it did not help. I have also checked
http://www.gromacs.org/Documentation/How-tos/REMD but cannot move forward
because of the equilibration step.
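
(One guess rather than a diagnosis: "a charge group moved too far between
two domain decomposition steps" is a domain-decomposition failure, and
implicit-solvent/no-PBC systems decompose poorly. Running each replica on a
single rank avoids domain decomposition entirely, e.g.:

mpirun -np 8 mdrun_mpi -v -multidir eq_[12345678]    # one rank per replica, so no DD

assuming an MPI build named mdrun_mpi in the 4.5.x series.)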

I appreciate any recommendation.

Thanks

Urszula



Urszula Uciechowska PhD
University of Gdansk and Medical University of Gdansk
Dep

[gmx-users] REMD DLB bug

2018-02-12 Thread Akshay
Hello All,

I was running REMD simulations on Gromacs 2016.1 when my simulation crashed
with the error

Assertion failed:
Condition: comm->cycl_n[ddCyclStep] > 0
When we turned on DLB, we should have measured cycles

I saw that there was a bug #2298 reported about this recently at
https://redmine.gromacs.org/issues/2298. I wanted to know if this fix has
been implemented in the latest 2018 or 2016.4 versions?

Thanks,
Akshay


Re: [gmx-users] REMD Simulation

2018-04-16 Thread Mark Abraham
Hi,

On Mon, Apr 16, 2018 at 10:21 AM ISHRAT JAHAN  wrote:

> Dear all,
> I am trying to do REMD simulation in different cosolvents. I have generated
> temperatures using temperature genrating tools but it gives different
> number of temperatures in different solvents with exchange probability of
> 0.25. Is it fair to do remd with different replicas?


Sure. But first you should understand why the number of degrees of freedom
in the system are relevant for affecting the temperature spacing required
for constant exchange probability. See, among other references
https://pubs.acs.org/doi/abs/10.1021/ct800016r (shameless self-plug...)
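
The one-line version of the argument, under the usual Gaussian-energy
approximation: acceptance stays roughly constant when successive
temperatures form a geometric progression,

    T_{i+1} / T_i = 1 + epsilon,   with epsilon proportional to 1/sqrt(Ndf)

so a cosolvent system with more degrees of freedom Ndf needs tighter spacing,
and hence more replicas, to cover the same temperature range at the same
exchange probability.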


> In what way it will
> effect the results?
>

What results are you seeking? Why would the number of replicas be a
relevant parameter determining the result?

Mark


> Thankyou
> --
> Ishrat Jahan
> Research Scholar
> Department Of Chemistry
> A.M.U Aligarh


Re: [gmx-users] REMD temperature_space

2018-05-03 Thread Mark Abraham
Hi,

It sounds like you just want to use the original data, which you had before
you ran the demux script.

Mark

On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:

> Dear gromacs users,
>
> can anyone please suggest me that how we  get the time evolution of a
> replica (say replica_1) in temperature space and time courses of potential
> energy of each replica(  one way is md.edr file??)
> As according to GROMACS tutorial, I used demux.pl script and got two files
> replica_index.xvg and replica_temp.xvg.  But I want to analyse a single
> replica trajectory in all temperatures ( temp. on y-axis)
>
>
> Thank you in advance..
>
> Sundari


Re: [gmx-users] REMD temperature_space

2018-05-03 Thread Sundari
Hello,

I got the continuous trajectories by using demux. But now I am confused about
how to get the potential energy distribution of a single replica (and,
similarly, the time evolution of a replica, say replica_1, in temperature
space). I used the .edr files of the original production runs, but I am not
getting what I want. I am attaching the temp.xvg file of one replica (say the
T = 315 K replica).

Thank You..

On Thu, May 3, 2018 at 5:02 PM, Mark Abraham 
wrote:

> Hi,
>
> It sounds like you just want to use the original data, which you had before
> you ran the demux script.
>
> Mark
>
> On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:
>
> > Dear gromacs users,
> >
> > can anyone please suggest me that how we  get the time evolution of a
> > replica (say replica_1) in temperature space and time courses of
> potential
> > energy of each replica(  one way is md.edr file??)
> > As according to GROMACS tutorial, I used demux.pl script and got two
> files
> > replica_index.xvg and replica_temp.xvg.  But I want to analyse a single
> > replica trajectory in all temperatures ( temp. on y-axis)
> >
> >
> > Thank you in advance..
> >
> > Sundari

Re: [gmx-users] REMD temperature_space

2018-05-03 Thread Sundari
Hello Guys,

Could anyone kindly suggest something regarding my question?

On Thu, May 3, 2018 at 5:19 PM, Sundari  wrote:

> Hello,
>
> I got the continuous trajectories by using demux. But now I am confused in
> getting potential energy distribution of a single replica (similarly time
> evolution of a replica (say replica_1) in temperature space).
> I used edr file of original production data files, but I am not getting
> what I want. I am attaching the temp.xvg file of one replica (say T= 315K
> replica)
>
> Thank You..
>
> On Thu, May 3, 2018 at 5:02 PM, Mark Abraham 
> wrote:
>
>> Hi,
>>
>> It sounds like you just want to use the original data, which you had
>> before
>> you ran the demux script.
>>
>> Mark
>>
>> On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:
>>
>> > Dear gromacs users,
>> >
>> > can anyone please suggest me that how we  get the time evolution of a
>> > replica (say replica_1) in temperature space and time courses of
>> potential
>> > energy of each replica(  one way is md.edr file??)
>> > As according to GROMACS tutorial, I used demux.pl script and got two
>> files
>> > replica_index.xvg and replica_temp.xvg.  But I want to analyse a single
>> > replica trajectory in all temperatures ( temp. on y-axis)
>> >
>> >
>> > Thank you in advance..
>> >
>> > Sundari


Re: [gmx-users] REMD temperature_space

2018-05-04 Thread Mark Abraham
Hi,

Unfortunately nobody has implemented demux for the energy files. You could
consider contributing a modification of demux.pl :-)

Mark

On Fri, May 4, 2018 at 8:42 AM Sundari  wrote:

> Hello Guys,
>
> Kindly suggest me something about my doubt.
>
> On Thu, May 3, 2018 at 5:19 PM, Sundari  wrote:
>
> > Hello,
> >
> > I got the continuous trajectories by using demux. But now I am confused
> in
> > getting potential energy distribution of a single replica (similarly time
> > evolution of a replica (say replica_1) in temperature space).
> > I used edr file of original production data files, but I am not getting
> > what I want. I am attaching the temp.xvg file of one replica (say T= 315K
> > replica)
> >
> > Thank You..
> >
> > On Thu, May 3, 2018 at 5:02 PM, Mark Abraham 
> > wrote:
> >
> >> Hi,
> >>
> >> It sounds like you just want to use the original data, which you had
> >> before
> >> you ran the demux script.
> >>
> >> Mark
> >>
> >> On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:
> >>
> >> > Dear gromacs users,
> >> >
> >> > can anyone please suggest me that how we  get the time evolution of a
> >> > replica (say replica_1) in temperature space and time courses of
> >> potential
> >> > energy of each replica(  one way is md.edr file??)
> >> > As according to GROMACS tutorial, I used demux.pl script and got two
> >> files
> >> > replica_index.xvg and replica_temp.xvg.  But I want to analyse a
> single
> >> > replica trajectory in all temperatures ( temp. on y-axis)
> >> >
> >> >
> >> > Thank you in advance..
> >> >
> >> > Sundari


Re: [gmx-users] REMD tutorial

2014-08-21 Thread Mark Abraham
On Thu, Aug 21, 2014 at 8:01 AM, shahab shariati 
wrote:

> Dear Mark
>
> Before, in following address you said: Google knows about two GROMACS REMD
> tutorials.
>
>
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-January/086563.html
>
> Unfortunately, I could not find tutorials you mentioned.
>

You can find them here
https://www.google.se/search?q=gromacs+remd+tutorials&oq=gromacs+remd+tutorials


>
> 
>
> Also, in following address you said: I've added a section on
> replica-exchange to
> http://wiki.gromacs.org/index.php/Steps_to_Perform_a_Simulation
>
>
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2007-December/031188.html
> .
>
> Is this link active, now? I have no access to this link.
>

The webpage has been changed since then, see link from
http://www.gromacs.org/Documentation/How-tos/Steps_to_Perform_a_Simulation

Mark


>
> -
>
> I want to know Is there a tutorial for REMD like what is in
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/.
>
> Any help will highly appreciated.


Re: [gmx-users] REMD Plots

2019-01-08 Thread Joel Awuah
Hi Shan,
I am not quite sure whether you want to show the REMD mobility in
temperature space for the 30 replicas. If that is the case, you can
use the data in the replica_temp.xvg file to plot replica index vs
REMD step. The 1st column in the file corresponds to the REMD steps, and the
2nd to 31st columns correspond to the mobility of replicas 0 to 29.

Hope this helps!

cheers
Joel


On Wed, 9 Jan 2019 at 13:23, Shan Jayasinghe 
wrote:

> Dear Gromacs users,
>
> How do we plot a graph for temperature vs swap step number using a REMD
> simulation with 30 systems. I already generated the replica_temp.xvg and
> replica_index.xvg files using demux.pl script.
>
> Thank you.
>
> Best Regards
> Shan Jayasinghe


-- 
Joel Baffour Awuah
PhD Candidate
Institute for Frontier Materials

Deakin University
Waurn Ponds, 3126 VIC
Australia +61450070635


Re: [gmx-users] REMD Plots

2019-01-12 Thread Shan Jayasinghe
Hi Joel,

Thank you very much.



On Wed, Jan 9, 2019 at 3:27 PM Joel Awuah  wrote:

> Hi Shan,
> I am not quite sure if you want to generate an REMD simulation mobility in
> temperature space for the 30 replicas. If that be the case, then you can
> use the data in the replica_temperature.xvg file to plot replica index vs
> REMD steps. The 1st column in the file corresponds to the REMD steps and
> 2nd to 31st correspond to the mobility of replicas 0 to 29.
>
> Hope this  helps?
>
> cheers
> Joel
>
>
> On Wed, 9 Jan 2019 at 13:23, Shan Jayasinghe  >
> wrote:
>
> > Dear Gromacs users,
> >
> > How do we plot a graph for temperature vs swap step number using a REMD
> > simulation with 30 systems. I already generated the replica_temp.xvg and
> > replica_index.xvg files using demux.pl script.
> >
> > Thank you.
> >
> > Best Regards
> > Shan Jayasinghe
>
>
> --
> Joel Baffour Awuah
> PhD Candidate
> *Institute for Frontier Materials*
>
> *Deakin University*
> *Waurn Ponds, 3126 VIC*
> *Australia +61450070635*
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


-- 
Best Regards
Shan Jayasinghe


Re: [gmx-users] remd error

2019-07-25 Thread Szilárd Páll
This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
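
For reference, a corrected sketch of the script quoted below, assuming the
intent is one rank per core on 2 nodes with 20 cores each (-np 40, i.e. 5
ranks per replica); the NP line is also fixed to use command substitution
and wc -l:

#!/bin/bash
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
# One MPI rank per allocated core: 2 nodes x 20 cores = 40 ranks.
NP=$(cat $PBS_NODEFILE | wc -l)
mpirun -np $NP gmx_mpi mdrun -v -s remd.tpr -multi 8 -replex 1000 -deffnm remd_equil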
--
Szilárd

On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
<177cy500.bra...@nitk.edu.in> wrote:
>
> Hi,
>    I am running a REMD simulation in GROMACS 2016.5. After generating the
> multiple .tpr files, one in each directory, with the following command:
> for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
> topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
> I run: mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
> -reseed 175320 -deffnm remd_equil
> It is giving the following error
> There are not enough slots available in the system to satisfy the 40 slots
> that were requested by the application:
>   gmx_mpi
>
> Either request fewer slots for your application, or make more slots
> available
> for use.
> --
> --
> There are not enough slots available in the system to satisfy the 40 slots
> that were requested by the application:
>   gmx_mpi
>
> Either request fewer slots for your application, or make more slots
> available
> for use.
> --
> I do not understand the error. Any suggestion will be highly
> appreciated. The mdp file and the qsub.sh file are attached below.
>
> qsub.sh...
> #! /bin/bash
> #PBS -V
> #PBS -l nodes=2:ppn=20
> #PBS -l walltime=48:00:00
> #PBS -N mdrun-serial
> #PBS -j oe
> #PBS -o output.log
> #PBS -e error.log
> #cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
> cd $PBS_O_WORKDIR
> module load openmpi3.0.0
> module load gromacs-2016.5
> NP='cat $PBS_NODEFILE | wc -1'
> # mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
> -s nvt.tpr -deffnm nvt
> #/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
> 8 -replex 1000 -deffnm remd_out
> for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
> em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
>
> for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
> remd.tpr -multi 8 -replex 1000 -deffnm remd$i_out ; cd ..; done

Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Hi Szilard,
   Thank you for your reply. I corrected it as you said. As a trial
I took 8 nodes or 16 nodes... (-np 8) to test whether it runs or not.
I gave the following command to run REMD:
mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd
After giving the command, it produces the following error:
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
 I am not able to understand why this error appears.

On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll,  wrote:

> This is an MPI / job scheduler error: you are requesting 2 nodes with
> 20 processes per node (=40 total), but starting 80 ranks.
> --
> Szilárd

Re: [gmx-users] remd error

2019-07-29 Thread Justin Lemkul



On 7/29/19 7:55 AM, Bratin Kumar Das wrote:

Hi Szilard,
Thank you for your reply. I corrected it as you said. As a trial
I took 8 nodes or 16 nodes... (-np 8) to test whether it runs or not.
I gave the following command to run REMD:
mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd
After giving the command, it produces the following error:
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
  I am not able to understand why this error appears.


The error means the input file (remd0.tpr) does not exist in the working 
directory.
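
A quick pre-flight check is a sketch like the following, assuming the
remd0.tpr ... remd7.tpr naming used in this thread:

# Confirm every per-replica .tpr exists in the working directory
# before launching mdrun -multi.
for i in {0..7}; do
    [ -f remd${i}.tpr ] || echo "missing remd${i}.tpr"
done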


-Justin



--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Thank you


Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

We need to see your command line in order to have a chance of helping.

Mark

On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Dear all,
> I am running one REMD simulation with 65 replicas. I am using
> 130 cores for the simulation. I am getting the following error.
>
> Fatal error:
> Your choice of number of MPI ranks and amount of resources results in using
> 16
> OpenMP threads per rank, which is most likely inefficient. The optimum is
> usually between 1 and 6 threads per rank. If you want to run with this
> setup,
> specify the -ntomp option. But we suggest to change the number of MPI
> ranks.
>
> When I use the -ntomp option ... it throws another error:
>
> Fatal error:
> Setting the number of thread-MPI ranks is only supported with thread-MPI
> and
> GROMACS was compiled without thread-MPI
>
>
> while GROMACS is compiled with thread-MPI...
>
> Please help me in this regard.


Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Respected Mark Abraham,
  The command line and the job
submission script are given below:

#!/bin/bash
#SBATCH -n 130 # Number of cores
#SBATCH -N 5   # no of nodes
#SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
#SBATCH -p cpu # Partition to submit to
#SBATCH -o hostname_%j.out # File to which STDOUT will be written
#SBATCH -e hostname_%j.err # File to which STDERR will be written
#loading gromacs
module load gromacs/2018.4
#specifying work_dir
WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1


mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
-deffnm remd_nvt -cpi remd_nvt.cpt -append
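
As an aside, bash brace expansion would compress that directory list; an
equivalent sketch:

# equil{0..64} expands to equil0 equil1 ... equil64 in bash.
mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr \
       -multidir equil{0..64} -deffnm remd_nvt -cpi remd_nvt.cpt -append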



Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Respected Mark Abraham,
>   The command-line and the job
> submission script is given below
>
> #!/bin/bash
> #SBATCH -n 130 # Number of cores
>

Per the docs, this is a guide to sbatch about how many (MPI) tasks you want
to run. It's not a core request.

#SBATCH -N 5   # no of nodes
>

This requires a certain number of nodes. So to implement both of your
instructions, MPI has to start 26 tasks per node. That would make sense if
your nodes had a multiple of 26 cores. My guess is that your nodes have a
multiple of 16 cores, based on the error message. MPI saw that you asked to
allocate more tasks than there are available cores, and decided not to set a
number of OpenMP threads per MPI task, so that fell back on a default,
which produced 16, which GROMACS can see doesn't make sense.

If you want to use -N and -n, then you need to make a choice that makes
sense for the number of cores per node. Easier might be to use -n 130 and
-c 2 to express what I assume is your intent to have 2 cores per MPI task.
Now slurm+MPI can pass that message along properly to OpenMP.
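
A sketch of a header along those lines, with illustrative values and
assuming 2 cores per MPI task is indeed the intent:

#!/bin/bash
#SBATCH -n 130        # 130 MPI tasks: 2 ranks per replica for 65 replicas
#SBATCH -c 2          # 2 cores per task; slurm passes this on to MPI/OpenMP
#SBATCH -t 0-20:00:00
#SBATCH -p cpu
module load gromacs/2018.4
# 2 OpenMP threads per rank matches the 2 cores per task requested above.
mpirun -np 130 gmx_mpi_d mdrun -v -ntomp 2 -s remd_nvt_next2.tpr \
       -multidir equil{0..64} -deffnm remd_nvt -cpi remd_nvt.cpt -append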

Your other message about -ntomp can only have come from running gmx_mpi_d
-ntmpi, so just a typo we don't need to worry about further.

Mark


Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Thank you for your email sir.


[gmx-users] REMD stall out

2020-02-11 Thread Daniel Burns
Hi,

I continue to have trouble getting an REMD job to run. It never gets to
the point of writing trajectory files, but it never gives any error
either.

I have switched from a large TREMD with 72 replicas to the Plumed
Hamiltonian method with only 6 replicas.  Everything is now on one node and
each replica has 6 cores.  I've turned off the dynamic load balancing on
this attempt per the recommendation from the Plumed site.

Any ideas on how to troubleshoot?

Thank you,

Dan


[gmx-users] REMD on GPU

2013-11-28 Thread James Starlight
Dear Gromacs users!

I'd like to perform a replica exchange simulation.

For this I've made a bash script which creates n folders like
replica-298
replica-312
replica-323
replica-334
...
replica-N
with all files needed for the simulation, consisting of the specified mdp
file with a different ref_t value in each folder.
Now I'd like to launch this simulation using
 -multidir replica-298 replica-312 replica-323 replica-334

Unfortunately I've obtained

Fatal error:
mdrun -multi is not supported with the thread library. Please compile
GROMACS with MPI support

Is it possible to add MPI support to the existing GROMACS 4.6 installation
built from source (without removing the installed files)?

Is it possible to run replica simulations in GPU-supported mode?

Thanks for the help,

James
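
For the record, adding MPI support means reconfiguring with cmake in a
fresh build tree; a sketch, with hypothetical paths, assuming an MPI
library such as OpenMPI is already installed:

# Build an MPI-enabled mdrun without touching the existing install.
mkdir -p gromacs-4.6/build-mpi && cd gromacs-4.6/build-mpi
cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-mpi
make -j 8 && make install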


[gmx-users] REMD Melting Curve

2014-03-03 Thread atanu_das
Dear Gromacs users,
   I have performed a REMD simulation with 12 replicas
exponentially spaced between 260-420 K. However, when I applied the
g_kinetics program to generate the melting curve, I got 161 values from
260 K to 420 K, with the folded fraction reported at each temperature. My
question is: if I am using 12 replicas exponentially spaced in the chosen
temperature range, how could I get 161 values? Am I doing something wrong?
Does g_kinetics in GROMACS use a smoothing function to generate the
intermediate temperature and folded-fraction values?
Please suggest/advise.
Atanu  



Re: [gmx-users] REMD exchange probabilities

2015-03-09 Thread Mark Abraham
On Sun, Mar 8, 2015 at 7:25 PM, Neha Gandhi  wrote:

> Dear list,
>
> Using an exchange probability of 0.25 and temperature range 293-370  K, I
> calculated number of replicas using the server. However, when I did first
> run and tried exchanging replicas every 500 steps (1 ps), I don't think the
> exchange probabilities make sense in particular replicas 15 and 16. Replica
> 15 has a low exchange ratio of 0.12 while replica 16 has a high exchange
> ratio of 0.55.
>

This can be real. See literature from Nadler and Hansmann. If there's some
temperature-dependent change in the range of available configurations (e.g.
a phase transition), then you can have configuration(s) whose energy is
such that they are accessible at one temperature and not at adjacent
temperatures. Such replicas won't cross that temperature barrier until they
have found a region of phase space that permits it. Such an ergodicity
bottleneck suggests adding other replicas around that temperature, because
you need flow over replica space to achieve the desired enhanced sampling.

Repl  average probabilities:
> Repl 0123456789   10   11   12
> 13   14   15   16   17   18   19   20   21   22   23   24   25   26   27
> 28   29   30   31   32   33   34   35   36   37   38   39   40   41   42
> 43   44   45   46   47
> Repl  .28  .28  .28  .28  .29  .28  .29  .29  .28  .29  .28  .28  .29
> .29  .29  .12  .55  .29  .29  .30  .30  .29  .29  .26  .32  .31  .30  .30
> .30  .30  .30  .31  .31  .31  .31  .31  .31  .31  .31  .31  .31  .31  .32
> .32  .32  .32  .33
> Repl  number of exchanges:
> Repl 0123456789   10   11   12
> 13   14   15   16   17   18   19   20   21   22   23   24   25   26   27
> 28   29   30   31   32   33   34   35   36   37   38   39   40   41   42
> 43   44   45   46   47
> Repl 2901 2954 2873 3017 3038 2910 3009 2993 2934 3002 2981 2999 2927
> 3038 3059 1229 5757 3056 3100 3136 3054 3053 3109 2743  3166 3097 3185
> 3161 3189 3133 3226 3261 3242 3229 3205 3249 3227 3221 3222 3326 3303 3309
> 3320 3373 3346 3474
> Repl  average number of exchanges:
> Repl 0123456789   10   11   12
> 13   14   15   16   17   18   19   20   21   22   23   24   25   26   27
> 28   29   30   31   32   33   34   35   36   37   38   39   40   41   42
> 43   44   45   46   47
> Repl  .28  .28  .27  .29  .29  .28  .29  .29  .28  .29  .29  .29  .28
> .29  .29  .12  .55  .29  .30  .30  .29  .29  .30  .26  .32  .30  .30  .30
> .30  .31  .30  .31  .31  .31  .31  .31  .31  .31  .31  .31  .32  .32  .32
> .32  .32  .32  .33
>
>
> Below are the temperatures I have used. How do I manually edit temperatures
> to get average exchange probabilities between 0.2-0.3?
>

The same way you set up the original set of temperatures - make an .mdp
that has a temperature you want, equilibrate, and then insert it into the
set of replicas before a new run.

Your existing set of temperatures has one spacing of exactly one degree,
and the rest seem to be exponential, so that looks funny.
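
For reference, a geometric (exponentially spaced) ladder, T_i =
Tmin*(Tmax/Tmin)^(i/(N-1)), can be printed with a sketch like this, using
the 293-370 K range and 48 replicas from this thread:

# Print a geometric temperature ladder for 48 replicas.
awk 'BEGIN { Tmin=293; Tmax=370; N=48;
             for (i=0; i<N; i++) printf "%.2f\n", Tmin*(Tmax/Tmin)^(i/(N-1)) }'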

> ref_t = 293    293    ; reference temperature, one for each group, in K
> ref_t = 294.51 294.51 ; reference temperature, one for each group, in K
> ref_t = 296.03 296.03 ; reference temperature, one for each group, in K
> ref_t = 297.56 297.56 ; reference temperature, one for each group, in K
> ref_t = 299.09 299.09 ; reference temperature, one for each group, in K
> ref_t = 300.63 300.63 ; reference temperature, one for each group, in K
> ref_t = 302.18 302.18 ; reference temperature, one for each group, in K
> ref_t = 303.73 303.73 ; reference temperature, one for each group, in K
> ref_t = 305.29 305.29 ; reference temperature, one for each group, in K
> ref_t = 306.86 306.86 ; reference temperature, one for each group, in K
> ref_t = 308.43 308.43 ; reference temperature, one for each group, in K
> ref_t = 310.01 310.01 ; reference temperature, one for each group, in K
> ref_t = 311.60 311.60 ; reference temperature, one for each group, in K
> ref_t = 313.19 313.19 ; reference temperature, one for each group, in K
> ref_t = 314.79 314.79 ; reference temperature, one for each group, in K
> ref_t = 316.40 316.40 ; reference temperature, one for each group, in K
> ref_t = 318.63 318.63 ; reference temperature, one for each group, in K
> ref_t = 319.63 319.63 ; reference temperature, one for each group, in K
> ref_t = 321.26 321.26 ; reference temperature, one for each group, in K
> ref_t = 322.89 322.89 ; reference temperature, one for each group, in K
> ref_t = 324.52 324.52 ; reference temperature, one for each group, in K

Re: [gmx-users] REMD mdrun_mpi error

2015-06-23 Thread Mark Abraham
Hi,

Do your individual replica .tpr files run correctly on their own?

Mark

On Mon, Jun 22, 2015 at 3:35 PM Nawel Mele  wrote:

> Dear gromacs users,
>
> I am trying to simulate a ligand using the REMD method in explicit
> solvent with the CHARMM force field. When I try to equilibrate my system
> I get this error:
>
> Double sids (0, 1) for atom 26
> Double sids (0, 1) for atom 27
> Double sids (0, 1) for atom 28
> Double sids (0, 1) for atom 29
> Double sids (0, 1) for atom 30
> Double sids (0, 1) for atom 31
> Double sids (0, 1) for atom 32
> Double sids (0, 1) for atom 33
> Double sids (0, 1) for atom 34
> Double sids (0, 1) for atom 35
> Double sids (0, 1) for atom 36
> Double sids (0, 1) for atom 37
> Double sids (0, 1) for atom 38
> Double sids (0, 1) for atom 39
> Double sids (0, 1) for atom 40
>
> ---
> Program mdrun_mpi, VERSION 4.6.5
> Source code file:
> /local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
> line: 99
>
> Fatal error:
> Double entries in block structure. Item 53 is in blocks 1 and 0
>  Cannot make an unambiguous inverse block.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
>
>
> My mdp input file looks like this:
>
> title                = CHARMM compound NVT equilibration
> define               = -DPOSRES   ; position restrain the protein
> ; Run parameters
> integrator           = sd         ; leap-frog stochastic dynamics integrator
> nsteps               = 100   ; 2 * 100 = 100 ps
> dt                   = 0.002      ; 2 fs
> ; Output control
> nstxout              = 500        ; save coordinates every 0.2 ps
> nstvout              = 10         ; save velocities every 0.2 ps
> nstenergy            = 500        ; save energies every 0.2 ps
> nstlog               = 500        ; update log file every 0.2 ps
> ; Bond parameters
> continuation         = no         ; first dynamics run
> constraint_algorithm = SHAKE      ; holonomic constraints
> constraints          = h-bonds    ; all bonds (even heavy atom-H bonds) constrained
> shake-tol            = 0.1        ; relative tolerance for SHAKE
> ; Neighborsearching
> ns_type              = grid       ; search neighboring grid cells
> nstlist              = 5          ; 10 fs
> rlist                = 1.0        ; short-range neighborlist cutoff (in nm)
> rcoulomb             = 1.0        ; short-range electrostatic cutoff (in nm)
> rvdw                 = 1.0        ; short-range van der Waals cutoff (in nm)
> ; Electrostatics
> coulombtype          = PME        ; Particle Mesh Ewald for long-range electrostatics
> pme_order            = 4          ; interpolation order for PME; 4 equals cubic interpolation
> fourierspacing       = 0.16       ; grid spacing for FFT
> ; Temperature coupling is on
> ;tcoupl              = V-rescale  ; modified Berendsen thermostat
> tc-grps              = LIG SOL    ; two coupling groups - more accurate
> tau_t                = 1.0 1.0    ; time constant, in ps
> ref_t                = X X        ; reference temperature, one for each group, in K
> ; Langevin dynamics
> bd-fric              = 0          ; Brownian dynamics friction coefficient
> ld-seed              = -1         ; pseudo-random seed is used
> ; Pressure coupling is off
> pcoupl               = no         ; no pressure coupling in NVT
> ; Periodic boundary conditions
> pbc                  = xyz        ; 3-D PBC
> ; Dispersion correction
> DispCorr             = EnerPres   ; account for cut-off vdW scheme
> ; Velocity generation
> gen_vel              = yes        ; assign velocities from Maxwell distribution
> gen_temp             = 0.0        ; temperature for Maxwell distribution
> gen_seed             = -1         ; generate a random seed
>
>
> And my input file to run it in parallel looks like this:
>
> #!/bin/bash
> #PBS -l nodes=3:ppn=16
> #PBS -l walltime=00:10:00
> #PBS -o zzz.qsub.out
> #PBS -e zzz.qsub.err
> module load openmpi
> module load gromacs/4.6.5
> mpirun -np 48 mdrun_mpi -s eq_.tpr -multi 48 -replex 10 >& faillog-X.log
>
>
> Has anyone seen this issue before?
>
> Many thanks,
> --
>
> Nawel Mele, PhD Research Student
>
> Jonathan Essex Group, School of Chemistry
>
> University of Southampton,  Highfield
>
> Southampton, SO17 1BJ
>

Re: [gmx-users] REMD mdrun_mpi error

2015-06-23 Thread Nawel Mele
Hi Mark,

I tried to run an individual tpr file and it crashed:

Double sids (0, 1) for atom 26
Double sids (0, 1) for atom 27
Double sids (0, 1) for atom 28
Double sids (0, 1) for atom 29
Double sids (0, 1) for atom 30
Double sids (0, 1) for atom 31
Double sids (0, 1) for atom 32
Double sids (0, 1) for atom 33
Double sids (0, 1) for atom 34
Double sids (0, 1) for atom 35
Double sids (0, 1) for atom 36
Double sids (0, 1) for atom 37
Double sids (0, 1) for atom 38
Double sids (0, 1) for atom 39
Double sids (0, 1) for atom 40

---
Program mdrun, VERSION 4.6.5
Source code file:
/local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
line: 99

Fatal error:
Double entries in block structure. Item 53 is in blocks 1 and 0
 Cannot make an unambiguous inverse block.


To create my tpr files I used a bash script like this:

#!/bin/bash -f
nrep=`wc temperatures.dat | awk '{print $1}'`
echo $nrep
count=0
count2=-1
for TEMP in `cat temperatures.dat`
do
   let count2+=1
   REP=`printf "%03d" $count2`
   REPBIS=`printf "%d" $count2`
   echo "TEMPERATURE: $TEMP K ==> FILE: nvt_$REP.mdp"
   sed "s/X/$TEMP/g" nvt.mdp > nvt_$REP.mdp
   grompp -f nvt_$REP.mdp -c min.gro -p topol.top -o eq_$REPBIS.tpr -maxwarn 1
   rm -f temp
done
echo "N REPLICAS = $nrep"
echo "Done."

Nawel



[gmx-users] REMD with different structures

2015-06-23 Thread ruchi lohia
Hi


I am trying to do NVT REMD simulations with GROMACS. I have 60 replicas,
and each of them has a different starting structure. The starting
structures have the same number of atoms but slightly different volumes
and pressures. I was able to run these simulations, but I want to know
whether having different volumes and pressures affects the exchange
probability, and if it does, whether this is accounted for in the GROMACS
REMD implementation. Please suggest a method to verify it.

-- 
Regards

Ruchi Lohia
Graduate Student


[gmx-users] REMD system blowing up

2015-09-28 Thread NISHA Prakash
Hi all,

I would like to know if there is a way to figure out which of the replicas
is exploding during an REMD simulation.
I am running REMD with 54 replicas, and the crash produces just one pair
of step14495b.pdb and step14495c.pdb files.
Does this mean there is just one replica that is exploding?
Does this also have to do with the temperature?
The equilibration was carried out for 600 ps, and the individual replicas
have no issues.

Awaiting response.

Thanks!

Nisha


Re: [gmx-users] REMD of IDPs

2016-04-08 Thread João Henriques
Dear Yanhua,

To my knowledge (prior to GROMACS 5.X at least), there are no GROMACS
tools able to turn a sequence into a PDB. The user must take care of that
pre-processing on his/her own. I work with IDPs quite a lot, so what I can
tell you is what I usually do. I take my fasta sequence and use PyMOL to
construct the PDB. Then I'm able to feed the PDB to pdb2gmx.

*I'm sure there are a million different ways of doing this, given that
there are so many different protein modelling tools out there.*

Here's one example using Histatin 5.

- On PyMOL's command line type the following (without the quotation marks):
"for aa in "AKRHHGYKRKFH": cmd._alt(string.lower(aa))"

- This builds a fully stretched Histatin 5 3D model which can be exported
as PDB.

- Make sure to use "-ignh" on pdb2gmx, as the resulting hydrogen atom names
are usually incompatible with the force fields I routinely use.

- It's also a good idea to use "-renum" on pdb2gmx as for some reason PyMOL
exports the PDB with residue numberings starting from no. 2.
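
Put together, the pdb2gmx step might look like this sketch (file names
hypothetical):

# Feed the PyMOL-exported structure to pdb2gmx, ignoring PyMOL's hydrogen
# names (-ignh) and renumbering residues from 1 (-renum).
gmx pdb2gmx -f histatin5.pdb -o histatin5.gro -p topol.top -ignh -renum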

Cheers,
João


On Fri, Apr 8, 2016 at 4:14 AM, YanhuaOuyang <15901283...@163.com> wrote:

> Hi, I have a sequence of an intrinsically disordered protein, I have no
> idea how to start my REMD with gromacs. e.g. how to convert my sequence
> into a pdb file

Re: [gmx-users] REMD of IDPs

2016-04-08 Thread Smith, Micholas D.
Dear Yanhua,

Converting a sequence into a structure is itself an "open" problem in
computational biology/biophysics. There are ways to generate potential
structures if you also happen to have some restraints from NMR or other
experiments (small-angle scattering or CD spectra) noted in the literature,
but getting to the "native" fold is very challenging. One program that
tries to address the sequence-to-structure problem is Rosetta
(http://robetta.bakerlab.org/).

If you have a short IDP fragment (less than 20 residues), one thing you can
do is use something like Schrodinger's Maestro program (it's free from their
webpage, www.schrodinger.com) and use the molecule builder to "grow" the
chain as a random coil (random phi-psi placement), save the PDB from it, and
then run MD at high temperature to relax the structure into a potential
starting structure. If it is longer, the IDP may have small structural
segments (the chain is dominated by disorder but may have short-lived,
meta-stable secondary structure regions), in which case you can either try
to build the molecule with a corresponding secondary structure distribution
(using Maestro) or try using Rosetta and refine with energy minimization.

Good Luck! 

===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics



Re: [gmx-users] REMD of IDPs

2016-04-08 Thread João Henriques
One small remark to Micholas' email​:

- Make sure the simulation box is big enough to allow the IDP to fully
stretch without interacting with its periodic image(s). This is non-trivial
if you build your system from a random coil. That's why I start from a
fully stretched conformation instead of a more representative conformation
of the system. Much easier to control and the time it takes to get to a
"meaningful" conformation is minimal.

/J



On Fri, Apr 8, 2016 at 2:10 PM, Smith, Micholas D.  wrote:

> Dear Yanhua,
>
> Converting a sequence into a structure is itself an "open" problem in
> computational biology/biophysics. There are ways to generate potential
> structures if you also happen to have some restraints from NMR or other
> experiments (small-angle scattering or CD-Spectra) noted in the literature,
> but getting to the "native" fold is very challenging. One program that
> tries to address the sequence to structure problem is Rosetta (
> http://robetta.bakerlab.org/ ).

Re: [gmx-users] REMD of IDPs

2016-04-08 Thread Smith, Micholas D.
Very good point from João. Always remember to check that your box length is big 
enough!

===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics



[gmx-users] REMD ensemble of states

2016-11-07 Thread Abramyan, Tigran
Hi,


I conducted REMD, and extracted the trajectories via
trjcat -f *.trr -demux replica_index.xvg
And now I was wondering which *.xtc file is the ensemble of states at the 
baseline replica (lowest temperature replica). Intuitively my guess is that the 
numbers in the names of *_trajout.xtc files correspond to the replica numbers 
starting from the baseline, and hence 0_trajout.xtc is the ensemble of states 
at the baseline replica, but I may be wrong.
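For reference, my understanding of the two views involved (a sketch with
illustrative file names, not an authoritative answer): because exchanges swap
coordinates between replicas, each raw per-replica trajectory stays at a fixed
temperature, while the demuxed *_trajout.xtc files are coordinate-continuous
trajectories that follow one walker up and down the temperature ladder.

# reconstruct the exchange history from the first replica's log
demux.pl md0.log          # writes replica_index.xvg and replica_temp.xvg
# coordinate-continuous ("walker") trajectories, numbered by starting replica
trjcat -f *.trr -demux replica_index.xvg
# the fixed-temperature baseline ensemble is the baseline replica's own raw
# trajectory, not 0_trajout.xtc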


Please suggest.


Thank you,

Tigran


--
Tigran M. Abramyan, Ph.D.
Postdoctoral Fellow, Computational Biophysics & Molecular Design
Center for Integrative Chemical Biology and Drug Discovery
Eshelman School of Pharmacy
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-7363



Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Mark Abraham
Hi,

Did you try to debug your setup by running a normal single-replica
simulation first?
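(For a 4.5.x build, a minimal single-replica test could look like the sketch
below; file names are placeholders.)

grompp -f equil.mdp -c prot.gro -p topol.top -o single.tpr
mdrun -v -deffnm single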

Mark

On Fri, Jan 5, 2018 at 12:12 PM Urszula Uciechowska <
urszula.uciechow...@biotech.ug.edu.pl> wrote:

>
>
> Dear gromacs users,
>
> I am trying to run REMD simulations using 4.5.5 version (implicit
> solvent). The MD procedure:
>
> pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.
>
> The input for minimization step:
>
> ; Run control parameters
> integrator   = cg
> nsteps   = 800
> vdwtype  = cut-off
> coulombtype  = cut-off
> ;cutoff-scheme= group
> pbc  = no
> periodic_molecules   = no
> nstlist  = 10
> ns_type  = grid
> rlist= 1.0
> rcoulomb = 1.6
> rvdw = 1.6
> comm-mode= Angular
> nstcomm  = 10
> ;
> ;Energy minimizing stuff
> ;
> emtol= 100.0
> nstcgsteep   = 2
> emstep   = 0.01
> ;
> ;Relative dielectric constant for the medium and the reaction field
> epsilon_r= 1
> epsilon_rf   = 1
> ;
> ; Implicit solvent
> ;
> implicit_solvent = GBSA
> gb_algorithm = OBC  ;Still  HCT   OBC
> nstgbradii   = 1.0
> rgbradii = 1.0  ; [nm] Cut-off for the calculation of
> the Born radii. Currently must be equal to rlist
> gb_epsilon_solvent   = 80   ; Dielectric constant for the implicit
> solvent
> gb_saltconc  = 0; Salt concentration for implicit
> solvent models, currently not used
> sa_algorithm = Ace-approximation
> sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2) for
> the SA (nonpolar surface) part of GBSA. The value -1 will set default
> value for Still/HCT/OBC GB-models.
>
> and it finished without errors.
>
> The problem is with equilibration step. The input file that I used is:
>
> ; MD CONTROL OPTIONS
> integrator  = md
> dt  = 0.002
> nsteps  = 5000000 ; 10 ns
> init_step   = 0; For exact run continuation or
> redoing part of a run
> comm-mode   = Angular  ; mode for center of mass motion
> removal
> nstcomm = 10   ; number of steps for center of
> mass motion removal
>
> ; OUTPUT CONTROL OPTIONS
> ; Output frequency for coords (x), velocities (v) and forces (f)
> nstxout  = 1000
> nstvout  = 1000
> nstfout  = 1000
>
> ; Output frequency for energies to log file and energy file
> nstlog   = 1000
> nstcalcenergy= 10
> nstenergy= 1000
>
> ; Neighbor searching and Electrostatics
> vdwtype  = cut-off
> coulombtype  = cut-off
> ;cutoff-scheme= group
> pbc  = no
> periodic_molecules   = no
> nstlist  = 5
> ns_type  = grid
> rlist= 1.0
> rcoulomb = 1.6
> rvdw = 1.0
> ; Selection of energy groups
> energygrps   = fixed not_fixed
> freezegrps   = fixed not_fixed
> freezedim= Y Y Y N N N
>
> ;Relative dielectric constant for the medium and the reaction field
> epsilon_r= 1
> epsilon_rf   = 1
>
> ; Temperature coupling
> tcoupl   = v-rescale
> tc_grps  = fixed not_fixed
> tau_t= 0.01 0.01
> ;nst_couple   = 5
> ref_t= 300.00 300.00
>
> ; Pressure coupling
> pcoupl   = no
> ;pcoupletype  = isotropic
> tau_p= 1.0
> ;compressibility  = 4.5e-5
> ref_p= 1.0
> gen_vel  = yes
> gen_temp = 300.00 300.00
> gen_seed = -1
> constraints  = h-bonds
>
>
> ; Implicit solvent
> implicit_solvent = GBSA
> gb_algorithm = Still ; HCT  ; OBC
> nstgbradii   = 1.0
> rgbradii = 1.0  ; [nm] Cut-off for the calculation
> of the Born radii. Currently must be equal to rlist
> gb_epsilon_solvent   = 80   ; Dielectric constant for the
> implicit solvent
> gb_saltconc  = 0; Salt concentration for implicit
> solvent models, currently not used
> sa_algorithm = Ace-approximation
> sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2)
> for the SA (nonpolar surface) part of GBSA. The value -1 will set default
> value for Still/HCT/OBC GB-models.
>
>
> mdrun -v -multidir eq_[12345678]
>
> The error that I obtained is:
>
> Fatal error:
> A charge group moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
>
> I do n

Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Urszula Uciechowska

Hi,

I just run a normal single-replica. Now the error that I have is:

Program mdrun_mpi, VERSION 4.5.5
Source code file: domdec.c, line: 3266

Software inconsistency error:
Inconsistent DD boundary staggering limits!
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


Any suggestions? What can I do to run it?

Thanks
Ula

> Hi,
>
> Did you try to debug your setup by running a normal single-replica
> simulation first?
>
> Mark

Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Qinghua Liao

Hello,

From my experience, domain decomposition is not compatible with implicit
solvent; you have to switch to particle decomposition for the simulations.
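(In the 4.5.x series that is the -pd switch of mdrun; a minimal sketch,
reusing the launch line from the original post:)

# particle decomposition instead of domain decomposition (GROMACS 4.5.x)
mpirun -np 8 mdrun_mpi -pd -v -multidir eq_[12345678]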


All the best,
Qinghua

On 01/05/2018 12:40 PM, Urszula Uciechowska wrote:

Hi,

I just run a normal single-replica. Now the error that I have is:

Program mdrun_mpi, VERSION 4.5.5
Source code file: domdec.c, line: 3266

Software inconsistency error:
Inconsistent DD boundary staggering limits!
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


Any suggestions? What can I do to run it?

Thanks
Ula



Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Urszula Uciechowska

Hi,

Should I run it using mdrun_mpi?

best
Urszula

> Hello,
>
>  From my experience, domain decomposition is not compatible with implicit
> solvent; you have to switch to particle decomposition for the
> simulations.
>
>
> All the best,
> Qinghua

Re: [gmx-users] REMD DLB bug

2018-02-12 Thread Szilárd Páll
Hi,

The fix will be released in the upcoming 2016.5 patch release (which you can
see in the Redmine issue page's "Target version" field, BTW).

Cheers,
--
Szilárd


On Mon, Feb 12, 2018 at 2:49 PM, Akshay  wrote:
> Hello All,
>
> I was running REMD simulations on Gromacs 2016.1 when my simulation crashed
> with the error
>
> Assertion failed:
> Condition: comm->cycl_n[ddCyclStep] > 0
> When we turned on DLB, we should have measured cycles
>
> I saw that there was a bug #2298 reported about this recently at
> https://redmine.gromacs.org/issues/2298. I wanted to know if this fix has
> been implemented in the latest 2018 or 2016.4 versions?
>
> Thanks,
> Akshay

[gmx-users] REMD - subsystems not compatible

2019-04-24 Thread Per Larsson
Hi gmx-users, 

I am trying to start a replica exchange simulation of a model peptide in water, 
but can’t get it to run properly. 
I have limited experience with REMD, so I thought I'd ask here about all the 
rookie mistakes it is possible to make.
I have also seen the earlier discussions about the error message, but those 
seemed to be related to restarts and/or continuations, rather than not being 
able to run at all. 

My gromacs version is 2016 (for compatibility reasons), and the exact error 
message I get is this:

---
Program: gmx mdrun, version 2016.5
Source file: src/gromacs/mdlib/main.cpp (line 115)
MPI rank:32 (out of 62)

Fatal error:
The 62 subsystems are not compatible

I followed Mark's tutorial on the gromacs website and have a small bash script 
that loops over all desired temperatures, run equilibration etc. 
I then start the simulation like this:

$MPIRUN $GMX mdrun $ntmpi -ntomp $ntomp -deffnm sim -replex 500 -multidir 
~pfs/ferring/gnrh_aa/dipep_remd/sim* 

What could be the source of this incompatibility?
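(One way to hunt for the mismatch, assuming per-replica sim.tpr files from the
loop above: gmx check can compare two run inputs directly.)

# report differences (atom counts, parameters, ...) between two replicas
gmx check -s1 sim1/sim.tpr -s2 sim2/sim.tpr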

Many thanks
/Per



[gmx-users] REMD analysis of trajectories

2017-05-31 Thread YanhuaOuyang
Hi,
   I have run a 100 ns REMD of a protein with 20 replicas (i.e. remd1.xtc, 
remd2.xtc, ..., remd20.xtc). I want to analyze a trajectory at a specific 
temperature, such as the experimental temperature of 298 K, rather than a 
coordinate-continuous trajectory. I know that GROMACS exchanges coordinates 
during an REMD run. Do I just analyze remd2.xtc of replica 2 (T = 298 K) if I 
want a trajectory at 298 K, or do I need to do something else to the 
trajectories to get a trajectory at a specific temperature (i.e. 298 K)?

Best regards,
Ouyang


Re: [gmx-users] REMD stall out

2020-02-17 Thread Szilárd Páll
Hi,

If I understand correctly, your jobs stall; what is in the log output? What
about the console? Does this happen without PLUMED?

--
Szilárd


On Tue, Feb 11, 2020 at 7:56 PM Daniel Burns  wrote:

> Hi,
>
> I continue to have trouble getting an REMD job to run.  It never makes it
> to the point that it generates trajectory files but it never gives any
> error either.
>
> I have switched from a large TREMD with 72 replicas to the Plumed
> Hamiltonian method with only 6 replicas.  Everything is now on one node and
> each replica has 6 cores.  I've turned off the dynamic load balancing on
> this attempt per the recommendation from the Plumed site.
>
> Any ideas on how to troubleshoot?
>
> Thank you,
>
> Dan

Re: [gmx-users] REMD stall out

2020-02-17 Thread Daniel Burns
HI Szilard,

I've deleted all my output, but all the writing to the log and console stops
around the step noting the domain decomposition (or some other preliminary
task). It is the same with or without PLUMED; the TREMD with GROMACS alone
was the first thing to present this issue.

I've discovered that if each replica is assigned its own node, the
simulations proceed.  If I try to run several replicas on each node
(divided evenly), the simulations stall out before any trajectories get
written.

I have tried many different -np and -ntomp options as well as several slurm
job submission scripts with node/ thread configurations but multiple
simulations per node will not work.  I need to be able to run several
replicas on the same node to get enough data since it's hard to get more
than 8 nodes (and as a result, replicas).
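(For concreteness, the kind of layout I have been attempting, sketched with
placeholder directory names; 6 replicas times 6 OpenMP threads, assuming a
36-core node:)

mpirun -np 6 gmx_mpi mdrun -multidir rep0 rep1 rep2 rep3 rep4 rep5 \
       -ntomp 6 -pin on -replex 500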

Thanks for your reply.

-Dan



Re: [gmx-users] REMD stall out

2020-02-17 Thread Szilárd Páll
Hi Dan,

What you describe is not expected behavior, and it is something we should
look into.

What GROMACS version were you using? One thing that may help in diagnosing
the issue: try disabling replica exchange and running -multidir that way.
Does the simulation proceed?
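(I.e. something like the following sketch, keeping the same directories but
simply dropping -replex; directory names are placeholders:)

mpirun -np 6 gmx_mpi mdrun -multidir rep0 rep1 rep2 rep3 rep4 rep5 -ntomp 6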

Can you please open an issue on redmine.gromacs.org and upload the input
files necessary to reproduce it, along with logs of the runs that showed
the issue?

Cheers,
--
Szilárd



Re: [gmx-users] REMD stall out

2020-02-17 Thread Mark Abraham
Hi,

That could be caused by configuration of the parallel file system or MPI on
your cluster. If only one file descriptor is available per node to an MPI
job, then your symptoms are explained. Some kinds of compute jobs follow
such a model, so maybe someone optimized something for that.

Mark



Re: [gmx-users] REMD stall out

2020-02-17 Thread Daniel Burns
Thanks Mark and Szilard,

I forwarded Mark's suggestion to IT. I'll see what they have to say, and
then I'll try the simulation again and open an issue on Redmine.

Thank you,

Dan



Re: [gmx-users] REMD stall out

2020-02-20 Thread Daniel Burns
Hi again,

It seems including our openmp module was responsible for the issue the
whole time. When I submit the job loading only pmix and gromacs, replica
exchange proceeds.

Thank you,

Dan



Re: [gmx-users] REMD stall out

2020-02-21 Thread Daniel Burns
This was not actually the solution; I wanted to follow up in case someone
else is experiencing this problem. We are reinstalling the openmp version.



Re: [gmx-users] REMD on GPU

2013-11-28 Thread Mark Abraham
On Thu, Nov 28, 2013 at 3:01 PM, James Starlight wrote:

> Dear Gromacs users!
>
> I'd like to perform replica exchange simulation
>
> For this I've made bash script which create n folders like
> replica-298
> replica-312
> replica-323
> replica-334
> ...
> replica-N
>  with all files needed for simulation considted of specified mdp file with
> different ref_t value
> No I'd like to launch this simulation using
>  -multidir replica-298 replica-312 replica-323 replica-334
>
> Unfortunately I've obtained:
>
> Fatal error:
> mdrun -multi is not supported with the thread library. Please compile
> GROMACS with MPI support
>
> Is it possible to add MPI support to the existing Gromacs-4.6 built from
> source (without removing the installed files)?
>

No, please compile GROMACS with MPI support, per the installation
instructions.
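(With the 4.6 CMake build that is one configure option; a minimal sketch,
installing to a separate prefix so the existing thread-MPI build stays
untouched:)

cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6-mpi
make -j 8 && make install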


> Is it possible to run replica simulations in GPU-supported mode?
>

Yes, but you need to compile for both GPU and MPI. And see
http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Using_multi-simulations_and_GPUs
for
mdrun tips.
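(A sketch of such a launch for the four replica directories above, assuming a
single node with two GPUs; the -gpu_id mapping is illustrative:)

mpirun -np 4 mdrun_mpi -multidir replica-298 replica-312 replica-323 replica-334 \
       -replex 1000 -gpu_id 0011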

Mark

Thanks for help,
>
> James


[gmx-users] REMD slow's down drastically

2014-02-23 Thread Singam Karthick
Dear members,
I am trying to run an REMD simulation of a poly-alanine (12-residue) system. I 
used an REMD temperature generator to get the range of temperatures with an 
exchange probability of 0.3, which gave 125 replicas. When I tried to simulate 
125 replicas, the run slowed down drastically (around 17 hours for 70 
picoseconds). Could anyone please tell me how to solve this issue?

Following is the MDP file 

title           = G4Ga3a4a5 production. 
;define         = ;-DPOSRES     ; position restrain the protein
; Run parameters
integrator      = md            ; leap-frog integrator
nsteps          = 1250      ; 2 * 500 = 3ns
dt              = 0.002         ; 2 fs
; Output control
nstxout         = 0             ; do not save coordinates to .trr
nstvout         = 1         ; save velocities every step
nstxtcout       = 500           ; save xtc coordinate every 0.2 ps
nstenergy       = 500           ; save energies every 0.2 ps
nstlog          = 100           ; update log file every 0.2 ps
; Bond parameters
continuation    = yes           ; Restarting after NVT 
constraint_algorithm = lincs    ; holonomic constraints 
constraints     = hbonds        ; bonds involving H constrained
lincs_iter      = 1             ; accuracy of LINCS
lincs_order     = 4             ; also related to accuracy
morse           = no
; Neighborsearching
ns_type         = grid          ; search neighboring grid cells
nstlist         = 5             ; 10 fs
rlist           = 1.0           ; short-range neighborlist cutoff (in nm)
rcoulomb        = 1.0           ; short-range electrostatic cutoff (in nm)
rvdw            = 1.0           ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype     = PME           ; Particle Mesh Ewald for long-range 
electrostatics
pme_order       = 4             ; cubic interpolation
fourierspacing  = 0.16          ; grid spacing for FFT
; Temperature coupling is on
tcoupl          = V-rescale     ; modified Berendsen thermostat
tc-grps         =  protein SOL Cl       ; three coupling groups - more accurate
tau_t                 = 0.1 0.1  0.1 ; time constant, in ps
ref_t                 = X  X  X    ; reference temperature, one for 
each group, in K
; Pressure coupling is on
pcoupl          = Parrinello-Rahman     ; Pressure coupling on in NPT
pcoupltype      = isotropic     ; uniform scaling of box vectors
tau_p           = 2.0           ; time constant, in ps
ref_p           = 1.0           ; reference pressure, in bar
compressibility = 4.5e-5        ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc             = xyz           ; 3-D PBC
; Dispersion correction

DispCorr        = EnerPres      ; account for cut-off vdW scheme


regards
singam


Re: [gmx-users] REMD Melting Curve

2014-03-03 Thread David van der Spoel

On 2014-03-04 00:20, atanu_das wrote:

Dear Gromacs users,
I have performed a REMD simulation with 12 replicas
exponentially spaced between 260-420K. However, when I applied the g_kinetics
program to generate the melting curve, I got 161 values from 260K to 420K,
with the folded fraction reported at each temperature. My question is: if I
am applying 12 replicas exponentially spaced in the chosen temperature range,
how could I get 161 values? Am I doing something wrong? Does g_kinetics in
GROMACS use any smoothing function to generate the intermediate temperature
and folded-fraction values?
Please suggest/advise.
Atanu

What kind of values? You get one fraction folded per replica, right?
Did you successfully run the demux.pl?






--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.sehttp://folding.bmc.uu.se


Re: [gmx-users] REMD Melting Curve

2014-03-04 Thread atanu das
Dear Sir,
               I successfully ran demux.pl and generated the xvg files - 
replica_index.xvg and replica_temp.xvg. I am attaching the file that I got as 
melt.xvg below:

# This file was created Wed Feb 26 10:10:27 2014
# by the following command:
# g_kinetics_d -f replica_temp.xvg -d replica_index.xvg 
#
# g_kinetics_d is part of G R O M A C S:
#
# Groningen Machine for Chemical Simulation
#
@    title "Melting curve"
@    xaxis  label "T (K)"
@    yaxis  label ""
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "Folded fraction"
@ s1 legend "DG (kJ/mole)"
  260     0.929     5.556
  261     0.927     5.500
  262     0.924     5.445
  263     0.922     5.389
  264     0.919     5.334
  265     0.916     5.278
  266     0.914     5.223
  267     0.911     5.167
  268     0.908     5.112
  269     0.906     5.056
  270     0.903     5.000
  271     0.900     4.945
  272     0.897     4.889
  273     0.894     4.834
  274     0.891     4.778
  275     0.888     4.723
  276     0.884     4.667
  277     0.881     4.612
  278     0.878     4.556
  279     0.874     4.500
  280     0.871     4.445
  281     0.867     4.389
  282     0.864     4.334
  283     0.860     4.278
  284     0.857     4.223
  285     0.853     4.167
  286     0.849     4.112
  287     0.845     4.056
  288     0.842     4.000
  289     0.838     3.945
  290     0.834     3.889
  291     0.830     3.834
  292     0.826     3.778
  293     0.822     3.723
  294     0.818     3.667
  295     0.813     3.612
  296     0.809     3.556
  297     0.805     3.500
  298     0.801     3.445
  299     0.796     3.389
  300     0.792     3.334
  301     0.787     3.278
  302     0.783     3.223
  303     0.779     3.167
  304     0.774     3.112
  305     0.769     3.056
  306     0.765     3.000
  307     0.760     2.945
  308     0.756     2.889
  309     0.751     2.834
  310     0.746     2.778
  311     0.741     2.723
  312     0.737     2.667
  313     0.732     2.612
  314     0.727     2.556
  315     0.722     2.500
  316     0.717     2.445
  317     0.712     2.389
  318     0.707     2.334
  319     0.702     2.278
  320     0.697     2.223
  321     0.693     2.167
  322     0.688     2.112
  323     0.683     2.056
  324     0.678     2.001
  325     0.673     1.945
  326     0.668     1.889
  327     0.663     1.834
  328     0.657     1.778
  329     0.652     1.723
  330     0.647     1.667
  331     0.642     1.612
  332     0.637     1.556
  333     0.632     1.501
  334     0.627     1.445
  335     0.622     1.389
  336     0.617     1.334
  337     0.612     1.278
  338     0.607     1.223
  339     0.602     1.167
  340     0.597     1.112
  341     0.592     1.056
  342     0.587     1.001
  343     0.582     0.945
  344     0.577     0.889
  345     0.572     0.834
  346     0.567     0.778
  347     0.562     0.723
  348     0.557     0.667
  349     0.553     0.612
  350     0.548     0.556
  351     0.543     0.501
  352     0.538     0.445
  353     0.533     0.389
  354     0.528     0.334
  355     0.524     0.278
  356     0.519     0.223
  357     0.514     0.167
  358     0.509     0.112
  359     0.505     0.056
  360     0.500     0.001
  361     0.495    -0.055
  362     0.491    -0.111
  363     0.486    -0.166
  364     0.482    -0.222
  365     0.477    -0.277
  366     0.473    -0.333
  367     0.468    -0.388
  368     0.464    -0.444
  369     0.459    -0.499
  370     0.455    -0.555
  371     0.451    -0.611
  372     0.446    -0.666
  373     0.442    -0.722
  374     0.438    -0.777
  375     0.434    -0.833
  376     0.429    -0.888
  377     0.425    -0.944
  378     0.421    -0.999
  379     0.417    -1.055
  380     0.413    -1.111
  381     0.409    -1.166
  382     0.405    -1.222
  383     0.401    -1.277
  384     0.397    -1.333
  385     0.393    -1.388
  386     0.389    -1.444
  387     0.386    -1.499
  388     0.382    -1.555
  389     0.378    -1.610
  390     0.374    -1.666
  391     0.371    -1.722
  392     0.367    -1.777
  393     0.363    -1.833
  394     0.360    -1.888
  395     0.356    -1.944
  396     0.353    -1.999
  397     0.349    -2.055
  398     0.346    -2.110
  399     0.342    -2.166
  400     0.339    -2.222
  401     0.336    -2.277
  402     0.332    -2.333
  403     0.329    -2.388
  404     0.326    -2.444
  405     0.323    -2.499
  406     0.319    -2.555
  407     0.316    -2.610
  408     0.313    -2.666
  409     0.310    -2.722
  410     0.307    -2.777
  411     0.304    -2.833
  412     0.301    -2.888
  413     0.298    -2.944
  414     0.295    -2.999
  415     0.292    -3.055
  416     0.289    -3.110
  417     0.286    -3.166
  418     0.284    -3.222
  419     0.281    -3.277
  420     0.278    -3.333

So, as you can see, I got the folded fraction and the associated free-energy 
change at each temperature from 260 to 420 K.
Atanu



On Tuesday, 4 Mar

Re: [gmx-users] REMD Melting Curve

2014-03-04 Thread Justin Lemkul



On 3/4/14, 12:22 PM, atanu das wrote:

Dear Sir,
I successfully ran demux.pl and generated the xvg files -
replica_index.xvg and replica_temp.xvg.
[quoted melt.xvg data snipped - see the table in the previous message]

Re: [gmx-users] REMD Melting Curve

2014-03-04 Thread atanu das
Thanks Justin. That's reassuring. However, I was wondering whether any smoothing 
function is used to obtain the folded fractions at intermediate temperatures at 
1-K resolution. I could not find one in the description of the g_kinetics 
program. I need to know this because (1) I have to justify to a reader how I am 
getting more values than the number of replicas used, and (2) if I know the 
smoothing function, I can estimate the temperature at which the folded fraction 
f = 0, and thereby extract the free energy of complete unfolding, since the 
curve currently stops at a folded fraction of 0.278.
-Atanu



On Tuesday, 4 March 2014 4:01 PM, Justin Lemkul  wrote:
 


On 3/4/14, 12:22 PM, atanu das wrote:
> [quoted text and melt.xvg data snipped]

Re: [gmx-users] REMD Melting Curve

2014-03-05 Thread Justin Lemkul



On 3/4/14, 11:26 PM, atanu das wrote:

Thanks Justin. That's reassuring. However, I was wondering if any smoothing
function is used in getting the melted fractions at intermediate temperatures
with 1-K resolution. I could not find it in the description of the g_kinetics
program. I need to know this because - (1) to justify my result to a person
who would ask how I am getting values more than the number of replicas being
used and (2) if I know the smoothing function then I would be able to
estimate the temperature when folded fraction f=0. Because then I would be
able to extract the free energy of complete unfolding as currently the melted
fraction stops as 0.278. -Atanu


There are no smoothing functions applied in the melting curve.  It is a very 
simple calculation relating probability to free energy.  It should be very easy 
to expand the code to create a profile over a greater temperature range, and 
even to make these into command-line options for greater flexibility.


-Justin
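
For what it's worth, the numbers in the melt.xvg table are consistent with the 
simple two-state relation DG(T) = R*T*ln(f/(1-f)): it reproduces every row 
(e.g. at 300 K, 0.0083145 * 300 * ln(0.792/0.208) = 3.34 kJ/mol, matching the 
table's 3.334), and the resulting DG is linear in T at about -0.0556 kJ/mol 
per K. That would also explain the 161 values from 12 replicas: a fitted 
two-state DG(T), evaluated on a 1 K grid, gives 420 - 260 + 1 = 161 points. A 
quick check, assuming melt.xvg has the three-column layout shown above:

    # recompute DG from the folded fraction f (column 2), R in kJ/(mol K),
    # and compare with the DG in column 3
    awk 'BEGIN { R = 0.0083145 }
         /^ *[0-9]/ { T = $1; f = $2; printf "%5d  %7.3f\n", T, R*T*log(f/(1-f)) }' melt.xvg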





On Tuesday, 4 March 2014 4:01 PM, Justin Lemkul  wrote:



On 3/4/14, 12:22 PM, atanu das wrote:

[quoted text and melt.xvg data snipped]

Re: [gmx-users] REMD ensemble of states

2016-11-08 Thread Mark Abraham
Hi,

Mdrun wrote that. You made the trajectories contiguous with the demux.

Mark

On Tue, 8 Nov 2016 04:55 Abramyan, Tigran  wrote:

> Hi,
>
>
> I conducted REMD, and extracted the trajectories via
> trjcat -f *.trr -demux replica_index.xvg
> And now I was wondering which *.xtc file is the ensemble of states at the
> baseline replica (lowest temperature replica). Intuitively my guess is that
> the numbers in the names of *_trajout.xtc files correspond to the replica
> numbers starting from the baseline, and hence 0_trajout.xtc is the ensemble
> of states at the baseline replica, but I may be wrong.
>
>
> Please suggest.
>
>
> Thank you,
>
> Tigran
>
>
> --
> Tigran M. Abramyan, Ph.D.
> Postdoctoral Fellow, Computational Biophysics & Molecular Design
> Center for Integrative Chemical Biology and Drug Discovery
> Eshelman School of Pharmacy
> University of North Carolina at Chapel Hill
> Chapel Hill, NC 27599-7363
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
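
A sketch of that workflow (md0.log here stands for any one replica's log file; 
adjust the names to your run):

    # demux.pl reads the exchange records from one replica's log and writes
    # replica_index.xvg and replica_temp.xvg
    demux.pl md0.log
    # trjcat -demux then reassembles physically continuous replicas from the
    # per-temperature files, writing 0_trajout.xtc, 1_trajout.xtc, ...
    trjcat -f *.trr -demux replica_index.xvg

The 0.xtc, 1.xtc, ... written by mdrun each stay at one temperature (0.xtc is 
the lowest), while each *_trajout.xtc follows one continuous set of 
coordinates through the exchanges.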


Re: [gmx-users] REMD ensemble of states

2016-11-08 Thread Abramyan, Tigran
Hi Mark,

Thanks a lot for your prompt response. So demux.pl creates continuous 
trajectories, *_trajout.xtc, but the ensemble of states (the lowest-temperature 
ensemble, typically of interest in the analysis of REMD results) is saved in 
the original 0.xtc file produced during REMD, before demux.pl is used?

Thank you,
Tigran


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Tuesday, November 8, 2016 5:53 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD ensemble of states

Hi,

Mdrun wrote that. You made the trajectories contiguous with the demux.

Mark

On Tue, 8 Nov 2016 04:55 Abramyan, Tigran  wrote:

> [original question and list footers snipped]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD ensemble of states

2016-11-08 Thread Mark Abraham
Yes

On Tue, 8 Nov 2016 18:43 Abramyan, Tigran  wrote:

> Hi Mark,
>
> Thanks a lot for your prompt response. So  demux.pl creates continuous
> trajectories, *_trajout.xtc, but the ensemble of states (lowest energy
> ensemble, typically of interest in the analysis of REMD results) is saved
> in the original  0.xtc file produced during REMD before using demux.pl?
>
> Thank you,
> Tigran
>
> [remainder of quoted thread snipped]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD ensemble of states

2016-11-13 Thread Abramyan, Tigran
Thank you Mark,

One more question, regarding centering the frames of the 300 K replica (0.xtc) 
using trjconv. I have tried a few trjconv options, but they do not seem to 
remove the jumps from the original trajectory. For example, the command below 
works for me when applied to an *.xtc file produced by regular MD, but with 
REMD it produces a trajectory that won't be of use in, for example, VMD:

 echo 1 | trjconv -s 0.tpr -f 0.xtc -o 300.xtc -pbc nojump -dt 40

I am assuming I may need to use a combination of tpr files to produce the 
nojump 300.xtc file?

Please advise,
Thank you very much.
Tigran



From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Tuesday, November 8, 2016 1:15 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD ensemble of states

Yes

[remainder of quoted thread snipped]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD ensemble of states

2016-11-13 Thread Mark Abraham
Hi,

The ensemble at each temperature is intrinsically discontinuous. You can't
make it look continuous. What are you trying to do?

Mark

On Mon, 14 Nov 2016 05:26 Abramyan, Tigran  wrote:

> Thank you Mark,
>
> One more question regarding the centering of the frames at 300 replica
> (0.xtc) using trjconv. I have used a few trjconv options, however do not
> seem to be removing jumps from the original trajectory. For example, the
> command below works for me when applied to the *xtc file produced in
> regular MD, however, with REMD it produces a trajectory which won't be of
> use for example in VMD:
>
>  echo 1 | trjconv -s 0.tpr -f 0.xtc -o 300.xtc -pbc nojump -dt 40
>
> I am assuming I may need to use a combination of tpr files to produce the
> nojump 300.xtc file?
>
> Please advise,
> Thank you very much.
> Tigran
>
> [remainder of quoted thread snipped]

Re: [gmx-users] REMD ensemble of states

2016-11-15 Thread Abramyan, Tigran
Hi Mark,

I understand that at each replica the coordinates of the accepted states are 
saved, and that I can calculate different properties of 0.xtc in different 
programs, e.g. GROMACS, MDTraj, etc. But when it comes down to visualization 
in VMD, for example, I don't seem to find a way in GROMACS to remove the jumps 
and superpose the coordinates saved in 0.xtc.

Tigran





From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Monday, November 14, 2016 1:20 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD ensemble of states

Hi,

The ensemble at each temperature is intrinsically discontinuous. You can't
make it look continuous. What are you trying to do?

Mark

[remainder of quoted thread snipped]
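
If the goal is just viewing and superposing the 0.xtc ensemble in VMD, one 
common workaround is to treat every frame independently: -pbc nojump assumes a 
frame-to-frame continuity that a per-temperature REMD trajectory does not 
have, and since -pbc mol and -fit rot+trans generally cannot be combined in 
one trjconv call, two passes are needed (group numbers are illustrative):

    # pass 1: make molecules whole and center on the protein
    # (group 1 = Protein for centering, 0 = System for output)
    echo 1 0 | trjconv -s 0.tpr -f 0.xtc -o 300_centered.xtc -pbc mol -center
    # pass 2: least-squares fit on the protein for superposition
    # (group 1 = Protein for the fit, 1 = Protein for output)
    echo 1 1 | trjconv -s 0.tpr -f 300_centered.xtc -o 300_fit.xtc -fit rot+trans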

[gmx-users] REMD Showing Zero Exchange Probability

2018-07-20 Thread Ligesh Lichu
Dear all,
I have performed REMD for a system containing protein, reline, urea and
water in the temperature range 290 to 450 K, using 16 of the 47 replicas
generated by the REMD temperature generator. But after the MD simulation the
exchange probability is zero. I have used position restraints for the reline,
urea and protein. Is there any chance that the position restraints cause the
exchange probability to be zero? One more query: the REMD temperature
generator produced around 45 to 54 replicas for my system in the required
temperature range, but I have only 80 processors for the job. Is it necessary
to choose consecutive temperatures as given by the REMD temperature generator,
or can I skip some temperatures in between?

If I am using the equation Ti = T0*exp(k*i), what determines the value of 'k',
and how does it affect the exchange probability? How can I choose the value of
'k' for an arbitrary system?

Thanks in advance...
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
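
The 'k' question did not get an answer in the archive, so a sketch: for a 
geometric ladder Ti = T0*exp(k*i), k is fixed by the end points and the 
replica count, k = ln(Tmax/T0)/(N-1). Roughly speaking, a constant k keeps the 
energy-distribution overlap, and hence the exchange probability, about the 
same for every neighbouring pair, and the larger k is, the lower that 
probability. Using 16 replicas over a range the generator wanted ~47 for makes 
each gap about three times wider, which by itself can drive the acceptance to 
essentially zero; skipping selected generator temperatures increases k locally 
with the same effect. With this thread's numbers:

    # geometric temperature ladder, T_i = T0 * exp(k*i)
    awk 'BEGIN {
        T0 = 290; Tmax = 450; N = 16
        k = log(Tmax / T0) / (N - 1)          # ~0.0293 here
        for (i = 0; i < N; i++) printf "%2d  %7.2f\n", i, T0 * exp(k * i)
    }'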


[gmx-users] REMD and MARTINI force field

2014-11-24 Thread Nicolas Floquet
Hello, I would be interested in trying replica exchange MD using a 
coarse-grained representation of my system (MARTINI force field).

I followed the GROMACS REMD tutorial and all seemed to be fine...

However, a first attempt running REMD at 300 K, 350 K and 400 K on my system 
led to no exchanges between replicas, with dE terms approaching 1000 kT:


Repl 1 <-> 2  dE_term =  9.440e+02 (kT)
Repl 1 <-> 2  dE_term =  9.945e+02 (kT)
Repl 1 <-> 2  dE_term =  9.787e+02 (kT)
Repl 1 <-> 2  dE_term =  1.006e+03 (kT)
Repl 1 <-> 2  dE_term =  1.034e+03 (kT)
Repl 1 <-> 2  dE_term =  9.542e+02 (kT)
Repl 1 <-> 2  dE_term =  9.121e+02 (kT)
Repl 1 <-> 2  dE_term =  9.453e+02 (kT)




Any ideas to help me? Thank you in advance.

Nicolas
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
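
For scale: the exchange acceptance is min(1, exp(-Delta)), and the dE_term 
printed in the log appears to be that Delta in units of kT. Anything much 
beyond ~10 kT is effectively never accepted:

    # exp(-dE) for a range of dE values (in kT)
    awk 'BEGIN { for (dE = 250; dE <= 1000; dE += 250)
                     printf "dE = %4d kT  ->  exp(-dE) = %g\n", dE, exp(-dE) }'

With dE around 1e3 kT, a 300/350/400 K ladder is far too coarse for this 
system; many more, much more closely spaced replicas (or a smaller system) 
would be needed before any exchanges are accepted.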


Re: [gmx-users] REMD - subsystems not compatible

2019-04-24 Thread Mark Abraham
Hi,

Generally the REMD code has written some analysis to the log file above
this error message that should provide context.

More generally, you can use gmx check to compare the .tpr files and observe
that the differences between them are only what you expect.

Mark

On Wed, 24 Apr 2019 at 15:28, Per Larsson  wrote:

> Hi gmx-users,
>
> I am trying to start a replica exchange simulation of a model peptide in
> water, but can’t get it to run properly.
> I have limited experience with REMD, so I thought I’d ask here for all the
> rookie mistakes it is possible to do.
> I have also seen the earlier discussions about the error message, but
> those seemed to be related to restarts and/or continuations, rather than
> not being able to run at all.
>
> My gromacs version is 2016 (for compatibility reasons), and the exact
> error message I get is this:
>
> ---
> Program: gmx mdrun, version 2016.5
> Source file: src/gromacs/mdlib/main.cpp (line 115)
> MPI rank:32 (out of 62)
>
> Fatal error:
> The 62 subsystems are not compatible
>
> I followed Mark's tutorial on the GROMACS website and have a small
> bash script that loops over all desired temperatures, runs equilibration,
> etc.
> I then start the simulation like this:
>
> $MPIRUN $GMX mdrun $ntmpi -ntomp $ntomp -deffnm sim -replex 500 -multidir
> ~pfs/ferring/gnrh_aa/dipep_remd/sim*
>
> What could be the source of this incompatibility?
>
> Many thanks
> /Per
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
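
Mark's gmx check suggestion, spelled out (directory names following the sim* 
pattern from the mdrun command above):

    # compare two of the run inputs pairwise; only ref_t (plus anything varied
    # on purpose) should differ between replicas
    gmx check -s1 sim0/sim.tpr -s2 sim1/sim.tpr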

Re: [gmx-users] REMD - subsystems not compatible

2019-04-24 Thread Per Larsson
Thanks Mark for reminding me about the existence of the log files. 
Problem solved: the difference is clearly indicated (number of atoms; my 
stupid mistake).

Cheers
/Per



> 24 apr. 2019 kl. 16:51 skrev Mark Abraham :
> 
> Hi,
> 
> Generally the REMD code has written some analysis to the log file above
> this error message that should provide context.
> 
> More generally, you can use gmx check to compare the .tpr files and observe
> that the differences between them are only what you expect.
> 
> Mark
> 
> On Wed, 24 Apr 2019 at 15:28, Per Larsson  wrote:
> 
>> [original message and list footers snipped]

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread Mark Abraham
Hi,

That's what you already have. See
http://www.gromacs.org/Documentation/How-tos/REMD#Post-Processing

Mark

On Thu, Jun 1, 2017 at 5:37 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I have run a 100 ns REMD of a protein, with 20 replicas (i.e.
> remd1.xtc, remd2.xtc, ..., remd20.xtc). I want to analyze the trajectory at a
> specific temperature, such as the experimental temperature 298 K, rather
> than analyzing a continuous trajectory. I know GROMACS exchanges coordinates
> while REMD is running. Do I just analyze remd2.xtc of replica 2 (T = 298 K)
> if I want to analyze a trajectory at 298 K? Or do I need to do something
> else to the trajectories to get the trajectory at a specific temperature
> (i.e. 298 K)?
>
> Best regards,
> Ouyang
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread YanhuaOuyang
Do you mean that the original trajectories REMD generated are already "one 
trajectory per temperature" (i.e. remd2.xtc is the trajectory at 298 K)?



Ouyang




At 2017-06-01 21:00:52, "Mark Abraham"  wrote:
>Hi,
>
>That's what you already have. See
>http://www.gromacs.org/Documentation/How-tos/REMD#Post-Processing
>
>Mark
>
>On Thu, Jun 1, 2017 at 5:37 AM YanhuaOuyang <15901283...@163.com> wrote:
>
>> [original question and list footers snipped]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread Mark Abraham
Hi,

What did you learn from the first sentence of the link I gave you?

Mark

On Thu, Jun 1, 2017 at 3:20 PM YanhuaOuyang <15901283...@163.com> wrote:

> Do you mean  that the original trajectories REMD generated are belong to
> "one trajectory per temperature" (i.e. the md2.xtc is a trajectory at 298K)?
>
>
>
> Ouyang
>
>
>
>
> [remainder of quoted thread snipped]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread Smith, Micholas D.
Ouyang,

Each replica corresponds to one temperature in GROMACS (unlike some other 
software packages). If you want continuous trajectories (i.e. to follow the 
motion of one replica through the temperature exchanges) then you have to 
demux. But the demux is really only useful (in my experience) together with 
the retired g_kinetics tool.


===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Thursday, June 01, 2017 10:53 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD analysis of trajectories

Hi,

What did you learn from the first sentence of the link I gave you?

Mark

[remainder of quoted thread snipped]

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slow's down drastically

2014-02-23 Thread Francis Jing
Because every replica has to be simulated separately, the speed is actually
not that slow (125 replicas * 0.07 ns each ~ 9 ns of aggregate sampling).

Also, to evaluate it, you should post how many CPUs you used, and other
information such as the exchange attempt frequency...


Francis


On Mon, Feb 24, 2014 at 3:32 PM, Singam Karthick  wrote:

> Dear members,
> I am trying to run REMD simulation for poly Alanine (12 residue) system. I
> used remd generator to get the range of temperature with the exchange
> probability of 0.3. I was getting the 125 replicas. I tried to simulate 125
> replicas its drastically slow down the simulation time (for 70 pico seconds
> it took around 17 hours ) could anyone please tell me how to solve this
> issue.
>
> Following is the MDP file
>
> title   = G4Ga3a4a5 production.
> ;define = ;-DPOSRES ; position restrain the protein
> ; Run parameters
> integrator  = md; leap-frog integrator
> nsteps  = 1250  ; 2 * 500 = 3ns
> dt  = 0.002 ; 2 fs
> ; Output control
> nstxout = 0 ; full-precision coordinate output disabled
> nstvout = 100   ; save velocities every 0.2 ps
> nstxtcout   = 500   ; save xtc coordinates every 1 ps
> nstenergy   = 500   ; save energies every 1 ps
> nstlog  = 100   ; update log file every 0.2 ps
> ; Bond parameters
> continuation= yes   ; Restarting after NVT
> constraint_algorithm = lincs; holonomic constraints
> constraints = hbonds; bonds involving hydrogen constrained
> lincs_iter  = 1 ; accuracy of LINCS
> lincs_order = 4 ; also related to accuracy
> morse   = no
> ; Neighborsearching
> ns_type = grid  ; search neighboring grid cells
> nstlist = 5 ; 10 fs
> rlist   = 1.0   ; short-range neighborlist cutoff (in nm)
> rcoulomb= 1.0   ; short-range electrostatic cutoff (in nm)
> rvdw= 1.0   ; short-range van der Waals cutoff (in nm)
> ; Electrostatics
> coulombtype = PME   ; Particle Mesh Ewald for long-range
> electrostatics
> pme_order   = 4 ; cubic interpolation
> fourierspacing  = 0.16  ; grid spacing for FFT
> ; Temperature coupling is on
> tcoupl  = V-rescale ; modified Berendsen thermostat
> tc-grps =  protein SOL Cl   ;two coupling groups - more
> accurate
> tau_t = 0.1 0.1  0.1 ; time constant, in ps
> ref_t = X  X  X; reference temperature,
> one for each group, in K
> ; Pressure coupling is on
> pcoupl  = Parrinello-Rahman ; Pressure coupling on in NPT
> pcoupltype  = isotropic ; uniform scaling of box vectors
> tau_p   = 2.0   ; time constant, in ps
> ref_p   = 1.0   ; reference pressure, in bar
> compressibility = 4.5e-5; isothermal compressibility of water,
> bar^-1
> ; Periodic boundary conditions
> pbc = xyz   ; 3-D PBC
> ; Dispersion correction
>
> DispCorr= EnerPres  ; account for cut-off vdW scheme
>
>
> regards
> singam



-- 
Zhifeng (Francis) Jing
Graduate Student in Physical Chemistry
School of Chemistry and Chemical Engineering
Shanghai Jiao Tong University
http://sun.sjtu.edu.cn
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Dr. Vitaly Chaban
Sorry, but what performance did you expect from REMD?


Dr. Vitaly V. Chaban


On Mon, Feb 24, 2014 at 8:32 AM, Singam Karthick  wrote:
> [snip: original message and .mdp file quoted in full above]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Singam Karthick
Dear Francis,
We are running on Xeon E5-2670 8C 2.60 GHz nodes (2 CPUs, 8 cores, 16 threads)
for each temperature, and the exchange attempt frequency is 500 steps. The
other system, with 126 replicas, runs 30 ns per day (system size ~38,000
atoms). Could you please help us solve this problem?

regards
singam



On Monday, 24 February 2014 1:02 PM, Singam Karthick  wrote:
 
[snip: original message and .mdp file quoted in full above]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Mark Abraham
Adding replicas cannot of itself slow things down, though it will increase
the cost linearly. Don't try to run them all on the same amount of hardware
as a smaller calculation! You are shooting yourself in the foot if you do
not have at least one processor per replica (= MPI rank).
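
For example, a sketch with illustrative names, assuming an MPI-enabled mdrun
installed as mdrun_mpi and run input files remd_0.tpr ... remd_124.tpr
(mdrun -multi inserts the replica index into the -s file name):

  mpirun -np 125 mdrun_mpi -multi 125 -replex 500 -s remd_.tpr

gives each of the 125 replicas its own MPI rank; -np 250 would give each
replica two ranks, and so on. Running 125 replicas on the hardware that
previously ran one simulation is what makes the run look drastically slower.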

Mark


On Mon, Feb 24, 2014 at 8:32 AM, Singam Karthick  wrote:

> [snip: original message and .mdp file quoted in full above]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Francis Jing
So what is the difference between the two systems? Look through the end of
the log file to find which part slowed down. Maybe your job did not use the
right partitioning of cores (too much communication time)? I don't know.
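
Concretely, the place to look is the cycle-accounting table that mdrun prints
at the end of each log (file name illustrative):

  tail -n 60 md0.log

and compare, between the fast and the slow run, how the wall time in the
"R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G" table splits
across rows such as Force, PME mesh, and Comm. energies.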

Francis
On 25 Feb, 2014 1:19 am, "Singam Karthick"  wrote:

> [snip: Singam's reply and the original message quoted in full above]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Christopher Neale
Presuming that you have indeed set up the number of processors correctly (you
should be running on a different number of cores for a different number of
replicas to do a fair test), could it be a thread-pinning issue?

I run on a Nehalem system with 8 cores/node but, because of the Nehalem
hyperthreading (I think), gromacs always complains if I run "mpirun -np $N
mdrun", where $N is the number of physical cores:

NOTE: The number of threads is not equal to the number of (logical) cores
  and the -pin option is set to auto: will not pin thread to cores.
  This can lead to significant performance degradation.
  Consider using -pin on (and -pinoffset in case you run multiple jobs).

However, if I use $N = 2 times the number of cores, then I don't get that note, 
instead getting:

"Pinning threads with a logical core stride of 1"

Aside, if anybody has a suggestion about how I should handle the thread pinning 
in my case, or if it matters, then I would be happy to hear it (my throughput 
seems to be good though).

Finally, this comment is off topic, but you might want to reconsider having
the Cl ions in a separate temperature coupling group.
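
A more conventional grouping couples the ions with the solvent; a sketch
(Protein and Non-Protein are the standard GROMACS group names, and ref_t is
set per replica as in the file above):

  tc-grps = Protein Non-Protein
  tau_t   = 0.1     0.1
  ref_t   = X       X

A handful of Cl ions makes a tiny coupling group with poor thermostat
statistics on its own.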

Chris.

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Singam 
Karthick 
Sent: 24 February 2014 02:32
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] REMD slows down drastically

[snip: original message and .mdp file quoted in full above]
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Mark Abraham
On Feb 24, 2014 11:01 PM, "Christopher Neale" 
wrote:
>
> Presuming that you have indeed set up the number of processors correctly
> (you should be running on a different number of cores for a different number
> of replicas to do a fair test), could it be a thread-pinning issue?

Yes, but it is part of the larger problem of overloading the physical cores.

> I run on a Nehalem system with 8 cores/node but, because of the Nehalem
> hyperthreading (I think), gromacs always complains if I run "mpirun -np $N
> mdrun", where $N is the number of physical cores:
>
> NOTE: The number of threads is not equal to the number of (logical) cores
>   and the -pin option is set to auto: will not pin thread to cores.
>   This can lead to significant performance degradation.
>   Consider using -pin on (and -pinoffset in case you run multiple jobs).
>
> However, if I use $N = 2 times the number of cores, then I don't get that
> note, instead getting:
>
> "Pinning threads with a logical core stride of 1"
>
> Aside, if anybody has a suggestion about how I should handle the thread
> pinning in my case, or if it matters, then I would be happy to hear it (my
> throughput seems to be good though).

Hyper-threading is good for applications that are memory- or I/O-bound (so it
is enabled by default on consumer machines), because they can take advantage
of CPU instruction-issue opportunities while stalled. GROMACS kernels are
already CPU-bound, so there is little to gain, and HT generally does not pay
for the overhead. Generally, one should not use HT; turning it off can be
emulated with the right use of -pinoffset and half the number of threads.
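
Concretely, on a node with 8 physical cores / 16 hardware threads, a sketch of
the two options (flag names as in GROMACS 4.6/5.x, numbers illustrative):

  mdrun -nt 8 -pin on -pinstride 2    # one thread per physical core; HT siblings left idle
  mdrun -nt 16 -pin on                # use every hardware thread, stride 1

The first form is the "emulate HT off" pattern; -pinoffset additionally shifts
the starting core when several jobs share a node.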

> Finally, this comment is off topic, but you might want to reconsider
> having the Cl ions in a separate temperature coupling group.

Indeed.

Mark

> Chris.
> [snip: original message and .mdp file quoted in full above]

Re: [gmx-users] REMD slows down drastically

2014-02-24 Thread Christopher Neale
Thank you, Mark, for the tips about -pinoffset. I'll try it and see if it
affects the speed at all.

Regarding the utility of hyperthreading: running on a cluster in which each
node has 8 Nehalem processing cores, I have seen a 5-15% speedup from using
hyperthreading via 16 threads vs. using only 8 threads (in non-MPI gromacs).
This is across about 10 simulation systems that I have worked on in the last
four years. In all these cases, I am using -npme 0. However, once multiple
nodes and the IB fabric get involved, hyperthreading gives no benefit and
generally degrades performance. Perhaps there are some other things at play
here, but the only change I make is mdrun -nt 8 vs. mdrun -nt 16, and I see a
speedup from -nt 16. System sizes range from 10K to 250K atoms. Note that I
have never tried using hyperthreading with REMD or any other fancy setup.

Chris.

[snip: Mark Abraham's reply of 24 February quoted in full above]

Re: [gmx-users] REMD slows down drastically

2014-02-25 Thread Mark Abraham
Good to know, thanks. Mileage certainly does vary.

Mark
On Feb 25, 2014 3:45 AM, "Christopher Neale" 
wrote:

> [snip: Christopher Neale's message of 25 February quoted in full above]

[gmx-users] REMD on more than one node

2016-05-12 Thread YanhuaOuyang
Hi,
I am running REMD with GROMACS 5.0. I have 46 replicas, 4 nodes, and 16 cores
per node. How can I best use my compute resources, and what is the correct
"gmx mdrun" command? My attempt is below; I am not sure whether it is right:
mpirun -np 4 -npme gmx mdrun -s md_01.tpr  -multi 46 -replex 500 -reseed -1
mpirun -np 4 -npme gmx mdrun -s md_02.tpr  -multi 46 -replex 500 -reseed -1
mpirun -np 4 -npme gmx mdrun -s md_03.tpr  -multi 46 -replex 500 -reseed -1
…
mpirun -np 4 -npme gmx mdrun -s md_46.tpr  -multi 46 -replex 500 -reseed -1
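
For reference, -multi launches all replicas from a single mdrun invocation, so
only one mpirun command is needed, with at least one rank per replica. A
sketch, assuming an MPI-enabled build invoked as gmx_mpi and input files named
md_0.tpr ... md_45.tpr (mdrun -multi inserts the replica index into the -s
file name):

  mpirun -np 46 gmx_mpi mdrun -s md_.tpr -multi 46 -replex 500 -reseed -1

-np must be a multiple of the number of replicas; on 4 x 16 = 64 cores, 46
ranks (one per replica) fit, while 92 ranks (two per replica) would
oversubscribe the nodes.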


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
