Re: [gmx-users] Using Gpus on multiple nodes. (Feature #1591)

2014-10-14 Thread Trayder Thomas
Try: mpirun -npernode 2
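
For example, something along these lines (a sketch; the mdrun binary and file names are placeholders for whatever you actually use):

mpirun -np 4 -npernode 2 mdrun_mpi -s topol.tpr -deffnm md -gpu_id 01

With 2 ranks per node, -gpu_id 01 then maps each node's two PP ranks onto its GPUs 0 and 1.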

-Trayder

On Wed, Oct 15, 2014 at 8:42 AM, Siva Dasetty  wrote:

> Thank you Mark for the reply,
>
> We use pbs for submitting jobs on our cluster and this is how I request the
> nodes and processors
>
> #PBS -l
> select=2:ncpus=8:mem=8gb:mpiprocs=8:ngpus=2:gpu_model=k20:interconnect=fdr
>
>
> Do you think the problem could be with the way I installed mdrun using Open
> MPI?
>
>
> Can you please suggest the missing environmental settings that I may need
> to include in the job script in order for the MPI to consider 2 ranks on
> one node?
>
>
> Thank you for your time.
>
>
>
> On Tue, Oct 14, 2014 at 5:20 PM, Mark Abraham 
> wrote:
>
> > On Tue, Oct 14, 2014 at 10:51 PM, Siva Dasetty 
> > wrote:
> >
> > > Dear All,
> > >
> > > I am currently able to run simulation on a single node containing 2
> gpus,
> > > but I get the following fatal error when I try to run the simulation
> > using
> > > multiple gpus (2 on each node) on multiple nodes (2 for example) using
> > OPEN
> > > MPI.
> > >
> >
> > Here you say you want 2 ranks on each of two nodes...
> >
> >
> > > Fatal error:
> > >
> > > Incorrect launch configuration: mismatching number of PP MPI processes
> > and
> > > GPUs
> > >
> > > per node.
> > >
> > > mdrun was started with 4 PP MPI processes per node,
> >
> >
> > ... but here mdrun means what it says...
> >
> >
> > > but you provided only 2
> > > GPUs.
> > >
> > > The command I used to run the simulation is
> > >
> > > mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 01
> > >
> >
> > ... which means your MPI environment (hostfile, job script settings,
> > whatever) doesn't have the settings you think it does, since it's putting
> > all 4 ranks on one node.
> >
> > Mark
> >
> >
> > >
> > >
> > > However It at least runs if I use the following command,
> > >
> > >
> > > mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 0011
> > >
> > >
> > > But after referring to the following thread, I highly doubt if I am
> using
> > > all the 4 gpus available in the 2 nodes combined.
> > >
> > >
> > >
> > >
> >
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2014-May/007682.html
> > >
> > >
> > >
> > > Thank you for your help in advance,
> > >
> > > --
> > > Siva
>
>
>
> --
> Siva


[gmx-users] (no subject)

2014-10-14 Thread Padmani Sandhu
Hello,


I am running a molecular dynamics simulation of a protein-ligand complex embedded
in a POPC lipid membrane. After energy minimization of the complex, I am facing a
problem with the NVT equilibration step. mdrun crashed with the following error:


"12 particles communicated to PME node 4 are more than 2/3 times the
cut-off out of the domain decomposition cell of their charge group in
dimension x.
This usually means that your system is not well equilibrated."




These are the parameters used in nvt.mdp file:

title                    = NVT equilibration
define                   = -DPOSRES_LIPID -DPOSRES -DPOSRES_WATER  ; position restrain the protein

; Run parameters
integrator               = md        ; leap-frog integrator
nsteps                   = 50000     ; 2 * 50000 = 100 ps
dt                       = 0.002     ; 2 fs

; Output control
nstxout                  = 100       ; save coordinates every 0.2 ps
nstvout                  = 100       ; save velocities every 0.2 ps
nstenergy                = 100       ; save energies every 0.2 ps
nstlog                   = 100       ; update log file every 0.2 ps

; Bond parameters
continuation             = no        ; first dynamics run
constraint_algorithm     = lincs     ; holonomic constraints
constraints              = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter               = 1         ; accuracy of LINCS
lincs_order              = 4         ; also related to accuracy

; Neighborsearching
ns_type                  = grid      ; search neighboring grid cells
nstlist                  = 5         ; 10 fs
rlist                    = 1.2       ; short-range neighborlist cutoff (in nm)
rcoulomb                 = 1.2       ; short-range electrostatic cutoff (in nm)
rvdw                     = 1.2       ; short-range van der Waals cutoff (in nm)

; Electrostatics
coulombtype              = PME       ; Particle Mesh Ewald for long-range electrostatics
pme_order                = 4         ; cubic interpolation
fourierspacing           = 0.16      ; grid spacing for FFT

; Temperature coupling is on
tcoupl                   = V-rescale                        ; modified Berendsen thermostat
tc-grps                  = Protein_LMT POPC Water_and_ions  ; three coupling groups - more accurate
tau_t                    = 0.1  0.1  0.1                    ; time constant, in ps
ref_t                    = 271  271  271                    ; reference temperature, one for each group, in K

; Pressure coupling is off
pcoupl                   = no        ; no pressure coupling in NVT

; Periodic boundary conditions
pbc                      = xyz       ; 3-D PBC

; Dispersion correction
DispCorr                 = EnerPres  ; account for cut-off vdW scheme

; Velocity generation
gen_vel                  = yes       ; assign velocities from Maxwell distribution
gen_temp                 = 271       ; temperature for Maxwell distribution
gen_seed                 = -1        ; generate a random seed

; COM motion removal
; These options remove motion of the protein/bilayer relative to the solvent/ions
nstcomm                  = 1
comm-mode                = Linear
comm-grps                = Protein_LMT_POPC Water_and_ions




Please help me...




-- 
*Padmani sandhu*
*Research Scholar,*
*Center for Computational Biology and Bioinformatics,*
*Central University of Himachal Pradesh,*
*Temporary Academic Block, Shahpur *
*Pin 176206, District Kangra,*
*Himachal Pradesh, India*


[gmx-users] 2-D liquid

2014-10-14 Thread Cai
Hi users,

I am trying to simulate a 2-D liquid interacting through a simple potential such
as Lennard-Jones.

Can it be done by specifying "pcoupltype = semiisotropic" in the .mdp file? I mean
enforcing a normal pressure in the x-y directions while applying a very high
pressure in the z direction to constrain the motion in z.

Or do I need to define walls in the z direction?

Does anyone have similar experience? Your help is highly appreciated!
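
In case it is useful for the discussion, the wall route would look roughly like
this in the .mdp (an untested sketch; the wall atom type and the numbers are
placeholders, not recommendations):

pbc              = xy        ; periodic only in x and y
nwall            = 2         ; one wall at z = 0 and one at z = z-box
wall-type        = 12-6      ; direct LJ wall potential
wall-atomtype    = WALL WALL ; hypothetical atom types that must exist in the force field
wall-ewald-zfac  = 3         ; extra empty-space factor if Ewald electrostatics are used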


Cai


[gmx-users] How long run should be enough for Free Energy Calculation of a protein?

2014-10-14 Thread Batdorj Batsaikhan
Dear gmx-users,

Hello, I am working on a free energy calculation of a protein. How long a run is
needed for a reliable free energy calculation of a protein?

Batsaikhan


Re: [gmx-users] Using Gpus on multiple nodes. (Feature #1591)

2014-10-14 Thread Siva Dasetty
Thank you Mark for the reply,

We use pbs for submitting jobs on our cluster and this is how I request the
nodes and processors

#PBS -l
select=2:ncpus=8:mem=8gb:mpiprocs=8:ngpus=2:gpu_model=k20:interconnect=fdr


Do you think the problem could be with the way I installed mdrun using Open
MPI?


Can you please suggest the missing environment settings that I may need to
include in the job script so that MPI places 2 ranks on each node?


Thank you for your time.
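
For example, would something along these lines be the right direction (a sketch;
the mdrun binary and file names are placeholders for what we actually use)?

#PBS -l select=2:ncpus=8:mem=8gb:mpiprocs=2:ngpus=2:gpu_model=k20:interconnect=fdr

mpirun -np 4 -npernode 2 mdrun_mpi -s topol.tpr -deffnm md -gpu_id 01

i.e. requesting mpiprocs=2 (or forcing -npernode 2) so that only 2 ranks are
placed on each node.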



On Tue, Oct 14, 2014 at 5:20 PM, Mark Abraham 
wrote:

> On Tue, Oct 14, 2014 at 10:51 PM, Siva Dasetty 
> wrote:
>
> > Dear All,
> >
> > I am currently able to run simulation on a single node containing 2 gpus,
> > but I get the following fatal error when I try to run the simulation
> using
> > multiple gpus (2 on each node) on multiple nodes (2 for example) using
> OPEN
> > MPI.
> >
>
> Here you say you want 2 ranks on each of two nodes...
>
>
> > Fatal error:
> >
> > Incorrect launch configuration: mismatching number of PP MPI processes
> and
> > GPUs
> >
> > per node.
> >
> > mdrun was started with 4 PP MPI processes per node,
>
>
> ... but here mdrun means what it says...
>
>
> > but you provided only 2
> > GPUs.
> >
> > The command I used to run the simulation is
> >
> > mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 01
> >
>
> ... which means your MPI environment (hostfile, job script settings,
> whatever) doesn't have the settings you think it does, since it's putting
> all 4 ranks on one node.
>
> Mark
>
>
> >
> >
> > However It at least runs if I use the following command,
> >
> >
> > mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 0011
> >
> >
> > But after referring to the following thread, I highly doubt if I am using
> > all the 4 gpus available in the 2 nodes combined.
> >
> >
> >
> >
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2014-May/007682.html
> >
> >
> >
> > Thank you for your help in advance,
> >
> > --
> > Siva



-- 
Siva


Re: [gmx-users] Using Gpus on multiple nodes. (Feature #1591)

2014-10-14 Thread Mark Abraham
On Tue, Oct 14, 2014 at 10:51 PM, Siva Dasetty 
wrote:

> Dear All,
>
> I am currently able to run simulation on a single node containing 2 gpus,
> but I get the following fatal error when I try to run the simulation using
> multiple gpus (2 on each node) on multiple nodes (2 for example) using OPEN
> MPI.
>

Here you say you want 2 ranks on each of two nodes...


> Fatal error:
>
> Incorrect launch configuration: mismatching number of PP MPI processes and
> GPUs
>
> per node.
>
> mdrun was started with 4 PP MPI processes per node,


... but here mdrun means what it says...


> but you provided only 2
> GPUs.
>
> The command I used to run the simulation is
>
> mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 01
>

... which means your MPI environment (hostfile, job script settings,
whatever) doesn't have the settings you think it does, since it's putting
all 4 ranks on one node.
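
A quick way to check where the ranks actually end up (a sketch; exact options
vary between Open MPI versions) is:

mpirun -np 4 hostname

If all four lines report the same host, the ranks are being packed onto one node
and the hostfile/job-script settings (or something like -npernode 2) need fixing.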

Mark


>
>
> However It at least runs if I use the following command,
>
>
> mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 0011
>
>
> But after referring to the following thread, I highly doubt if I am using
> all the 4 gpus available in the 2 nodes combined.
>
>
>
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2014-May/007682.html
>
>
>
> Thank you for your help in advance,
>
> --
> Siva


[gmx-users] Using Gpus on multiple nodes. (Feature #1591)

2014-10-14 Thread Siva Dasetty
Dear All,

I am currently able to run simulation on a single node containing 2 gpus,
but I get the following fatal error when I try to run the simulation using
multiple gpus (2 on each node) on multiple nodes (2 for example) using OPEN
MPI.

Fatal error:

Incorrect launch configuration: mismatching number of PP MPI processes and
GPUs

per node.

mdrun was started with 4 PP MPI processes per node, but you provided only 2
GPUs.

The command I used to run the simulation is

mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 01



However, it at least runs if I use the following command:


mpirun -np 4 mdrun  -s   -deffnm <...>  -gpu_id 0011


But after reading the following thread, I highly doubt that I am using all 4 GPUs
available across the 2 nodes.


https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2014-May/007682.html



Thank you for your help in advance,

-- 
Siva


[gmx-users] Conserved energy ("Conserved En.") in NVT simulation

2014-10-14 Thread Wade
Dear Mark and all,

I am trying to calculate the conserved energy (H-tilde) in Bussi's stochastic
velocity-rescaling algorithm (JCP, 2007).
In a previous mailing-list post, Mark mentioned that the "Conserved En." term in
the ener file is the H in Bussi's paper
(http://permalink.gmane.org/gmane.science.biology.gromacs.user/63005).
According to eq. 15 in Bussi's paper, if we have H we just need the time integral of dK.
I carefully checked the coupling.c file and noticed a variable, therm_integral[i],
in the function vrescale_tcoupl().
The code does therm_integral[i] -= Ek_new - Ek, which seems like an accumulation of -dK.
If it really is the sum of -dK, the problem becomes simple; what we need is just:
H-tilde = H - tau_t * therm_integral[i]?
Is that right? I really need your help.
Any suggestion is valuable.
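
For reference, I write out the H ("Conserved En.") term itself with g_energy
(gmx energy in 5.0), e.g. (a sketch; file names are placeholders):

g_energy -f ener.edr -o conserved.xvg

and then select the "Conserved En." term at the interactive prompt.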

Wade


Re: [gmx-users] grompp Segmentation fault

2014-10-14 Thread Nilesh Dhumal
Here are more details
.itp file

; Derived from parsing of runfiles/alat.top.orig
[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       3          yes        0.5      0.5
; comb-rule 3 is square-root sigma, the OPLSAA version

[ atomtypes ]
; full atom descriptions are available in ffoplsaa.atp
; name      bond_type  at.num   mass        charge   ptype   sigma        epsilon
 opls_991   Cu         29       63.54600     1.098   A       3.114e-01    0.02092e+00
 opls_993   OA          8       15.99940    -0.665   A       3.033e-01    0.401664e+00
 opls_994   OB          8       15.99940    -0.665   A       3.033e-01    0.401664e+00
 opls_995   CA          6       12.0110      0.778   A       3.473e-01    0.39748e+00
 opls_996   CB          6       12.0110     -0.092   A       3.473e-01    0.39748e+00
 opls_997   CC          6       12.0110     -0.014   A       3.473e-01    0.39748e+00
 opls_998   HM          1        1.0080      0.109   A       2.846e-01    0.06276e+00

[ bondtypes ]
; i    j    func   b0       D            Beta
  Cu   OA   3      0.1969   358.857496   0.285
  Cu   OB   3      0.1969   358.857496   0.285
  OA   CA   3      0.1260   564.84       0.200
  OB   CA   3      0.1260   564.84       0.200
  CA   CB   3      0.1456   367.52256    0.200
  CB   CC   3      0.1355   502.08       0.200
  CC   HM   3      0.0931   485.344      0.177

[ angletypes ]
;  i    j    k    func   th0     cth
   OA   Cu   OA   1      170.2   419.73888
   OB   Cu   OB   1      170.2   419.73888
   OA   Cu   OB   1       90.0   100.416
   Cu   OA   CA   1      127.5   338.65296
   Cu   OB   CA   1      127.5   338.65296
   OA   CA   OA   1      128.5   606.68
   OB   CA   OB   1      128.5   606.68
   OA   CA   OB   1      128.5   606.68
   OA   CA   CB   1      116.2   456.01416
   OB   CA   CB   1      116.2   456.01416
   CA   CB   CC   1      119.9   290.20224
   CC   CB   CC   1      120.1   753.12
   CB   CC   CB   1      119.9   753.12
   CB   CC   HM   1      120.0   309.616

[ dihedraltypes ]
;  i    j    k    l    func   coefficients
; OPLS Fourier dihedraltypes translated to Gromacs Ryckaert-Bellemans form
; according to the formula in the Gromacs manual.
   Cu   OA   CA   CB   1      180.0   12.552   2
   Cu   OB   CA   CB   1      180.0   12.552   2
   Cu   OA   CA   OB   1      180.0   12.552   2
   Cu   OB   CA   OA   1      180.0   12.552   2
   Cu   OA   CA   OA   1      180.0   12.552   2
   Cu   OB   CA   OB   1      180.0   12.552   2
   CB   CC   CB   CC   1      180.0   12.552   2
   CA   CB   CC   CB   1      180.0   12.552   2
   CA   CB   CC   HM   1      180.0   12.552   2
   CC   CB   CC   HM   1      180.0   12.552   2
   OA   CA   CB   CC   1      180.0   10.460   2
   OB   CA   CB   CC   1      180.0   10.460   2

   HM   CC   CB   CB   2      180.0    1.54808  2
   CA   CB   CC   CC   2      180.0   41.84     2
   CB   CA   OA   OB   2      180.0   41.84     2
   CB   CA   OB   OA   2      180.0   41.84     2


Initial part of .gro file

Title
  156
1BTC  OA1  1  19.579  14.561  18.006  0.  0.  0.
1BTC  CA2  2  18.533  15.007  18.533  0.  0.  0.
1BTC  CB3  3  17.865  16.163  17.865  0.  0.  0.
1BTC  CC4  4  18.433  16.737  16.737  0.  0.  0.
1BTC  HM5  5  19.215  16.358  16.358  0.  0.  0.
1BTC  CC6  6  16.737  16.737  18.433  0.  0.  0.
1BTC  HM7  7  16.358  16.358  19.215  0.  0.  0.
1BTC  OA8  8  14.561  18.006  19.579  0.  0.  0.
1BTC  CA9  9  15.007  18.533  18.533  0.  0.  0.
1BTC   CB10   10  16.163  17.865  17.865  0.  0.  0.
1BTC   CC11   11  16.737  18.433  16.737  0.  0.  0.
1BTC   HM12   12  16.358  19.215  16.358  0.  0.  0.
1BTC   OA13   13  18.006  19.579  14.561  0.  0.  0.
1BTC   CA14   14  18.533  18.533  15.007  0.  0.  0.
1BTC   CB15   15  17.865  17.865  16.163  0.  0.  0.
1BTC   OB16   16  14.561  19.579  18.006  0.  0.  0.
1BTC   OB17   17  18.006  14.561  19.579  0.  0.  0.
1BTC   OB18   18  19.579  18.006  14.561  0.  0.  0.
1BTC   OA19   19  14.561  21.499  19.926  0.  0.  0.
1BTC   CA20   20  15.007  20.972  20.972  0.  0.  0.
1BTC   CB21   21  16.163  21.639  21.639  0.  0.  0.
1BTC   CC22   22  16.737  22.767  21.072  0.  0.  0.
1BTC   HM23   23  16.358  23.146  20.290  0.  0.  0.
1BTC   CC24   24  16.737  21.072  22.767  0.  0.  0.
1BTC   HM25   25  16.358  20.290  23.146  0.  0.  0.
1BTC   OB26   26  14.561  19.926  21.499  0.  0.  0.
1BTC   OA27   27  19.926  14.561  21.499  0.  0.  0.
1BTC   CA28   28  20.

Re: [gmx-users] Problem with constraints in NVT calculations.

2014-10-14 Thread Justin Lemkul



On 10/14/14 7:40 AM, Kester Wong wrote:

Hi Justin and all,


>
> > > Meanwhile, is it possible to implement a self-consistent FF from scratch?
> > > One example I came across is from the work by Ho and Striolo
> > >
> > > titled: Polarizability effects in molecular dynamics simulations of the
> > > graphene-water interface
> > >
> >
> > Of course you can implement whatever you like.  Gromacs has been able to
> > carry out polarizable simulations for a very long time; I've only ever
> > cautioned against abuse of certain models.
> >
> >
> > I guess that GROMACS is capable in running polarisable sims, but for the
> > Drude polarisable calcs, they are prone to polarisation catastrophe due to
> > the massless shells and thermostat instability?
>
> Polarization catastrophe is possible in any polarizable simulation.  Usually
> very small time steps are required to avoid this, unless using an anharmonic
> potential or a hard wall restraint.
>
>
> Using Morse = yes for the anharmonic potential option, whereas using the
> parameters below for the hard wall restraint option?
>
> pbc = xy
> nwall = 2
> wall-atomtype =; optional
> wall-type = 12-6
> wall-r-linpot = 1  ; having a positive val. is esp. useful in equil. run
> wall-density  = 5 5
> wall-ewald-zfac = 3
>

No.  I'm not suggesting a Morse potential.  What I was referring to was an
anharmonic function for the bonds, which is present in Gromacs but I'm not sure
if it's documented.  The wall settings in Gromacs have nothing to do with this.
Such a function [the hard wall restraint discussed below] is not present in
Gromacs (yet).


Although the wall settings have nothing to do with polarisation catastrophe, I
guess it might be useful in the following case:

I have been using a time step of 1 fs, which is small already, yet the water
droplet (on graphene) quickly fills up the vacuum of ~5-6 nm along the
z-direction. I will try using the wall setup as above, hoping that water remains
a droplet with the presence of H3O and Cl ions. Could you please explain what is
the difference between the three types of wall; 9-3, 10-4, and 12-6?



The exponents used in the LJ potential for the wall.  12-6 is the "normal" LJ 
potential.
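
Schematically (my notation, not the manual's), the 12-6 wall acting on a particle
at distance z from the wall is just the usual LJ form,

V_{12-6}(z) = 4\epsilon \left[ (\sigma/z)^{12} - (\sigma/z)^{6} \right],

while the 9-3 and 10-4 variants use those exponents instead, corresponding
(roughly) to the LJ interaction integrated over the volume behind the wall or
over a single plane of wall atoms, respectively.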



The only part of the GROMACS 5.0 manual that describes an anharmonic bond
potential is the Morse potential, section 4.2.2.



Like I said, it's not documented.  See src/gromacs/gmxlib.c, function 
anharm_polarize().



Which function is not available in GROMACS yet?



What we call the "hard wall" restraint, that reflects a Drude particle along the 
bond vector connecting it to its parent atom.  It prevents the Drude from moving 
more than a specified amount, thus vastly improving integration stability.  See 
the Appendix of dx.doi.org/10.1021/jp402860e.



> > In the paper mentioned above, the authors have carried out three types of
> > cals:
> > i) SPC/E on non-pol graphene
> > ii) SWM4-DP on non-pol graphene: graphene in neutral or charged states
> > iii) SWM4-DP on graphene-DP (one Drude particle per C-atom with opposite
> > charge): graphene-DP in neutral or charged states
> >
> > They seemed to have simulated their systems using both additive and
> > polarisable (0.878 angstrom^3) models?
> > I guess this is where I got confused.
>
> I suppose you can make any model work if you parametrize it a certain way, but
> my point in the previous message is that you shouldn't go off trying to build a
> force field that has SWM4-NDP water around additive CHARMM solutes.
>
>
> Yep, now I understand it.
> If I wanted to also describe graphene, is it possible to include carbon
> parameters in the SWM4-NDP force field then?
>

Well, strictly speaking, you're not introducing graphene into a SWM4-NDP force
field, you're creating a force field that describes both.  This can certainly be
done if you have all the parameters.

That is great! To create a FF that describes the SWM4 water, NDP ions, and
graphene carbon (CA); I will have to include graphene.itp, the CA-CA bonded
parameters, and the LJ nonbonding interaction parameters, is that right?



Yes.
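
Schematically, the topology would then just pull in both sets of parameters,
something like this (a rough sketch; the directory, file, and molecule names are
placeholders, not an actual distributed force field layout):

; topol.top
#include "swm4-ndp.ff/forcefield.itp"   ; water/ion parameters
#include "graphene.itp"                 ; graphene [moleculetype] and CA parameters

[ system ]
Graphene + SWM4-NDP water and ions

[ molecules ]
GRA      1
SOL   1000
CL      10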

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

Re: [gmx-users] regarding confout.gro

2014-10-14 Thread Justin Lemkul



On 10/14/14 12:31 AM, RINU KHATTRI wrote:

hello gromacs user i am working on protein ligand complex with popc membrane
i am running production md in extended time (40 ns) i got some file i
am using -noappend option i got confout.gro file in each extended time
what is the use of this file if i want see my protein or ligand are in
proper place this confout.gro is sufficient of i  have to see traj
file


The confout.gro file is simply the last snapshot of the simulation interval. 
Whether or not it is reflective of the dynamics during that interval is unknown 
without doing analysis and simple visualization.
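
If you want to look at the whole extended run rather than just the end points,
the -noappend parts can be stitched together first, e.g. (a sketch; the part and
tpr file names are placeholders):

trjcat -f md.part0001.xtc md.part0002.xtc -o md_all.xtc
trjconv -f md_all.xtc -s md.tpr -pbc mol -center -o md_vis.xtc

and the result visualized or analysed with the usual tools.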


-Justin



Re: [gmx-users] grompp Segmentation fault

2014-10-14 Thread Justin Lemkul



On 10/13/14 10:42 PM, Nilesh Dhumal wrote:

hello,

I am running grompp for a simulation and I get a Segmentation fault error.

grompp -f 600.mdp -c cu.gro -p  cu_btc_1.top -o 1.tpr


checking input for internal consistency...
processing topology...
Segmentation fault


Could any one tell what is the problem?



Not without a full debugging back trace.  Seg faults are generic memory errors; 
there's nothing at all that can be diagnosed from this message alone.
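
If the build has debug symbols (e.g. a Debug build), a back trace can be obtained
along these lines (a sketch; adjust to your actual command):

gdb --args grompp -f 600.mdp -c cu.gro -p cu_btc_1.top -o 1.tpr
(gdb) run
(gdb) bt

Posting that output makes the crash much easier to diagnose.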


-Justin



Re: [gmx-users] Problem with constraints in NVT calculations.

2014-10-14 Thread Kester Wong
Hi Justin and all,

>
> > > Meanwhile, is it possible to implement a self-consistent FF from scratch? One
> > > example I came across is from the work by Ho and Striolo
> > >
> > > titled: Polarizability effects in molecular dynamics simulations of the
> > > graphene-water interface
> > >
> >
> > Of course you can implement whatever you like.  Gromacs has been able to carry
> > out polarizable simulations for a very long time; I've only ever cautioned
> > against abuse of certain models.
> >
> >
> > I guess that GROMACS is capable in running polarisable sims, but for the Drude
> > polarisable calcs, they are prone to polarisation catastrophe due to the
> > massless shells and thermostat instability?
>
> Polarization catastrophe is possible in any polarizable simulation.  Usually
> very small time steps are required to avoid this, unless using an anharmonic
> potential or a hard wall restraint.
>
>
> Using Morse = yes for the anharmonic potential option, whereas using the
> parameters below for the hard wall restraint option?
>
> pbc = xy
> nwall = 2
> wall-atomtype =; optional
> wall-type = 12-6
> wall-r-linpot = 1  ; having a positive val. is esp. useful in equil. run
> wall-density  = 5 5
> wall-ewald-zfac = 3
>

No.  I'm not suggesting a Morse potential.  What I was referring to was an 
anharmonic function for the bonds, which is present in Gromacs but I'm not sure 
if it's documented.  The wall settings in Gromacs have nothing to do with this. 
  Such a function is not present in Gromacs (yet).
Although the wall settings have nothing to do with polarisation catastrophe, I
guess it might be useful in the following case: I have been using a time step of
1 fs, which is small already, yet the water droplet (on graphene) quickly fills
up the vacuum of ~5-6 nm along the z-direction. I will try using the wall setup
as above, hoping that water remains a droplet with the presence of H3O and Cl
ions. Could you please explain what is the difference between the three types of
wall; 9-3, 10-4, and 12-6?

The only part of the GROMACS 5.0 manual that described anharmonic bond potential
is in the Morse potential section 4.2.2.

Which function is not available in GROMACS yet?

> > In the paper mentioned above, the authors have carried out three types of cals:
> > i) SPC/E on non-pol graphene
> > ii) SWM4-DP on non-pol graphene: graphene in neutral or charged states
> > iii) SWM4-DP on graphene-DP (one Drude particle per C-atom with opposite
> > charge): graphene-DP in neutral or charged states
> >
> > They seemed to have simulated their systems using both additive and polarisable
> > (0.878 angstrom^3) models?
> > I guess this is where I got confused.
>
> I suppose you can make any model work if you parametrize it a certain way, but
> my point in the previous message is that you shouldn't go off trying to build a
> force field that has SWM4-NDP water around additive CHARMM solutes.
>
>
> Yep, now I understand it.
> If I wanted to also describe graphene, is it possible to include carbon
> parameters in the SWM4-NDP force field then?
>

Well, strictly speaking, you're not introducing graphene into a SWM4-NDP force 
field, you're creating a force field that describes both.  This can certainly be 
done if you have all the parameters.
That is great! To create a FF that describes the SWM4 water, NDP ions, and
graphene carbon (CA); I will have to include graphene.itp, the CA-CA bonded
parameters, and the LJ nonbonding interaction parameters, is that right?

> >
> > On the side: From my previous calcs using GRAPPA force field (TIPS3P water
> > model), graphene's polarisation (0.91 angstrom^3) resulted in spreading of water
> > into thin layer. But that was polarisable graphene in a rigid rod model (dummy
> > instead of shelltype particle).
> >
> > >
> > > Pardon me if this sounds outright wrong; regarding the massless Drude particle,
> > > can it be replaced with an atom (assuming an induced dipole model) instead of
> > > the charge-on-spring model? The mass of the atom can be set to 0.4 amu with an
> > > opposite charge of the water oxygen atom?
> > >
> >
> > In the Drude model with 0.4-amu particles, the Drudes are essentially just
> > atoms.  There's nothing conceptually special about them, we just handle them
> > slightly differently in the code.
> >
> >
> > Well since domain decomposition will not work on shelltype calcs, I am intrigued
> > to experiment if I can:
> > i) replace the Drudes to atom with the same configuration - opposite charge,
> > mass (0.4 amu), lengths, etc
> >
>
> The problem is that shells/Drudes have to be relaxed (SCF) or otherwise h

[gmx-users] Free energy calculation of

2014-10-14 Thread Batdorj Batsaikhan
Dear gmx users,

I am now computing the solvation free energy of a protein, following Sander
Pronk's tutorial downloaded from the GROMACS page.

1. How do I check whether the system is equilibrated?

2. I run following command

sh mklambdas.sh run.mdp topol.top equil.gro

I got following error:

mklambdas.sh: 12: mklambdas.sh: Syntax error: "(" unexpected



How can I fix this?
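
Could it be that the script uses bash-specific syntax and should not be run with
plain sh? For example (assuming the script is indeed written for bash):

bash mklambdas.sh run.mdp topol.top equil.gro

or make it executable (chmod +x mklambdas.sh) and run ./mklambdas.sh so its own
interpreter line is used.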


Best regards,

Batsaikhan