[gmx-users] implicit solvent error

2018-06-24 Thread Chhaya Singh
Hi,

I am trying to simulate a protein in implicit solvent in GROMACS using
the Amber ff99SB-ILDN force field.
The mdp file that I am using is shown below:


integrator      =  md
dt              =  0.001 ;0.005; ps !
nsteps          =  500 ; total 5 ns.

nstlog          = 5000
nstxout         = 0  ;1000
nstvout         = 0  ;1000
nstfout         = 0  ;1000
nstxtcout       = 5000
nstenergy       = 5000

nstlist         =  10

cutoff-scheme   = group

rlist           =  5
rvdw            =  5
rcoulomb        =  5
coulombtype     = cut-off
vdwtype         = cut-off
bd_fric         =  0
;ld_seed        =  -1
pbc             =  no
ns_type         =  grid  ; simple => gives domain decomposition error
constraints     = all-bonds
lincs_order     = 4
lincs_iter      = 1
lincs-warnangle = 30

Tcoupl          = v-rescale
tau_t           = 1.0
tc-grps         = Protein
ref_t           = 310



This is the mdp file that I am using for the equilibration and production runs;
if there is anything I can fix in the mdp file, please let me know.
I am getting very low speed with implicit solvent in GROMACS.
Is there any way to increase the speed?
The speed I am getting right now is 0.47-0.74 ns/day on one node.
Please help.


Re: [gmx-users] implicit solvent error

2018-06-24 Thread Alex
This input contains no implicit-solvent settings, and a simple Google
search immediately yields mdp examples using GBSA. As far as performance
is concerned, we don't know the specs of your machine or the size of
your system. With cut-off electrostatics and a 5 nm cut-off, you can
expect the cost to grow considerably faster than linearly with system size.


Alex


On 6/24/2018 1:04 AM, Chhaya Singh wrote:

Hi,

I am trying to simulate a protein in an implicit solvent in groamcs using
amber ff99sb ildn .
the mdpfile that I am using is I have shown below:


integrator  =  md
dt  =  0.001 ;0.005; ps !
nsteps  =  500 ; total 5 ns.

nstlog  = 5000
nstxout = 0  ;1000
nstvout = 0  ;1000
nstfout = 0  ;1000
nstxtcout   = 5000
nstenergy   = 5000

nstlist =  10

cutoff-scheme   = group

rlist   =  5
rvdw=  5
rcoulomb=  5
coulombtype = cut-off
vdwtype = cut-off
bd_fric =  0
;ld_seed =  -1
pbc =  no
ns_type =  grid  ;simple => gives domain decomposition error
constraints = all-bonds
lincs_order = 4
lincs_iter  = 1
lincs-warnangle = 30

Tcoupl  = v-rescale
tau_t   = 1.0
tc-grps = Protein
ref_t   = 310



This is the mdp file that I am using for equilibration and production run,
if there is anything that I can fix in mdp file please let me know.
I am getting very less speed using an implicit solvent in gromacs.
is there any way to increase the speed.
the speed right now I am getting is 0.47- 0.74 ns /day using one node.
please help.




Re: [gmx-users] implicit solvent error

2018-06-24 Thread Chhaya Singh
Hi,
I have attached the energy minimization mdp file.
Please look through it.



cpp                   =  /lib/cpp  ; preprocessor of the current machine
define                =  -DFLEXIBLE  ; -DPOSRES, -DPOSRES_IONS ;DFLEX_SPC; FLEXible SPC and POSition REStraints

integrator            =  steep     ; steepest descent algorithm
dt                    =  0.005     ; time step in ps
nsteps                =  5000      ; number of steps

emtol                 =  100       ; convergence criterion
emstep                =  0.05      ; initial step size
constraints           =  none
constraint-algorithm  =  lincs
unconstrained-start   =  no        ; do not constrain the start configuration
;shake_tol            =  0.0001
nstlist               =  0         ; step frequency for updating neighbour list
ns_type               =  simple    ; grid ; method for neighbour searching (?)
nstxout               =  100       ; frequency for writing coords to output
nstvout               =  100       ; frequency for writing velocities to output
nstfout               =  0         ; frequency for writing forces to output
nstlog                =  100       ; frequency for writing energies to log file
nstenergy             =  100       ; frequency for writing energies to energy file
nstxtcout             =  0         ; frequency for writing coords to xtc traj
xtc_grps              =  system    ; group(s) whose coords are to be written in xtc traj
energygrps            =  system    ; group(s) whose energy is to be written in energy file
pbc                   =  no        ; use pbc
rlist                 =  1.4       ; cutoff (nm)
coulombtype           =  cutoff    ; truncation for minimisation, with large cutoff
rcoulomb              =  1.4
vdwtype               =  cut-off   ; truncation for minimisation, with large cutoff
rvdw                  =  1.4
nstcomm               =  0         ; number of steps for centre of mass motion removal (in vacuo only!)
Tcoupl                =  no
Pcoupl                =  no
"min-implicit.mdp" 40L,
2616C
1,1   Top



The system I am running on is requested via PBS as follows:
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -l walltime=24:00:00



On 24 June 2018 at 13:00, Alex  wrote:

> This input has no information about implicit solvent and a simple google
> search immediately yields mdp examples using gbsa. As far as performance is
> concerned, we don't know the specs of your machine or the size of your
> system. With cutoff electrostatics and a cutoff of 5 nm, one can expect
> quite a bit of scaling with system size beyond linear.
>
> Alex
>
>
>
> On 6/24/2018 1:04 AM, Chhaya Singh wrote:
>
>> Hi,
>>
>> I am trying to simulate a protein in an implicit solvent in groamcs using
>> amber ff99sb ildn .
>> the mdpfile that I am using is I have shown below:
>>
>>
>> integrator  =  md
>> dt  =  0.001 ;0.005; ps !
>> nsteps  =  500 ; total 5 ns.
>>
>> nstlog  = 5000
>> nstxout = 0  ;1000
>> nstvout = 0  ;1000
>> nstfout = 0  ;1000
>> nstxtcout   = 5000
>> nstenergy   = 5000
>>
>> nstlist =  10
>>
>> cutoff-scheme   = group
>>
>> rlist   =  5
>> rvdw=  5
>> rcoulomb=  5
>> coulombtype = cut-off
>> vdwtype = cut-off
>> bd_fric =  0
>> ;ld_seed =  -1
>> pbc =  no
>> ns_type =  grid  ;simple => gives domain decomposition error
>> constraints = all-bonds
>> lincs_order = 4
>> lincs_iter  = 1
>> lincs-warnangle = 30
>>
>> Tcoupl  = v-rescale
>> tau_t   = 1.0
>> tc-grps = Protein
>> ref_t   = 310
>>
>>
>>
>> This is the mdp file that I am using for equilibration and production run,
>> if there is anything that I can fix in mdp file please let me know.
>> I am getting very less speed using an implicit solvent in gromacs.
>> is there any way to increase the speed.
>> the speed right now I am getting is 0.47- 0.74 ns /day using one node.
>> please help.
>>
>

Re: [gmx-users] implicit solvent error

2018-06-24 Thread Alex
Your EM setup is unrelated to the dynamics performance. I mean, it could be
related, but we don't know anything about your simulated system. I am of
course assuming that your MPI setup is optimal for gmx and that you actually
get to use those 16 threads, assuming those aren't an emulation of some sort.
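As a quick sanity check (a sketch; "md.log" here stands for whatever log name
your run actually produced), the rank and thread counts mdrun really used are
reported near the top of the log and can be pulled out with, e.g.:

grep -i "Using.*MPI" md.log
grep -i "OpenMP thread" md.log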



On 6/24/2018 1:33 AM, Chhaya Singh wrote:

Hi,
I have attached the energy minimization mdp file.
please look through it .



cpp =
/lib/cpp ; prepocessor of
the current machine
define  =
-DFLEXIBLE   ; -DPOSRES,
-DPOSRES_IONS ;DFLEX_SPC; FLEXible SPC and POSition REStraints

integrator  =
steep; steepest descent
algorithm
dt  =
0.005; time step in ps
nsteps  =
5000 ; number of steps

emtol   =
100  ; convergence
criterion
emstep  =
0.05 ; intial step size
constraints   = none
constraint-algorithm  = lincs
unconstrained-start   =
no  ; Do not constrain
the start configuration
;shake_tol   = 0.0001
nstlist =
0; step frequency
for updating neighbour list
ns_type =
simple   ; grid ; method
for nighbour searching (?)
nstxout =
100  ; frequency for
writing coords to output
nstvout =
100  ; frequency for
writing velocities to output
nstfout =  0; frequency for writing forces to output
nstlog  =  100; frequency for writing energies to log file
nstenergy   =  100  ; frequency for writing energies to energy file
nstxtcout   =  0; frequency for writing coords to xtc traj
xtc_grps=  system ; group(s) whose coords are to be written in
xtc traj
energygrps  =  system ; group(s) whose energy is to be written in
energy file
pbc =  no; use pbc
rlist   =  1.4; cutoff (nm)
coulombtype =  cutoff ; truncation for minimisation, with large
cutoff
rcoulomb=  1.4
vdwtype =  cut-off  ; truncation for minimisation, with large
cutoff
rvdw=  1.4
nstcomm =  0  ; number of steps for centre of mass motion
removal (in vacuo only!)
Tcoupl  =  no
Pcoupl  =  no
"min-implicit.mdp" 40L,
2616C
1,1   Top



the system I am using has the following information:
PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -l walltime=24:00:00



On 24 June 2018 at 13:00, Alex  wrote:


This input has no information about implicit solvent and a simple google
search immediately yields mdp examples using gbsa. As far as performance is
concerned, we don't know the specs of your machine or the size of your
system. With cutoff electrostatics and a cutoff of 5 nm, one can expect
quite a bit of scaling with system size beyond linear.

Alex



On 6/24/2018 1:04 AM, Chhaya Singh wrote:


Hi,

I am trying to simulate a protein in an implicit solvent in groamcs using
amber ff99sb ildn .
the mdpfile that I am using is I have shown below:


integrator  =  md
dt  =  0.001 ;0.005; ps !
nsteps  =  500 ; total 5 ns.

nstlog  = 5000
nstxout = 0  ;1000
nstvout = 0  ;1000
nstfout = 0  ;1000
nstxtcout   = 5000
nstenergy   = 5000

nstlist =  10

cutoff-scheme   = group

rlist   =  5
rvdw=  5
rcoulomb=  5
coulombtype = cut-off
vdwtype = cut-off
bd_fric =  0
;ld_seed =  -1
pbc =  no
ns_type =  grid  ;simple => gives domain decomposition error
constraints = all-bonds
lincs_order = 4
lincs_iter  = 1
lincs-warnangle = 30

Tcoupl  = v-rescale
tau_t   = 1.0
tc-grps = Protein
ref_t   = 310



This is the mdp file that I am using for equilibration and production run,
if there is anything that I can fix in mdp file please let me know.
I am getting very less speed using an implicit solvent in gromacs.
is there any way to increase the speed.
the speed right now I am getting is 0.47- 0.74 ns /day using one node.
please help.



Re: [gmx-users] increasing md run speed

2018-06-24 Thread Mark Abraham
Hi,

You might be, but you need to look at the bottom of the log files for the
performance feedback. What is common and what is different between the runs?
Mark
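
For instance (a sketch; the log file names here are placeholders for whatever
your runs produced via -deffnm or -g), the performance summary printed at the
end of each log can be compared side by side with:

tail -n 30 run1.log
tail -n 30 run2.log
grep "Performance:" run1.log run2.log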

On Sun, Jun 24, 2018 at 6:23 AM neelam wafa  wrote:

> Hi, I have run a 100 ps simulation of the same protein with a different
> ligand, and it is producing 0.445 ns/day (53.988 hr/ns),
> while it was 25 hr/ns for the first simulation of the same protein. Am I
> doing something wrong?
> I have been using the same md.mdp file for all the simulations.
>
> Regards
>
> On Sat, Jun 23, 2018 at 7:07 AM, Mark Abraham 
> wrote:
>
> > Hi,
> >
> > That's an energy minimisation output file. It's too short to get a
> > meaningful estimate of performance, and will anyway not be representative
> > of what the simulation will achieve. More recent versions of GROMACS
> won't
> > even report on performance for them.
> >
> > Mark
> >
> > On Sat, Jun 23, 2018, 04:56 neelam wafa  wrote:
> >
> > > Dear gmx users!
> > >
> > > I am running MD simulations of a protein with different ligands, but the
> > > speed is decreasing with every simulation. In the first one it was 25 hr/ns,
> > > for the second one it became 35 hr/ns, then 36 hr/ns. What can be the reason?
> > > I am using this command for the run.
> > > How do I select the value of x if I use this command to increase the speed?
> > > Following are the details of the cores used and the hardware I am using.
> > >
> > > Running on 1 node with total 2 cores, 4 logical cores
> > > Hardware detected:
> > >   CPU info:
> > > Vendor: GenuineIntel
> > > Brand:  Intel(R) Core(TM) i3-2370M CPU @ 2.40GHz
> > > SIMD instructions most likely to fit this hardware: AVX_256
> > > SIMD instructions selected at GROMACS compile time: AVX_256
> > >
> > > Reading file em.tpr, VERSION 5.1.5 (single precision)
> > > Using 1 MPI thread
> > > Using 4 OpenMP threads
> > >
> > >
> > > Looking forward for your help and cooperation.
> > > Regards


Re: [gmx-users] implicit solvent error

2018-06-24 Thread Chhaya Singh
Sorry, I attached the wrong file.
These are my inputs for the implicit solvent:



integrator      =  md
dt              =  0.005 ; ps !
nsteps          =  2000 ; total 10 ns.

nstlog          = 5000
nstxout         = 0  ;1000
nstvout         = 0  ;1000
nstfout         = 0  ;1000
nstxtcout       = 5000
nstenergy       = 5000

nstlist         =  10

cutoff-scheme   = group

rlist           =  5
rvdw            =  5
rcoulomb        =  5
coulombtype     = cut-off
vdwtype         = cut-off
bd_fric         =  0
;ld_seed        =  -1
pbc             =  no
ns_type         =  simple
constraints     = all-bonds
lincs_order     = 4
lincs_iter      = 1
lincs-warnangle = 30

Tcoupl          = v-rescale
tau_t           = 1.0
tc-grps         =  Protein
ref_t           =  300

Pcoupl          =  no

gen_vel         =  yes
gen_temp        =  300
gen_seed        =  173529
;http://www.mail-archive.com/gmx-users@gromacs.org/msg20866.html

;mine addition
comm_mode       = angular
nstcomm         = 10

; IMPLICIT SOLVENT ALGORITHM
implicit_solvent = gbsa

; GENERALIZED BORN ELECTROSTATICS
; Algorithm for calculating Born radii
gb_algorithm = OBC  ;Still
; Frequency of calculating the Born radii inside rlist
nstgbradii   = 1
; Cutoff for Born radii calculation; the contribution from atoms
; between rlist and rgbradii is updated every nstlist steps
rgbradii = 5
; Dielectric coefficient of the implicit solvent
gb_epsilon_solvent   = 80
; Salt concentration in M for Generalized Born models
gb_saltconc  = 0
; Scaling factors used in the OBC GB model. Default values are OBC(II)
gb_obc_alpha = 1
gb_obc_beta  = 0.8
gb_obc_gamma = 4.85
gb_dielectric_offset = 0.009
sa_algorithm = Ace-approximation
; Surface tension (kJ/mol/nm^2) for the SA (nonpolar surface) part of GBSA
; The value -1 will set default value for Still/HCT/OBC GB-models.
sa_surface_tension   = -1





On 24 June 2018 at 13:17, Alex  wrote:

> Your EM is unrelated to dynamics. I mean, it could, but we don't know
> anything about your simulated system. I am of course assuming that your MPI
> setup is optimal for gmx and you actually get to use those 16 threads,
> assuming those aren't an emulation of some sort.
>
>
>
> On 6/24/2018 1:33 AM, Chhaya Singh wrote:
>
>> Hi,
>> I have attached the energy minimization mdp file.
>> please look through it .
>>
>>
>>
>> cpp =
>> /lib/cpp ; prepocessor of
>> the current machine
>> define  =
>> -DFLEXIBLE   ; -DPOSRES,
>> -DPOSRES_IONS ;DFLEX_SPC; FLEXible SPC and POSition REStraints
>>
>> integrator  =
>> steep; steepest
>> descent
>> algorithm
>> dt  =
>> 0.005; time step in ps
>> nsteps  =
>> 5000 ; number of steps
>>
>> emtol   =
>> 100  ; convergence
>> criterion
>> emstep  =
>> 0.05 ; intial step
>> size
>> constraints   = none
>> constraint-algorithm  = lincs
>> unconstrained-start   =
>> no  ; Do not constrain
>> the start configuration
>> ;shake_tol   = 0.0001
>> nstlist =
>> 0; step frequency
>> for updating neighbour list
>> ns_type =
>> simple   ; grid ; method
>> for nighbour searching (?)
>> nstxout =
>> 100  ; frequency for
>> writing coords to output
>> nstvout =
>> 100  ; frequency for
>> writing velocities to output
>> nstfout =  0; frequency for writing forces to output
>> nstlog  =  100; frequency for writing energies to log file
>> nstenergy   =  100  ; frequency for writing energies to energy
>> file
>> nstxtcout   =  0; frequency for writing coords to xtc traj
>> xtc_grps=  system ; group(s) whose coords are to be written in
>> xtc traj
>> energygrps  =  system ; group(s) whose energy is to be written in
>> energy file
>> pbc =  no; use pbc
>> rlist   =  1.4; cutoff (nm)
>> coulombtype =  cutoff ; truncation for minimisation, with large
>> cutoff
>> rcoulomb=  1.4
>> vdwtype =  cut-off  ; truncation for minimisation, with large
>> cutoff

Re: [gmx-users] Implicit solvent

2018-06-24 Thread Mark Abraham
Hi,

Implicit solvation works with multiple threads in GROMACS 4.5.x, but has
been broken ever since (and will be removed in GROMACS 2019). So I suggest
you use 4.5.7, or some other software (e.g. AMBER)

Mark

On Sun, Jun 24, 2018 at 4:00 AM Chhaya Singh 
wrote:

> I am running implicit solvent simulations in GROMACS and I am getting very
> low speed, about 300 ps per day. Is there any way to increase the speed?


Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-24 Thread Own 12121325
update:

I am trying the approach that you suggested:

# the first step
gmx convert-tpr -s old.tpr -o new.tpr -extend 5
mdrun_mpi  -s new.tpr -deffnm old -cpi old -append

Here there is an issue:

Each time, all pieces of the run are appended to "old" (.trr, .edr and
.cpt). But since I always use -deffnm old, the checkpoint produced at the
end of the simulation will always have the same name as the input
checkpoint (provided at the beginning), so at the end of each run I have to
rename the checkpoint file from old.cpt to new.cpt, and the next step becomes

# the second step
gmx convert-tpr -s new.tpr -o new2.tpr -extend 5
mdrun_mpi  -s new2.tpr -deffnm old -cpi new -append
mv new.cpt new2.cpt

# the third step
gmx convert-tpr -s new2.tpr -o new3.tpr -extend 5
mdrun_mpi  -s new3.tpr -deffnm old -cpi new2 -append
mv new2.cpt new3.cpt

etc.

So the trajectory and edr files are always the same, but the checkpoint
files get updated (in case I need to go back).


Actually it may produce some mismatches between the files...
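
For what it is worth, one way to avoid the renaming step (a sketch, assuming
it is acceptable to keep the single base name "old" throughout, as suggested
below; the tpr names are placeholders and the -extend value is in ps):

# first extension cycle; mdrun keeps writing old.cpt in place, so no mv is needed
gmx convert-tpr -s old.tpr -o old_ext1.tpr -extend 50000
mdrun_mpi -s old_ext1.tpr -deffnm old -cpi old -append

# next cycle extends the latest tpr and reuses the same checkpoint name
gmx convert-tpr -s old_ext1.tpr -o old_ext2.tpr -extend 50000
mdrun_mpi -s old_ext2.tpr -deffnm old -cpi old -append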



2018-06-22 16:22 GMT+02:00 Own 12121325 :

> Thanks Mark!
>
> Assuming that I want to obtain separate files for each step, I need just
> one command:
>
> mdrun -v -deffnm step7_1 -cpi step7_1.cpt -noappend
> which should each time create step7_1.part0002 etc.
>
> But in case I want to set the names of the pieces manually (it sounds
> crazy, but in fact I need to follow this way!), does the method with gmx
> convert-tpr in principle produce the separate pieces correctly?
>
> gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
> mdrun -v -deffnm step7_2 -cpi step7_1.cpt
>
> In earlier versions I did it the same way but without the cpt file, and it
> worked well:
>
> gmx convert-tpr -s step7_1.tpr -trr step7_1 -edr step7_1 -o step7_2.tpr -extend 5
> mdrun -v -deffnm step7_2
>
>
>
> 2018-06-22 15:02 GMT+02:00 Mark Abraham :
>
>> Hi,
>>
>> There are some differences in recent GROMACS versions here (because the
>> old
>> implementations were not robust enough), but the checkpoint restart will
>> not work with appending unless it finds the output files named in the .cpt
>> match those on the command line (here, from -deffnm). You're making extra
>> work for yourself in several ways.
>>
>> I encourage you to not use -deffnm with a new name that merely signifies
>> that the extension happened. There's no physical and no real
>> organizational
>> reason to do this.
>>
>> If you want numbered output files for each step, then start your
>> simulations with -noappend and let mdrun number them automatically. But
>> IMO
>> all that does is make work for you later, concatenating the files again.
>>
>> If you want appending to work after extending to the number of steps, use
>> -s new.tpr -deffnm old rather than -deffnm new, because the former doesn't
>> create name mismatches between those output files that the checkpoint
>> remembers and those you've instructed mdrun to use now.
>>
>> And if your reason for using -deffnm is that you want to have multiple
>> simulation steps in the same directory, bear in mind that using a single
>> directory to contain a single step is much more robust (you are using the
>> standard way of grouping related files, called a directory, and using cd
>> is
>> not any more difficult than -deffnm), and you can just use the default
>> file
>> naming:
>>
>> (cd step7; mpirun -np whatever gmx_mpi mdrun -s extended)
>>
>> Mark
>>
>> On Fri, Jun 22, 2018 at 11:07 AM Own 12121325 
>> wrote:
>>
>> > Thanks Mark!
>> >
>> > Could you please also confirm that my method of prolonging the
>> > simulation is correct:
>> >
>> > # extend the simulation by 50 ns and save these pieces as separate files
>> > with the name step7_2*
>> > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
>> > mdrun -v -deffnm step7_2 -cpi step7_1.cpt
>> >
>> > 2018-06-22 10:57 GMT+02:00 Mark Abraham :
>> >
>> > > Hi,
>> > >
>> > > The previous checkpoint has the _prev suffix, in case there is a problem
>> > > that might require you to go further back in time.
>> > >
>> > > Mark
>> > >
>> > > On Fri, Jun 22, 2018, 10:46 Own 12121325  wrote:
>> > >
>> > > > P.S. What is the difference between name.cpt and name_prev.cpt produced
>> > > > by mdrun? Which checkpoint corresponds to the last snapshot in the trr
>> > > > file?
>> > > >
>> > > > 2018-06-22 10:17 GMT+02:00 Own 12121325 :
>> > > >
>> > > > > In fact there is an alternative trick :-)
>> > > > > If I rename a tpr file via gmx convert-tpr and then run mdrun using
>> > > > > this new tpr as well as the previous checkpoint, it will produce all
>> > > > > pieces of the trajectory in a separate file:
>> > > > >
>> > > > > gmx convert-tpr -s step7_1.tpr -o step7_2.tpr -extend 5
>> > > > > mpirun -np ${NB_TASKS} mdrun -v -deffnm step7_2 -cpi step7_1.cpt
>> > > > >
>> > > > > If I add the -noappend flag to mdrun, it also does the same job but
>> > > > > also
>> >

[gmx-users] Distance vector file for g_analyze: incompatibility of normal and vector

2018-06-24 Thread Apramita Chand
Dear All,
I want to generate the autocorrelation function of the end-to-end distance
vector of my peptide, and using g_dist I have generated a dist.xvg file.
On giving this file as input to g_analyze with the option -P 1 (first
Legendre polynomial), it gives the error:
"Incompatible mode bits: normal and vector (or Legendre)"

How do I generate a file that g_analyze can use to compute the P1 (Legendre)
autocorrelation function of the end-to-end distance vector?

Regards,
Apramita
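
One possible workaround (a sketch, assuming the dist.xvg written by g_dist has
the columns time, |d|, dx, dy, dz - check the legend in the file): strip the
scalar |d| column so that g_analyze only sees the three vector components, e.g.

awk '!/^[@#]/ {print $1, $3, $4, $5}' dist.xvg > vec.xvg
g_analyze -f vec.xvg -ac -P 1

Here vec.xvg is just a placeholder name for the intermediate file.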


[gmx-users] Is the Amber14sb ff on the gromacs website actually amber14sb force field

2018-06-24 Thread Evelyne Deplazes
Hi gromacs users

When doing pdb2gmx with the Amber14sb download from the GROMACS force field
website (http://www.gromacs.org/Downloads/User_contributions/Force_fields), it
is labelled Amber99sb-ILDN and the reference matches the Amber99sb-ILDN force
field.
Is the file downloaded from the website the correct Amber14sb folder with
incorrect naming, or is it actually the Amber99sb-ILDN folder? We were also
unable to find any other source of the Amber14sb force field on the internet
to compare with or use.
thanks
Evelyne

Dr. Evelyne Deplazes
PhD (Computational Biophysics)
Research Fellow
School of Pharmacy and Biomedical Sciences
Secretary, Association of Molecular Modellers of Australasia
Committee Member, WA committee of the Australian Society for Medical Research

Curtin University
Tel | +61 8 9266 5685

Email | evelyne.depla...@curtin.edu.au
Web | www.curtin.edu.au

CRICOS Provider Code 00301J



Re: [gmx-users] Is the Amber14sb ff on the gromacs website actually amber14sb force field

2018-06-24 Thread Quyen V. Vu
Hi,



On Mon, Jun 25, 2018 at 8:20 AM Evelyne Deplazes <
evelyne.depla...@curtin.edu.au> wrote:

> Hi gromacs users
>
> When doing pdb2gmx using the Ambersb14 download from the gromacs force
> field website (
> http://www.gromacs.org/Downloads/User_contributions/Force_fields)  it is
> labelled Amber99sb-ILDN and the reference matches the amber99sb_ILDN force
> field.
> Is the file downloaded from the website the correct Amber14sb folder with
> incorrect naming or is it actually the amber99sb_ILDN folder.  We also were
> unable to find any other source of the amber14sb force field on the
> internet to compare with or use.
> thanks
> Evelyne
>

The force field that you downloaded is Amber14sb; the author forgot to
rename the directory.

I remember that there are also some missing parameters (for a dihedral
angle) in this force field; you can search this mailing list with the
keyword amber14sb to find the missing parameters.
Best,
Quyen



> Dr. Evelyne Deplazes
> PhD (Computational Biophysics)
> Research Fellow
> School of Pharmacy and Biomedical Sciences
> Secretary, Association of Molecular Modellers of Australasia
> Committee Member, WA committee of the Australian Society for Medical
> Research
>
> Curtin University
> Tel | +61 8 9266 5685
>
> Email | evelyne.depla...@curtin.edu.au
> Web | www.curtin.edu.au
>
> CRICOS Provider Code 00301J
>
>

[gmx-users] Regarding wall settings in CG MD

2018-06-24 Thread Mijiddorj B
Dear Users,

I would like to simulate a coarse-grained membrane system with two walls. I
used the related wall options as follows:

ewald_geometry  = 3dc
nwall           = 2
wall_type       = 12-6
wall_r_linpot   = -1
wall_atomtype   = C5 C5
wall_density    = 12 12
wall_ewald_zfac = 3
pbc             = xy

However, after minimization and a short equilibration, some atoms are lost.

Please advise me on the atom type, density, etc.

Bests,

Miji