Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-28 Thread Nizar Masbukhin
Thank you very much, Justin and Mark.

On Tue, Oct 28, 2014 at 2:31 AM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 On Mon, Oct 27, 2014 at 6:05 PM, Nizar Masbukhin nizar.fku...@gmail.com
 wrote:

  I don't really understand the point. Could you please explain what you meant
  in the last reply? What command should I use?

  Say I have 72 cores on 9 nodes, and 16 replicas to simulate in implicit
  solvent.


 Hi,

 You can only use two MPI ranks per replica if there's a limit of two ranks
 per simulation. So that's 32 ranks total. So something like

 mpirun -np 32 mdrun_mpi -multidir your-16-directories -replex whatever

 after setting up the MPI environment to fill four nodes.

 Mark




-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-28 Thread Nizar Masbukhin
Now, the only thing worrying me is the warning "Turning off pressure coupling
for vacuum system" during NPT equilibration. Can I just ignore this warning,
or should I do something? I did not intend my system to be in vacuum.


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-28 Thread Justin Lemkul



On 10/28/14 7:16 AM, Nizar Masbukhin wrote:

Now, the only thing worrying me is the warning "Turning off pressure coupling
for vacuum system" during NPT equilibration. Can I just ignore this warning,
or should I do something? I did not intend my system to be in vacuum.



You can't do NPT with pbc = no, as your earlier .mdp files showed, so nothing 
can be scaled.
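
A minimal self-consistent sketch for that stage, reusing values from the
earlier .mdp files (pressure coupling off, since a non-periodic system has no
box to scale; temperature coupling kept):

pbc     = no
pcoupl  = no          ; nothing to scale without a box
tcoupl  = v-rescale
tc-grps = system
tau-t   = 0.1
ref-t   = 298.00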


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Nizar Masbukhin
And how do I use those 2 cores? I think that would double performance, as I
am currently running 1 core per replica.

On Mon, Oct 27, 2014 at 7:15 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/26/14 9:55 AM, Nizar Masbukhin wrote:

  Regarding gaining speed in implicit solvent simulation, I have tried to
  parallelize using the -ntmpi flag. However, GROMACS doesn't allow it, as I
  use the group cutoff-scheme. Any recommendation on how to parallelise an
  implicit solvent simulation? I do need to parallelise my simulation. I found
  the same question in this mailing list; one suggestion was to use the
  all-vs-all kernels, which use a zero cut-off.
  This is actually a test run; I intend to run my simulation on a cluster.


 Unless the restriction was lifted at some point, implicit simulations
 won't run on more than 2 cores.  There were issues with constraints that
 led to the limitation.

 -Justin



Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Nizar Masbukhin
I don't really understand the point. Could you please explain what you meant
in the last reply? What command should I use?

Say I have 72 cores on 9 nodes, and 16 replicas to simulate in implicit
solvent.


On 10/27/14 5:59 AM, Nizar Masbukhin wrote:

 And how do I use those 2 cores? I think that would double performance, as I
 am currently running 1 core per replica.


In the context of REMD, mdrun should figure this out if you issue the
command over 2N processors, where N is the number of replicas.

-Justin
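
For example, with N = 16 replicas and two ranks each, that could look like
this (a sketch; with -multi, mdrun appends the replica index to the input
name itself, so this assumes per-replica files named remd0.tpr ... remd15.tpr):

mpirun -np 32 mdrun_mpi -multi 16 -s remd.tpr -replex 1000

Here -replex sets the exchange-attempt interval in steps; 1000 is only an
illustrative value.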


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Mark Abraham
On Mon, Oct 27, 2014 at 6:05 PM, Nizar Masbukhin nizar.fku...@gmail.com
wrote:

 I don't really understand the point. Could you please explain what you meant
 in the last reply? What command should I use?

 Say I have 72 cores on 9 nodes, and 16 replicas to simulate in implicit
 solvent.


Hi,

You can only use two MPI ranks per replica if there's a limit of two ranks
per simulation. So that's 32 ranks total. So something like

mpirun -np 32 mdrun_mpi -multidir your-16-directories -replex whatever

after setting up the MPI environment to fill four nodes.
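
Concretely, that could look like the following (a sketch, assuming the 16
replica directories are named equil00 ... equil15, each containing its own
topol.tpr, and a hostfile listing the four nodes; the exchange interval of
1000 steps is only an illustrative value):

mpirun -np 32 -hostfile hosts mdrun_mpi -multidir equil{00..15} -replex 1000

The brace expansion equil{00..15} is just bash shorthand for listing the 16
directories explicitly.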

Mark


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-26 Thread Nizar Masbukhin
Thanks, Justin.
I have increased the cutoff, and yes, that works. There are no error messages
anymore. For the first 6 nanoseconds the simulation seemed to run more slowly;
curiously, it ran very fast for the rest of the time.


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-26 Thread Justin Lemkul



On 10/26/14 9:17 AM, Nizar Masbukhin wrote:

Thanks, Justin.
I have increased the cutoff, and yes, that works. There are no error messages
anymore. For the first 6 nanoseconds the simulation seemed to run more slowly;
curiously, it ran very fast for the rest of the time.



Longer cutoffs mean there are more interactions to calculate, but the cutoffs 
aren't to be toyed with arbitrarily to gain speed.  They are a critical element 
of the force field itself, though in implicit solvent, it is common to increase 
(and never decrease) the cutoff values used in explicit solvent.  Physical 
validity should trump speed any day.


-Justin



Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-26 Thread Nizar Masbukhin
Regarding gaining speed in implicit solvent simulation, I have tried to
parallelize using the -ntmpi flag. However, GROMACS doesn't allow it, as I use
the group cutoff-scheme. Any recommendation on how to parallelise an implicit
solvent simulation? I do need to parallelise my simulation. I found the same
question in this mailing list; one suggestion was to use the all-vs-all
kernels, which use a zero cut-off.
This is actually a test run; I intend to run my simulation on a cluster.


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-26 Thread Justin Lemkul



On 10/26/14 9:55 AM, Nizar Masbukhin wrote:

Regarding gaining speed in implicit solvent simulation, I have tried to
parallelize using the -ntmpi flag. However, GROMACS doesn't allow it, as I use
the group cutoff-scheme. Any recommendation on how to parallelise an implicit
solvent simulation? I do need to parallelise my simulation. I found the same
question in this mailing list; one suggestion was to use the all-vs-all
kernels, which use a zero cut-off.
This is actually a test run; I intend to run my simulation on a cluster.



Unless the restriction was lifted at some point, implicit simulations won't run 
on more than 2 cores.  There were issues with constraints that led to the 
limitation.
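
In practice that caps a single implicit-solvent run at something like (a
sketch; the file name is whatever your run uses):

mpirun -np 2 mdrun_mpi -deffnm remd_test

so the way to use more cores is to run many replicas side by side, two ranks
each, rather than to parallelise one replica further.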


-Justin



Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-24 Thread Nizar Masbukhin
Thanks for your reply, Mark.

At first I was sure that the problem was table-extension, because when I
enlarged the table-extension value the warning message didn't appear anymore.
Besides, I had successfully minimized and equilibrated the system (indicated
by Fmax < emtol being reached, and no error messages during NVT/NPT
equilibration, except a warning that pressure coupling is turned off in a
vacuum system).

However, the error message that appeared without the table-extension warning
also makes me doubt my system's stability. Here are my .mdp settings. Please
tell me if any settings are 'weird', and kindly suggest/recommend better ones.


*mdp file for Minimisation*

integrator = steep
nsteps = 5000
emtol = 200
emstep = 0.01
niter = 20
nstlog = 1
nstenergy = 1
cutoff-scheme = group
nstlist = 1
ns_type = simple
pbc = no
rlist = 0.5
coulombtype = cut-off
rcoulomb = 0.5
vdw-type = cut-off
rvdw-switch = 0.8
rvdw = 0.5
DispCorr = no
fourierspacing = 0.12
pme_order = 6
ewald_rtol = 1e-06
epsilon_surface = 0
optimize_fft = no
tcoupl = no
pcoupl = no
free_energy = yes
init_lambda = 0.0
delta_lambda = 0
foreign_lambda = 0.05
sc-alpha = 0.5
sc-power = 1.0
sc-sigma = 0.3
couple-lambda0 = vdw
couple-lambda1 = none
couple-intramol = no
nstdhdl = 10
gen_vel = no
constraints = none
constraint-algorithm = lincs
continuation = no
lincs-order = 12
implicit-solvent = GBSA
gb-algorithm = still
nstgbradii = 1
rgbradii = 0.5
gb-epsilon-solvent = 80
sa-algorithm = Ace-approximation
sa-surface-tension = 2.05


*mdp file for NVT equilibration*

define = -DPOSRES
integrator = md
tinit = 0
dt = 0.002
nsteps = 25
init-step = 0
comm-mode = angular
nstcomm = 100
bd-fric = 0
ld-seed = -1
nstxout = 1000
nstvout = 5
nstfout = 5
nstlog = 100
nstcalcenergy = 100
nstenergy = 1000
nstxtcout = 100
xtc-precision = 1000
xtc-grps = system
energygrps = system
cutoff-scheme = group
nstlist = 1
ns-type = simple
pbc = no
rlist = 0.5
coulombtype = cut-off
rcoulomb = 0.5
vdw-type = Cut-off
vdw-modifier = Potential-shift-Verlet
rvdw-switch = 0.8
rvdw = 0.5
table-extension = 500
fourierspacing = 0.12
fourier-nx = 0
fourier-ny = 0
fourier-nz = 0
implicit-solvent = GBSA
gb-algorithm = still
nstgbradii = 1
rgbradii = 0.5
gb-epsilon-solvent = 80
sa-algorithm = Ace-approximation
sa-surface-tension = 2.05
tcoupl = v-rescale
nsttcouple = -1
nh-chain-length = 10
print-nose-hoover-chain-variables = no
tc-grps = system
tau-t = 0.1
ref-t = 298.00
pcoupl = No
pcoupltype = Isotropic
nstpcouple = -1
tau-p = 1
refcoord-scaling = No
gen-vel = yes
gen-temp = 298.00
gen-seed = -1
constraints = all-bonds
constraint-algorithm = Lincs
continuation = no
Shake-SOR = no
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30


*mdp file for NPT equilibration*

define = -DPOSRES
integrator = md
tinit = 0
dt = 0.002
nsteps = 50
init-step = 0
simulation-part = 1
comm-mode = angular
nstcomm = 100
bd-fric = 0
ld-seed = -1
nstxout = 1000
nstvout = 50
nstfout = 50
nstlog = 100
nstcalcenergy = 100
nstenergy = 1000
nstxtcout = 100
xtc-precision = 1000
xtc-grps = system
energygrps = system
cutoff-scheme = group
nstlist = 1
ns-type = simple
pbc = no
rlist = 0.5
coulombtype = cut-off
rcoulomb = 0.5
vdw-type = Cut-off
vdw-modifier = Potential-shift-Verlet
rvdw-switch = 0.8
rvdw = 0.5
table-extension = 1
fourierspacing = 0.12
fourier-nx = 0
fourier-ny = 0
fourier-nz = 0
implicit-solvent = GBSA
gb-algorithm = still
nstgbradii = 1
rgbradii = 0.5
gb-epsilon-solvent = 80
sa-algorithm = Ace-approximation
sa-surface-tension = 2.05
tcoupl = Nose-Hoover
tc-grps = system
tau-t = 0.1
ref-t = 298.00
pcoupl = parrinello-rahman
pcoupltype = Isotropic
tau-p = 1.0
compressibility = 4.5e-5
ref-p = 1.0
refcoord-scaling = No
gen-vel = no
gen-temp = 298.00
gen-seed = -1
constraints = all-bonds
constraint-algorithm = Lincs
continuation = yes
Shake-SOR = no
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30


*mdp file for MD*

integrator = md
tinit = 0
dt = 0.001
nsteps = 5 ; 1 us
init-step = 0
simulation-part = 1
comm-mode = Angular
nstcomm = 100
comm-grps = system
bd-fric = 0
ld-seed = -1
nstxout = 1
nstvout = 0
nstfout = 0
nstlog = 1
nstcalcenergy = 1
nstenergy = 1
nstxtcout = 0
xtc-precision = 1000
xtc-grps = system
energygrps = system
cutoff-scheme = group
nstlist = 10
ns-type = simple
pbc = no
rlist = 0.5
coulombtype = cut-off
rcoulomb = 0.5
vdw-type = Cut-off
vdw-modifier = Potential-shift-Verlet
rvdw-switch = 0.8
rvdw = 0.5
DispCorr = No
table-extension = 500
fourierspacing = 0.12
fourier-nx = 0
fourier-ny = 0
fourier-nz = 0
implicit-solvent = GBSA
;implicit-solvent = GBSA
gb-algorithm = still
nstgbradii = 1
rgbradii = 0.5
gb-epsilon-solvent = 80


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-24 Thread Justin Lemkul



On 10/24/14 8:31 AM, Nizar Masbukhin wrote:

Thanks for your reply, Mark.

At first I was sure that the problem was table-extension, because when I
enlarged the table-extension value the warning message didn't appear anymore.
Besides, I had successfully minimized and equilibrated the system (indicated
by Fmax < emtol being reached, and no error messages during NVT/NPT
equilibration, except a warning that pressure coupling is turned off in a
vacuum system).

However, the error message that appeared without the table-extension warning
also makes me doubt my system's stability. Here are my .mdp settings. Please
tell me if any settings are 'weird', and kindly suggest/recommend better ones.



[gmx-users] error in the middle of running mdrun_mpi

2014-10-23 Thread Nizar Masbukhin
Dear GROMACS users,

I am trying to simulate protein folding using the REMD sampling method in
implicit solvent. I run my simulation with MPI-compiled GROMACS 5.0.2 on a
single node. I have successfully minimized and equilibrated (constrained NVT
and NPT) my system. However, in the middle of the mdrun_mpi run, the
following warning appeared:

starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
5 steps, 50.0 ps.
step 2873100, will finish Sat Nov  1 10:03:07 2014

WARNING: Listed nonbonded interaction between particles 192 and 197
at distance 16.773 which is larger than the table limit 10.500 nm.
This is likely either a 1,4 interaction, or a listed interaction inside
a smaller molecule you are decoupling during a free energy calculation.
Since interactions at distances beyond the table cannot be computed,
they are skipped until they are inside the table limit again. You will
only see this message once, even if it occurs for several interactions.

IMPORTANT: This should not happen in a stable simulation, so there is
probably something wrong with your system. Only change the table-extension
distance in the mdp file if you are really sure that is the reason.

[nizarPC:07548] *** Process received signal ***
[nizarPC:07548] Signal: Segmentation fault (11)
[nizarPC:07548] Signal code: Address not mapped (1)
[nizarPC:07548] Failing at address: 0x1ef8d90
[nizarPC:07548] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36c30) [0x7f610bc9fc30]
[nizarPC:07548] [ 1] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(nb_kernel_ElecGB_VdwLJ_GeomP1P1_F_avx_256_single+0x836) [0x7f610d3a2466]
[nizarPC:07548] [ 2] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_nonbonded+0x240) [0x7f610d235a30]
[nizarPC:07548] [ 3] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_lowlevel+0x1d3e) [0x7f610d97bebe]
[nizarPC:07548] [ 4] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_cutsGROUP+0x1510) [0x7f610d91bbe0]
[nizarPC:07548] [ 5] mdrun_mpi(do_md+0x57c1) [0x42e5e1]
[nizarPC:07548] [ 6] mdrun_mpi(mdrunner+0x12a1) [0x413af1]
[nizarPC:07548] [ 7] mdrun_mpi(_Z9gmx_mdruniPPc+0x18e5) [0x4337b5]
[nizarPC:07548] [ 8] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(_ZN3gmx24CommandLineModuleManager3runEiPPc+0x92) [0x7f610ce15a42]
[nizarPC:07548] [ 9] mdrun_mpi(main+0x7c) [0x40cb8c]
[nizarPC:07548] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f610bc8aec5]
[nizarPC:07548] [11] mdrun_mpi() [0x40ccce]
[nizarPC:07548] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 5 with PID 7548 on node nizarPC exited on
signal 11 (Segmentation fault).
I have increased table-extension to 500.00 (how large should this value be?),
and ran grompp and mdrun again. There was no warning message about
table-extension anymore; however, this error message appeared:

starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
starting mdrun 'Protein'
5 steps, 50.0 ps.
starting mdrun 'Protein'
5 steps, 50.0 ps.
5 steps, 50.0 ps.
step 4142800, will finish Sat Nov  1 10:35:55 2014

[nizarPC:09984] *** Process received signal ***
[nizarPC:09984] Signal: Segmentation fault (11)
[nizarPC:09984] Signal code: Address not mapped (1)
[nizarPC:09984] Failing at address: 0x1464040
[nizarPC:09984] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36c30) [0x7fa764b65c30]
[nizarPC:09984] [ 1] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(nb_kernel_ElecGB_VdwLJ_GeomP1P1_F_avx_256_single+0x85f) [0x7fa76626848f]
[nizarPC:09984] [ 2] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_nonbonded+0x240) [0x7fa7660fba30]
[nizarPC:09984] [ 3] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_lowlevel+0x1d3e) [0x7fa766841ebe]
[nizarPC:09984] [ 4] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_cutsGROUP+0x1510) [0x7fa7667e1be0]
[nizarPC:09984] [ 5] mdrun_mpi(do_md+0x57c1) [0x42e5e1]
[nizarPC:09984] [ 6] mdrun_mpi(mdrunner+0x12a1) [0x413af1]
[nizarPC:09984] [ 7] mdrun_mpi(_Z9gmx_mdruniPPc+0x18e5) [0x4337b5]
[nizarPC:09984] [ 8] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(_ZN3gmx24CommandLineModuleManager3runEiPPc+0x92) [0x7fa765cdba42]
[nizarPC:09984] [ 9] mdrun_mpi(main+0x7c) [0x40cb8c]
[nizarPC:09984] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fa764b50ec5]
[nizarPC:09984] [11] mdrun_mpi()

Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-23 Thread Mark Abraham
Hi,

The warning message told you not to increase the table distance unless you
were sure the table distance was the problem. Why were you sure the table
distance was the problem, rather than some form of general instability of
your system? In addition to all the usual reasons for
http://www.gromacs.org/Documentation/Terminology/Blowing_Up, the GB kernels
are completely untested, so you might try running with 4.5.7 (last version
known to be probably-good for GB) to see whether the problem is in the code
or your setup.
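
For instance, one quick check for creeping instability before the crash (a
sketch; the .edr file name is whatever your run produced):

gmx energy -f remd0.edr -o energy.xvg

Select terms such as Potential and Temperature and look for drift or spikes
in the steps leading up to the failing step.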

Mark
