[gmx-users] Domain Decomposition

2018-02-15 Thread Iman Ahmadabadi
Dear Gromacs Users,

In one job I always get the following domain decomposition error (with any
number of nodes), and I don't know what I should do. Do I have to use the
-dds or -rdd setting for my problem?

Sincerely
Iman

Initializing Domain Decomposition on 56 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 9.579 nm, LJ-14, atoms 1663 1728
  multi-body bonded interactions: 9.579 nm, Angle, atoms 1727 1728
Minimum cell size due to bonded interactions: 10.537 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.700 nm
Estimated maximum distance required for P-LINCS: 0.700 nm
Guess for relative PME load: 0.87
Using 0 separate PME nodes, as guessed by mdrun
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 56 cells with a minimum initial size of 13.171 nm
The maximum allowed number of cells is: X 0 Y 0 Z 0

---
Program mdrun, VERSION 4.6
Source code file: /share/apps/gromacs/gromacs-4.6/src/mdlib/domdec.c, line:
6767

Fatal error:
There is no domain decomposition for 56 nodes that is compatible with the
given box and a minimum cell size of 13.1712 nm
Change the number of nodes or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


[gmx-users] Domain decomposition

2016-07-26 Thread Alexander Alexander
Dear gromacs user,

For more than a week now I have been struggling with the fatal error due to
domain decomposition, and I have not succeeded yet. It is all the more
painful because I have to test different numbers of CPUs to see which one
works, on a cluster with a long queuing time, which means waiting two or
three days in the queue just to see the fatal error again after two minutes.

These are the dimensions of the cell: "3.53633, 4.17674, 4.99285", and below
is the log file of my test submitted on 2 nodes with 128 cores in total. I
even reduced to 32 CPUs and changed from "gmx_mpi mdrun" to "gmx mdrun", but
the problem persists.

Please do not refer me to this link (
http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
), as I know what the problem is but cannot solve it:


Thanks,

Regards,
Alex



Log file opened on Fri Jul 22 00:55:56 2016
Host: node074  pid: 12281  rank ID: 0  number of ranks:  64

GROMACS:  gmx mdrun, VERSION 5.1.2
Executable:
/home/fb_chem/chemsoft/lx24-amd64/gromacs-5.1.2-mpi/bin/gmx_mpi
Data prefix:  /home/fb_chem/chemsoft/lx24-amd64/gromacs-5.1.2-mpi
Command line:
  gmx_mpi mdrun -ntomp 1 -deffnm min1.6 -s min1.6

GROMACS version:VERSION 5.1.2
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
GPU support:disabled
OpenCL support: disabled
invsqrt routine:gmx_software_invsqrt(x)
SIMD instructions:  AVX_128_FMA
FFT library:fftw-3.2.1
RDTSCP usage:   enabled
C++11 compilation:  disabled
TNG support:enabled
Tracing support:disabled
Built on:   Thu Jun 23 14:17:43 CEST 2016
Built by:   reuter@marc2-h2 [CMAKE]
Build OS/arch:  Linux 2.6.32-642.el6.x86_64 x86_64
Build CPU vendor:   AuthenticAMD
Build CPU brand:AMD Opteron(TM) Processor 6276
Build CPU family:   21   Model: 1   Stepping: 2
Build CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2
sse3 sse4a sse4.1 sse4.2 ssse3 xop
C compiler: /usr/lib64/ccache/cc GNU 4.4.7
C compiler flags:-mavx -mfma4 -mxop-Wundef -Wextra
-Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
-Wno-unused -Wunused-value -Wunused-parameter  -O3 -DNDEBUG
-funroll-all-loops  -Wno-array-bounds

C++ compiler:   /usr/lib64/ccache/c++ GNU 4.4.7
C++ compiler flags:  -mavx -mfma4 -mxop-Wundef -Wextra
-Wno-missing-field-initializers -Wpointer-arith -Wall -Wno-unused-function
-O3 -DNDEBUG -funroll-all-loops  -Wno-array-bounds
Boost version:  1.55.0 (internal)


Running on 2 nodes with total 128 cores, 128 logical cores
  Cores per node:   64
  Logical cores per node:   64
Hardware detected on host node074 (the node of MPI rank 0):
  CPU info:
Vendor: AuthenticAMD
Brand:  AMD Opteron(TM) Processor 6276
Family: 21  model:  1  stepping:  2
CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2
sse3 sse4a sse4.1 sse4.2 ssse3 xop
SIMD instructions most likely to fit this hardware: AVX_128_FMA
SIMD instructions selected at GROMACS compile time: AVX_128_FMA
Initializing Domain Decomposition on 64 ranks
Dynamic load balancing: off
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 3.196 nm, LJC Pairs NB, atoms 24 28
  multi-body bonded interactions: 0.397 nm, Ryckaert-Bell., atoms 5 13
Minimum cell size due to bonded interactions: 3.516 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
Guess for relative PME load: 0.19
Will use 48 particle-particle and 16 PME only ranks
This is a guess, check the performance at the end of the log file
Using 16 separate PME ranks, as guessed by mdrun
Optimizing the DD grid for 48 cells with a minimum initial size of 3.516 nm
The maximum allowed number of cells is: X 1 Y 1 Z 1

---
Program gmx mdrun, VERSION 5.1.2
Source code file: /home/alex/gromacs-5.1.2/src/gromacs/domdec/domdec.cpp,
line: 6987

Fatal error:
There is no domain decomposition for 48 ranks that is compatible with the
given box and a minimum cell size of 3.51565 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

[gmx-users] Domain decomposition

2017-06-22 Thread Sergio Manzetti
Hi, I have (also) a system of one molecule in a water box of 3 3 3
dimensions. The procedure goes well all the way until the simulation starts,
at which point I get:

Will use 20 particle-particle and 4 PME only ranks 
This is a guess, check the performance at the end of the log file 

--- 
Program gmx mdrun, VERSION 5.1.2 
Source code file: 
/build/gromacs-z6bPBg/gromacs-5.1.2/src/gromacs/domdec/domdec.cpp, line: 6987 

Fatal error: 
There is no domain decomposition for 20 ranks that is compatible with the given 
box and a minimum cell size of 2.0777 nm 
Change the number of ranks or mdrun option -rcon or -dds or your LINCS settings 
Look in the log file for details on the domain decomposition 
For more information and tips for troubleshooting, please check the GROMACS 
website at http://www.gromacs.org/Documentation/Errors 
--- 


Not sure what to do here.


Sergio Manzetti 



[gmx-users] Domain Decomposition

2017-11-08 Thread Shraddha Parate
Dear Gromacs Users,

I was able to achieve a spherical water droplet without periodic boundary
conditions (PBC) by changing a few parameters in the .mdp files, as below:

*minim.mdp:*
; minim.mdp - used as input into grompp to generate em.tpr
; Parameters describing what to do, when to stop and what to save
integrator = steep ; Algorithm (steep = steepest descent minimization)
emtol = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep = 0.01 ; Energy step size
nsteps = 5 ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist = 0 ; Frequency to update the neighbor list and long range forces
cutoff-scheme   = Group
ns_type = simple ; Method to determine neighbor list (simple, grid)
rlist = 0.0 ; Cut-off for making neighbor list (short range forces)
coulombtype = Cut-off ; Treatment of long range electrostatic interactions
rcoulomb = 0.0 ; Short-range electrostatic cut-off
rvdw = 0.0 ; Short-range Van der Waals cut-off
pbc = no ; Periodic Boundary Conditions (yes/no)


*nvt.mdp:*
title = OPLS Lysozyme NVT equilibration
define = -DPOSRES ; position restrain the protein

; Run parameters
integrator = md ; leap-frog integrator
nsteps = 5 ; 2 * 5 = 100 ps
dt = 0.002 ; 2 fs

; Output control
nstxout = 100 ; save coordinates every 0.2 ps
nstvout = 100 ; save velocities every 0.2 ps
nstenergy = 100 ; save energies every 0.2 ps
nstlog = 100 ; update log file every 0.2 ps

; Bond parameters
continuation = no ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
comm_mode= ANGULAR
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy

; Neighborsearching
ns_type = simple ; simple neighbor search (no grid)
nstlist = 0 ; neighbor list constructed only once, never updated
rlist = 0.0 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0.0 ; short-range electrostatic cutoff (in nm)
rvdw = 0.0 ; short-range van der Waals cutoff (in nm)
verlet-buffer-drift = -1

; Electrostatics
cutoff-scheme   = Group
coulombtype = Cut-off ; plain cut-off electrostatics (no PME)
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT


; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K

; Pressure coupling is off
pcoupl = no ; no pressure coupling in NVT

; Periodic boundary conditions
pbc = no ; no periodic boundary conditions

; Dispersion correction
DispCorr = No ; no long-range dispersion correction

; Velocity generation
gen_vel = yes ; assign velocities from Maxwell distribution
gen_temp = 300 ; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed




*npt.mdp:*
title = OPLS Lysozyme NPT equilibration
define = -DPOSRES ; position restrain the protein

; Run parameters
integrator = md ; leap-frog integrator
nsteps = 5 ; 2 * 5 = 100 ps
dt = 0.002 ; 2 fs

; Output control
nstxout = 500 ; save coordinates every 1.0 ps
nstvout = 500 ; save velocities every 1.0 ps
nstenergy = 500 ; save energies every 1.0 ps
nstlog = 500 ; update log file every 1.0 ps

; Bond parameters
continuation = no ; apply constraints to the start configuration (not a continuation run)
constraint_algorithm= lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
comm_mode= ANGULAR
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy

; Neighborsearching
cutoff-scheme   = Group
ns_type = simple ; simple neighbor search (no grid)
nstlist = 0 ; neighbor list constructed only once, never updated
rlist = 0.0 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0.0 ; short-range electrostatic cutoff (in nm)
rvdw = 0.0 ; short-range van der Waals cutoff (in nm)
verlet-buffer-drift = -1

; Electrostatics
coulombtype = Cut-off ; plain cut-off electrostatics (no PME)
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT

; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein ; two coupling groups - more accurate
tau_t = 0.1   0.1 ; time constant, in ps
ref_t = 300   300 ; reference temperature, one for each group, in K

; Pressure coupling is on
pcoupl = No ; no pressure coupling
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
refcoord_scaling= com

; Periodic boundary conditions
pbc = no ; no periodic boundary conditions

; Dispersion

[gmx-users] domain decomposition

2019-08-20 Thread Dhrubajyoti Maji
Dear all,
I am simulating a system consisting of urea molecules. After successfully
generating the tpr file, when I try to run mdrun the following error
appears.
Fatal error:
There is no domain decomposition for 72 ranks that is compatible with the
given box and a minimum cell size of 0.5924 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS
settings.
All bonds in my system are constrained by the LINCS algorithm, and the
dimension of my box is 3.40146 nm. I have checked the GROMACS site as well
as the mailing list but could not understand what to do. Please help me with
this issue.
Thanks and regards.
Dhrubajyoti Maji
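For reference: with a 3.40146 nm box and a 0.5924 nm minimum cell size, at
most floor(3.40146 / 0.5924) = 5 domain-decomposition cells fit along each
box vector, and 72 cannot be written as a product of three factors of 5 or
less, whereas e.g. 64 = 4 x 4 x 4 can. A minimal sketch of rerunning with a
compatible rank count (assuming an MPI build and a run input named
topol.tpr; the file name and rank count are illustrative, mdrun may still
set aside some ranks for PME, and with a thread-MPI build the analogous
option is gmx mdrun -ntmpi 64):

  mpirun -np 64 gmx_mpi mdrun -s topol.tpr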


[gmx-users] domain decomposition error

2015-10-27 Thread Musharaf Ali
Dear users
During energy minimization of an IL-water system in a box of size
4.7 x 4.7 x 9.4 with 432 BMIMTF2N and 3519 water molecules, the following
error is written to the md.log file.

Initializing Domain Decomposition on 144 nodes
Dynamic load balancing: no
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 9.371 nm, LJ-14, atoms 622 624
  multi-body bonded interactions: 9.371 nm, Angle, atoms 620 622
Minimum cell size due to bonded interactions: 10.308 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
Guess for relative PME load: 0.15
Will use 120 particle-particle and 24 PME only nodes
This is a guess, check the performance at the end of the log file
Using 24 separate PME nodes, as guessed by mdrun
Optimizing the DD grid for 120 cells with a minimum initial size of 10.308
nm
The maximum allowed number of cells is: X 0 Y 0 Z 0

---
Program mdrun_mpi_d, VERSION 4.6.1
Source code file: /root/GROMACS-GPU/gromacs-4.6.1/src/mdlib/domdec.c, line:
6775

Fatal error:
There is no domain decomposition for 120 nodes that is compatible with the
given box and a minimum cell size of 10.3078 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Could you please suggest how to get rid of it?

Thanks in advance

SMA


[gmx-users] Domain decomposition error

2015-10-29 Thread badamkhatan togoldor
Dear GMX Users,
I am simulating the free energy of a protein chain_A in water in parallel,
and I got a domain decomposition error in mdrun.

Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file

---
Program mdrun_mpi, VERSION 5.1.1-dev-20150819-f10f108
Source code file: /tmp/asillanp/gromacs/src/gromacs/domdec/domdec.cpp, line: 6969

Fatal error:
There is no domain decomposition for 15 ranks that is compatible with the
given box and a minimum cell size of 5.68559 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

Then I looked through the .log file and there were 24 ranks. So how can I
change these ranks? What is wrong here? Or is something wrong in my .mdp
file, or in how my parallel script is constructed? I am using just 2 nodes
with 24 CPUs, so I don't think my system is too small (one protein chain,
around 8000 solvent molecules and a few ions).

Initializing Domain Decomposition on 24 ranks
Dynamic load balancing: off
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 5.169 nm, LJC Pairs NB, atoms 81 558
  multi-body bonded interactions: 0.404 nm, Ryckaert-Bell., atoms 521 529
Minimum cell size due to bonded interactions: 5.686 nm
Maximum distance for 13 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
Guess for relative PME load: 0.38
Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file
Using 9 separate PME ranks, as guessed by mdrun
Optimizing the DD grid for 15 cells with a minimum initial size of 5.686 nm
The maximum allowed number of cells is: X 1 Y 1 Z 0

Can anybody help with this issue?
Tnx, Khatnaa

Re: [gmx-users] Domain Decomposition

2018-02-15 Thread Mark Abraham
Hi,

You have a bonded interaction at a distance of 10 nm. I assume that's not
your intention. Perhaps you should give a configuration to grompp that has
whole molecules. IIRC less ancient versions of GROMACS do a better job of
this.

Mark
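A minimal sketch of that pre-conditioning (the file names conf.gro and
topol.tpr are illustrative; substitute your own inputs), making broken
molecules whole before the configuration is passed to grompp:

  gmx trjconv -f conf.gro -s topol.tpr -pbc whole -o conf_whole.gro

The resulting conf_whole.gro would then be given to grompp in place of the
original configuration.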



[gmx-users] Domain decomposition error

2018-04-15 Thread Dawid das
Dear Gromacs Users,

I have run numerous MD simulations for similar systems of a protein in a
water box, and for only one system I encounter this error:

Fatal error:
There is no domain decomposition for 4 ranks that is compatible with the
given box and a minimum cell size of 3.54253 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

I found an explanation of this error in the GROMACS documentation as well as
on the mailing list; however, I still do not understand why I get it for
only one system out of many. There is nothing special about it; its size,
for instance, is similar to that of the other systems.

What can be the source of this error, then? Can it be the system size or the
placement of charge groups?

I have changed the number of ranks but it does not help. I do not want to
play with the -rdd etc. options of mdrun, as I am not sure whether I would
spoil my simulation.

Best wishes,
Dawid


[gmx-users] domain decomposition error

2018-06-18 Thread Chhaya Singh
I am running a simulation of a protein in implicit solvent using the amber
ff99sb force field and GBSA solvent. I am not able to use more than one CPU;
it always gives a domain decomposition error if I use more than one CPU.
When I tried running on one CPU, it gave me this error:
"Fatal error:
Too many LINCS warnings (12766)
If you know what you are doing you can adjust the lincs warning threshold
in your mdp file
or set the environment variable GMX_MAXCONSTRWARN to -1,
but normally it is better to fix the problem".


Re: [gmx-users] Domain decomposition

2016-07-26 Thread Mark Abraham
Hi,

So you know your cell dimensions, and mdrun is reporting that it can't
decompose because you have a bonded interaction that is almost the length
of one of the cell dimensions. How big should that interaction distance
be, and what might you do about it?

Probably mdrun should be smarter about pbc and use better periodic image
handling during DD setup, but you can fix that yourself before you call
grompp.

Mark
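One hedged way to do that pre-conditioning before grompp (file names are
illustrative; min1.6.tpr is taken from the log above) is to put every atom
in the same periodic image as the rest of its molecule, so that no bonded
interaction appears to span the box:

  gmx trjconv -f start.gro -s min1.6.tpr -pbc mol -o start_fixed.gro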



Re: [gmx-users] Domain decomposition

2016-07-26 Thread Alexander Alexander
Hi,

Thanks for your response.
I do not know which two atoms have a bonded interaction comparable with the
cell size; however, based on this line in the log file, "two-body bonded
interactions: 3.196 nm, LJC Pairs NB, atoms 24 28", I thought 24 and 28 are
the pair, whose coordinates are as below:

1ARG   HH22   24   0.946   1.497   4.341
2CL  CL   28   1.903   0.147   0.492

Indeed their geometrical distance is very large, but I think that is normal.
I manually changed the coordinates of the CL atom to bring it closer to the
other one, hoping to solve the problem, and tested again, but the problem is
still there.

It also says "minimum initial size of 3.516 nm", but all of my cell
dimensions are larger than this as well.

?

Thanks,
Regards,
Alex


Re: [gmx-users] Domain decomposition

2016-07-26 Thread Justin Lemkul



On 7/26/16 8:17 AM, Alexander Alexander wrote:

Hi,

Thanks for your response.
I do not know which two atoms have a bonded interaction comparable with the
cell size; however, based on this line in the log file, "two-body bonded
interactions: 3.196 nm, LJC Pairs NB, atoms 24 28", I thought 24 and 28 are
the pair, whose coordinates are as below:

1ARG   HH22   24   0.946   1.497   4.341
2CL  CL   28   1.903   0.147   0.492

Indeed their geometrical distance is very large, but I think that is normal.
I manually changed the coordinates of the CL atom to bring it closer to the
other one, hoping to solve the problem, and tested again, but the problem is
still there.



You'll need to provide a full .mdp file for anyone to be able to tell anything. 
It looks like you're doing a free energy calculation, based on the numbers in 
LJC, and depending on the settings, free energy calculations may involve very 
long bonded interactions that make it difficult (or even impossible) to use DD, 
in which case you must use mdrun -ntmpi 1 to disable DD and rely only on OpenMP.
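A minimal sketch of that fallback, assuming a thread-MPI build of GROMACS
(with an MPI build such as the gmx_mpi used here, the equivalent is to
launch a single MPI rank instead); the OpenMP thread count is illustrative:

  gmx mdrun -ntmpi 1 -ntomp 16 -deffnm min1.6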



It also says "minimum initial size of 3.516 nm", but all of my cell
dimensions are larger than this as well.



"Cell size" refers to a DD cell, not the box vectors of your system.  Note that 
your system is nearly the same size as your limiting interactions, which may 
suggest that your box is too small to avoid periodicity problems, but that's an 
entirely separate issue.


-Justin



Re: [gmx-users] Domain decomposition

2016-07-26 Thread Alexander Alexander
Thanks.

Yes, indeed it is a free energy calculation, in which no problem showed up
in the first 6 windows, where the harmonic restraints were applied to my
amino acid, but the DD problem came up immediately in the first windows of
removing the charge. Below please find the mdp file.
And if I use -ntmpi = 1 it takes ages to finish; also, my GROMACS would need
to be compiled again with thread-MPI.
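For what it is worth, the existing MPI build can already be restricted to a
single domain without recompiling, by launching one MPI rank and giving it
the cores of a node as OpenMP threads (a sketch; the thread count of 16 is
illustrative):

  mpirun -np 1 gmx_mpi mdrun -ntomp 16 -deffnm min1.6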

Another question: is this amount of pull restraint really necessary to apply
to my molecule (a single amino acid) before removing the charge and vdW?

Best regards,
Alex

define   = -DFLEXIBLE
integrator   = steep
nsteps   = 50
emtol= 250
emstep   = 0.001

nstenergy= 500
nstlog   = 500
nstxout-compressed   = 1000

constraint-algorithm = lincs
constraints  = h-bonds

cutoff-scheme= Verlet
rlist= 1.32

coulombtype  = PME
rcoulomb = 1.30

vdwtype  = Cut-off
rvdw = 1.30
DispCorr = EnerPres

free-energy  = yes
init-lambda-state= 6
calc-lambda-neighbors= -1
restraint-lambdas= 0.0 0.2 0.4 0.6 0.8 1.0 1.0 1.0 1.0 1.0 1.0 1.0
1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0
coul-lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 1.0
1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0
vdw-lambdas  = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1
0.2 0.3 0.35 0.4 0.5 0.6 0.7 0.8 0.9 1.0
couple-moltype   = Protein_chain_A
couple-lambda0   = vdw-q
couple-lambda1   = none
couple-intramol  = no
nstdhdl  = 100
sc-alpha = 0.5
sc-coul  = no
sc-power = 1
sc-sigma = 0.3
dhdl-derivatives = yes
separate-dhdl-file   = yes
dhdl-print-energy= total

pull = yes
pull-ngroups = 9
pull-ncoords = 6
pull-group1-name = CA
pull-group2-name = HA
pull-group3-name = N
pull-group4-name = C
pull-group5-name = O1
pull-group6-name = O2
pull-group7-name = CZ
pull-group8-name = NH1
pull-group9-name = NH2

pull-coord1-groups   = 1 2
pull-coord1-type = umbrella
pull-coord1-dim  = Y Y Y
pull-coord1-init = 0
pull-coord1-start= yes
pull-coord1-geometry = distance
pull-coord1-k= 0.0
pull-coord1-kB   = 1000

pull-coord2-groups   = 1 3
pull-coord2-type = umbrella
pull-coord2-dim  = Y Y Y
pull-coord2-init = 0
pull-coord2-start= yes
pull-coord2-geometry = distance
pull-coord2-k= 0.0
pull-coord2-kB   = 1000

pull-coord3-groups   = 4 5
pull-coord3-type = umbrella
pull-coord3-dim  = Y Y Y
pull-coord3-init = 0
pull-coord3-start= yes
pull-coord3-geometry = distance
pull-coord3-k= 0.0
pull-coord3-kB   = 1000

pull-coord4-groups   = 4 6
pull-coord4-type = umbrella
pull-coord4-dim  = Y Y Y
pull-coord4-init = 0
pull-coord4-start= yes
pull-coord4-geometry = distance
pull-coord4-k= 0.0
pull-coord4-kB   = 1000

pull-coord5-groups   = 7 8
pull-coord5-type = umbrella
pull-coord5-dim  = Y Y Y
pull-coord5-init = 0
pull-coord5-start= yes
pull-coord5-geometry = distance
pull-coord5-k= 0.0
pull-coord5-kB   = 1000

pull-coord6-groups   = 7 9
pull-coord6-type = umbrella
pull-coord6-dim  = Y Y Y
pull-coord6-init = 0
pull-coord6-start= yes
pull-coord6-geometry = distance
pull-coord6-k= 0.0
pull-coord6-kB   = 1000


Re: [gmx-users] Domain decomposition

2016-07-26 Thread Mark Abraham
Hi,

On Tue, Jul 26, 2016 at 2:18 PM Alexander Alexander <
alexanderwie...@gmail.com> wrote:

> Hi,
>
> Thanks for your response.
> I do not know which two atoms have a bonded interaction comparable with the
> cell size; however, based on this line in the log file, "two-body bonded
> interactions: 3.196 nm, LJC Pairs NB, atoms 24 28", I thought 24 and 28 are
> the pair, whose coordinates are as below:
>

Indeed, that's why that line is reported.

1ARG   HH22   24   0.946   1.497   4.341
> 2CL  CL   28   1.903   0.147   0.492
>
> Indeed their geometrical distance is very large, but I think that is normal.


You've described your box as "3.53633,   4.17674,   4.99285" so it looks
like a displacement of 3.196 is in danger of exceeding the internal radius
of the box (I can't reproduce 3.196 nm, however, so something is amiss; I
had been guessing that mdrun was using an inappropriate periodic image,
which you could perhaps fix by better pre-conditioning your grompp inputs
with trjconv), but also perhaps food for thought for some different
topology / setup.

Mark
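One way to check the actual minimum-image distance between the two reported
atoms (a sketch; it assumes an index file pair.ndx with one group for each
of the two atoms, created beforehand e.g. with gmx make_ndx, and the
structure/tpr names are illustrative):

  gmx mindist -f min1.6.gro -s min1.6.tpr -n pair.ndx -od pair_dist.xvg

Comparing that value with the 3.196 nm reported at DD setup would show
whether a periodic-image effect, as guessed above, is responsible.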


> I
> manually changed the coordination of CL atom to bring it closer to the
> other one hoping solve the problem, and test it again, but, the problem is
> still here.
>
> Here also says "minimum initial size of 3.516 nm", but all of my cell size
> are higher than this as well.
>
> ?
>
> Thanks,
> Regards,
> Alex
>
> On Tue, Jul 26, 2016 at 12:12 PM, Mark Abraham 
> wrote:
>
> > Hi,
> >
> > So you know your cell dimensions, and mdrun is reporting that it can't
> > decompose because you have a bonded interaction that is almost the length
> > of the one of the cell dimensions. How big should that interaction
> distance
> > be, and what might you do about it?
> >
> > Probably mdrun should be smarter about pbc and use better periodic image
> > handling during DD setup, but you can fix that yourself before you call
> > grompp.
> >
> > Mark
> >
> >
> > On Tue, Jul 26, 2016 at 11:46 AM Alexander Alexander <
> > alexanderwie...@gmail.com> wrote:
> >
> > > Dear gromacs user,
> > >
> > > Now is more than one week that I am engaging with the fatal error due
> to
> > > domain decomposition, and I have not been succeeded yet, and it is more
> > > painful when I have to test different number of cpu's to see which one
> > > works in a cluster with a long queuing time, means being two or three
> > days
> > > in the queue just to see again the fatal error in two minutes.
> > >
> > > These are the dimensions of the cell " 3.53633,   4.17674,   4.99285",
> > > and below is the log file of my test submitted on 2 nodes with total
> 128
> > > cores, I even reduced to 32 CPU's and even changed from "gmx_mpi mdrun"
> > to
> > > "gmx mdrun", but the problem is still surviving.
> > >
> > > Please do not refer me to this link (
> > >
> > >
> >
> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
> > > )
> > > as I know what is the problem but I can not solve it:
> > >
> > >
> > > Thanks,
> > >
> > > Regards,
> > > Alex
> > >
> > >
> > >
> > > Log file opened on Fri Jul 22 00:55:56 2016
> > > Host: node074  pid: 12281  rank ID: 0  number of ranks:  64
> > >
> > > GROMACS:  gmx mdrun, VERSION 5.1.2
> > > Executable:
> > > /home/fb_chem/chemsoft/lx24-amd64/gromacs-5.1.2-mpi/bin/gmx_mpi
> > > Data prefix:  /home/fb_chem/chemsoft/lx24-amd64/gromacs-5.1.2-mpi
> > > Command line:
> > >   gmx_mpi mdrun -ntomp 1 -deffnm min1.6 -s min1.6
> > >
> > > GROMACS version:VERSION 5.1.2
> > > Precision:  single
> > > Memory model:   64 bit
> > > MPI library:MPI
> > > OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
> > > GPU support:disabled
> > > OpenCL support: disabled
> > > invsqrt routine:gmx_software_invsqrt(x)
> > > SIMD instructions:  AVX_128_FMA
> > > FFT library:fftw-3.2.1
> > > RDTSCP usage:   enabled
> > > C++11 compilation:  disabled
> > > TNG support:enabled
> > > Tracing support:disabled
> > > Built on:   Thu Jun 23 14:17:43 CEST 2016
> > > Built by:   reuter@marc2-h2 [CMAKE]
> > > Build OS/arch:  Linux 2.6.32-642.el6.x86_64 x86_64
> > > Build CPU vendor:   AuthenticAMD
> > > Build CPU brand:AMD Opteron(TM) Processor 6276
> > > Build CPU family:   21   Model: 1   Stepping: 2
> > > Build CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
> > > misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2
> > > sse3 sse4a sse4.1 sse4.2 ssse3 xop
> > > C compiler: /usr/lib64/ccache/cc GNU 4.4.7
> > > C compiler flags:-mavx -mfma4 -mxop-Wundef -Wextra
> > > -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
> > > -Wno-unused -Wunused-value -Wunused-parameter  -O3 -DNDEBUG
> > > -funroll-all-loops  -Wno-array-bounds
> > >
> > > C++ compiler:   /usr/lib64/ccache/c++ GNU 4.4.7
> > > C++ compiler flags:  -mavx -mfma4

Re: [gmx-users] Domain decomposition

2016-07-26 Thread Justin Lemkul



On 7/26/16 11:27 AM, Alexander Alexander wrote:

Thanks.

Yes, indeed it is a free energy calculation, in which no problem showed up
in the first 6 windows, where the harmonic restraints were applied to my
amino acid, but the DD problem came up immediately in the first windows of
removing the charge. Below please find the mdp file.
And if I use -ntmpi = 1 it takes ages to finish; also, my GROMACS would need
to be compiled again with thread-MPI.



I suspect you have inconsistent usage of couple-intramol.  Your long-distance 
LJC pairs should be a result of "couple-intramol = no" in which you get explicit 
intramolecular exclusions and pair interactions that occur at longer distance 
than normal 1-4 interactions.  If you ran other systems without getting any 
problem, you probably had "couple-intramol = yes" in which all nonbonded 
interactions are treated the same way and the bonded topology is the same.
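As a purely illustrative fragment of the free-energy block being discussed
(the moleculetype name is taken from the poster's own .mdp), the difference
is this single switch; only one of the two couple-intramol lines would
appear in a real file, and which one is appropriate affects the physics of
the decoupled state, as the manual discusses:

couple-moltype   = Protein_chain_A
couple-intramol  = no   ; intramolecular nonbondeds become exclusions plus explicit pairs (can be very long-ranged)
couple-intramol  = yes  ; intramolecular nonbondeds are treated like all other nonbonded interactions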



Another question: is this amount of pull restraint really necessary to apply
to my molecule (a single amino acid) before removing the charge and vdW?



You're decoupling a single amino acid?  What purpose do the pull restraints even 
serve?  CA-HA, etc. should be bonded in a single amino acid, so why are you 
applying a pull restraint to them?  I really don't understand.


-Justin



Re: [gmx-users] Domain decomposition

2016-07-26 Thread Alexander Alexander
On Tue, Jul 26, 2016 at 6:07 PM, Justin Lemkul  wrote:

>
>
> On 7/26/16 11:27 AM, Alexander Alexander wrote:
>
>> Thanks.
>>
>> Yes, indeed it is a free energy calculation, in which no problem showed up
>> in the first 6 windows, where the harmonic restraints were applied to my
>> amino acid, but the DD problem came up immediately in the first windows of
>> removing the charge. Below please find the mdp file.
>> And if I use -ntmpi = 1 it takes ages to finish; also, my GROMACS would
>> need to be compiled again with thread-MPI.
>>
>>
> I suspect you have inconsistent usage of couple-intramol.  Your
> long-distance LJC pairs should be a result of "couple-intramol = no" in
> which you get explicit intramolecular exclusions and pair interactions that
> occur at longer distance than normal 1-4 interactions.  If you ran other
> systems without getting any problem, you probably had "couple-intramol =
> yes" in which all nonbonded interactions are treated the same way and the
> bonded topology is the same.
>

Actually, I have always had "couple-intramol = no" in all my other
calculations (a single amino acid in water solution), and no problem has
shown up. But in FEP calculations of the charged amino acid, where I also
have an ion to neutralize the system and "ion + amino acid" is used as the
"couple-moltype", this problem emerges. And as you may have noticed, the ion
here, CL, is always one of the atoms involved in the problem. I hope
"couple-intramol = yes" can solve the problem for the charged amino acid.

>
>> Another question: is this amount of pull restraint really necessary to
>> apply to my molecule (a single amino acid) before removing the charge and
>> vdW?
>>
>>
> You're decoupling a single amino acid?  What purpose do the pull
> restraints even serve?  CA-HA, etc. should be bonded in a single amino
> acid, so why are you applying a pull restraint to them?  I really don't
> understand.
>

I want to make sure sudden conformational changes of the amino acid do not
occur during the perturbation, in particular when the charge is turned off.
Applying a harmonic restraint to keep the geometry the same during FEP is a
well-established procedure, e.g. Deng, Y.; Roux, B. J Chem Theory Comput
2006, 2 (5), 1255. I might reduce the number of restraints to only 1 or 2
pairs.

The whole task is to calculate the binding free energy of the amino acid to
a metal surface, although here I am still dealing with the amino acid in
water only, without the surface yet.

Regards,
Alex


Re: [gmx-users] Domain decomposition

2016-07-26 Thread Justin Lemkul



On 7/26/16 1:16 PM, Alexander Alexander wrote:

On Tue, Jul 26, 2016 at 6:07 PM, Justin Lemkul  wrote:




On 7/26/16 11:27 AM, Alexander Alexander wrote:


Thanks.

Yes, indeed it is a free energy calculation, in which no problem showed up
in the first 6 windows, where the harmonic restraints were applied to my
amino acid, but the DD problem came up immediately in the first windows of
removing the charge. Below please find the mdp file.
And if I use -ntmpi = 1 it takes ages to finish; also, my GROMACS would need
to be compiled again with thread-MPI.



I suspect you have inconsistent usage of couple-intramol.  Your
long-distance LJC pairs should be a result of "couple-intramol = no" in
which you get explicit intramolecular exclusions and pair interactions that
occur at longer distance than normal 1-4 interactions.  If you ran other
systems without getting any problem, you probably had "couple-intramol =
yes" in which all nonbonded interactions are treated the same way and the
bonded topology is the same.
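
For concreteness, the two alternatives being discussed come down to a single mdp switch; the fragment below is only illustrative, with the moltype name taken from the mdp file quoted further down and the comments paraphrasing the explanation above:

couple-moltype   = Protein_chain_A
couple-intramol  = no    ; intramolecular nonbondeds become explicit exclusions/pairs,
                         ; which can span long distances and enlarge the minimum DD cell
; couple-intramol = yes  ; would treat intramolecular nonbondeds like all other
                         ; nonbondeds, leaving the bonded topology unchanged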



Actually, I have always had "couple-intramol = no" in all my other
calculations (a single amino acid in water solution), and no problem has
shown up. But in FEP calculations of the charged amino acid, where I also
have an ion to neutralize the system and "ion + amino acid" is used as the
"couple-moltype", this problem emerges. And as you noticed, the ion here (CL)
is always one of the atoms involved in the problem. I hope "couple-intramol
= yes" can solve the problem for the charged amino acid.



Well, there are implications for the results.  Consider what it says in the 
manual.  But yes, this is your problem.  You've got physically separate 
molecules that you call one [moleculetype] for the purpose of transformation, 
and you're running into a problem that isn't really physically meaningful in any 
way.




Another question is whether this many pull restraints really need to be
applied to my molecule (a single amino acid) before removing the charge
and vdW?



You're decoupling a single amino acid?  What purpose do the pull
restraints even serve?  CA-HA, etc. should be bonded in a single amino
acid, so why are you applying a pull restraint to them?  I really don't
understand.



I want to make sure sudden conformational changes of the amino acid do not
occur during the perturbation, in particular when the charge is turned
off.  Applying a harmonic restraint to keep the geometry the same during
FEP is a well-established procedure, e.g. Deng, Y.; Roux, B. J Chem Theory
Comput 2006, 2 (5), 1255. I might reduce the number of restraints to only
1 or 2 pairs.



Preserving the A-state in the bonded topology (and using couple-intramol = no) 
will prevent any weirdness from happening without needing any of these 
restraints.  As in my previous message, restraining CA-HA with a harmonic 
potential makes no sense at all.  Those atoms have a bond between them.  The 
pull code is not doing anything useful.
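
In topology terms, "preserving the A-state" just means listing identical A- and B-state parameters for the perturbed bonded terms; a hedged sketch (atom indices and force constants are placeholders, not taken from this system):

[ bonds ]
;   ai   aj  funct    b0(A)      kb(A)       b0(B)      kb(B)
     5    6    1      0.1090   284512.0     0.1090   284512.0   ; B-state copies the A-state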



The whole task is to calculate the binding free energy of an amino acid to a
metal surface, although here I am still dealing with the amino acid in water
only, without the surface yet.


I believe I've mentioned this before, but in case it got lost along the way - 
using the free energy decoupling technique is a very ineffective way of 
calculating this binding free energy.  Do a PMF.  It's extremely straightforward 
and you don't deal with any of these algorithmic problems.  It will also likely 
converge a lot faster than trying to do complex decoupling.
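
For reference, a distance-based PMF of this kind is usually set up with the pull code plus umbrella windows and analysed with gmx wham; a minimal sketch of one window (group names and numbers are placeholders, not taken from the mdp above) might look like:

pull                 = yes
pull-ngroups         = 2
pull-ncoords         = 1
pull-group1-name     = Surface
pull-group2-name     = Protein_chain_A
pull-coord1-groups   = 1 2
pull-coord1-type     = umbrella
pull-coord1-geometry = distance
pull-coord1-dim      = N N Y          ; restrain only the surface-normal component
pull-coord1-start    = no
pull-coord1-init     = 1.0            ; window-specific reference distance (nm)
pull-coord1-k        = 1000           ; kJ mol^-1 nm^-2

with a different pull-coord1-init per window, and the window histograms combined afterwards with gmx wham.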


-Justin



Regards,
Alex



-Justin


Best regards,

Alex

define   = -DFLEXIBLE
integrator   = steep
nsteps   = 50
emtol= 250
emstep   = 0.001

nstenergy= 500
nstlog   = 500
nstxout-compressed   = 1000

constraint-algorithm = lincs
constraints  = h-bonds

cutoff-scheme= Verlet
rlist= 1.32

coulombtype  = PME
rcoulomb = 1.30

vdwtype  = Cut-off
rvdw = 1.30
DispCorr = EnerPres

free-energy  = yes
init-lambda-state= 6
calc-lambda-neighbors= -1
restraint-lambdas= 0.0 0.2 0.4 0.6 0.8 1.0 1.0 1.0 1.0 1.0 1.0 1.0
1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0
coul-lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 1.0
1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0
vdw-lambdas  = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1
0.2 0.3 0.35 0.4 0.5 0.6 0.7 0.8 0.9 1.0
couple-moltype   = Protein_chain_A
couple-lambda0   = vdw-q
couple-lambda1   = none
couple-intramol  = no
nstdhdl  = 100
sc-alpha = 0.5
sc-coul  = no
sc-power = 1
sc-sigma = 0.3
dhdl-derivatives = yes
separate-dhdl-file   = yes
dhdl-print-energy= total

pull = yes

Re: [gmx-users] Domain decomposition

2016-07-26 Thread Alexander Alexander
On Tue, Jul 26, 2016 at 7:54 PM, Justin Lemkul  wrote:

>
>
> On 7/26/16 1:16 PM, Alexander Alexander wrote:
>
>> On Tue, Jul 26, 2016 at 6:07 PM, Justin Lemkul  wrote:
>>
>>
>>>
>>> On 7/26/16 11:27 AM, Alexander Alexander wrote:
>>>
>>> Thanks.

 Yes indeed it is a free energy calculation in which no problem showed up
 in
 the first 6 windows where the harmonic restrains were applying on my
 amino
 acid but the DD problem came up immediately in the first windows of the
 removing charge. Below please find the mdp file.
 And If I use -ntmpi = 1 then it takes ages to finish. Although my
 gromcas
 need to be compiled again with thread-MPI .


 I suspect you have inconsistent usage of couple-intramol.  Your
>>> long-distance LJC pairs should be a result of "couple-intramol = no" in
>>> which you get explicit intramolecular exclusions and pair interactions
>>> that
>>> occur at longer distance than normal 1-4 interactions.  If you ran other
>>> systems without getting any problem, you probably had "couple-intramol =
>>> yes" in which all nonbonded interactions are treated the same way and the
>>> bonded topology is the same.
>>>
>>>
>> Actually I always have had "couple-intramol = no" in all my other
>> calculation(a single amino acid in water solution), and not problem has
>> shown up. But FEP calculations of the charged amino acid where I have also
>> an Ion for neutralization of the system and "ion+amino acid" is used as
>> "couple-moltype", this problem emerges. And if you noticed the Ion here CL
>> is always one of the atom involving in the problem. I hope
>> "couple-intramol
>> = yes"can sove the problem in charged amino acid.
>>
>>
> Well, there are implications for the results.  Consider what it says in
> the manual.  But yes, this is your problem.  You've got physically separate
> molecules that you call one [moleculetype] for the purpose of
> transformation, and you're running into a problem that isn't really
> physically meaningful in any way.
>

Actually yes, the ion and the amino acid, both in one [moleculetype], are
really far away from each other. But I usually use the final .gro file of the
last step as the input file for the new step, and this separation is what that
gro file has. I hope "couple-intramol = yes" can help.

>
>
>>> Another question is that if really this amount of pull restrain is
>>>
 necessary to be applied on my molecules (singke amino acid) before
 removing
 the charge and vdW?


 You're decoupling a single amino acid?  What purpose do the pull
>>> restraints even serve?  CA-HA, etc. should be bonded in a single amino
>>> acid, so why are you applying a pull restraint to them?  I really don't
>>> understand.
>>>
>>>
>> I want to make sure sudden conformational changes of amino acid do not
>> occur during the perturbation. In particular, when the charge is turned
>> off.  Applying a harmonic restraint to keep the geometry the same during
>> FEP is a well-established procedure, e.g. Deng, Y.; Roux, B. J Chem Theory
>> Comput 2006, 2 (5), 1255. I might reduce the number of restraints to only
>> between 1 or 2 pairs.
>>
>>
> Preserving the A-state in the bonded topology (and using couple-intramol =
> no) will prevent any weirdness from happening without needing any of these
> restraints.  As in my previous message, restraining CA-HA with a harmonic
> potential makes no sense at all.  Those atoms have a bond between them.
> The pull code is not doing anything useful.
>

Then, if "couple-intramol = yes" hopefully solves the problem discussed
above, maybe applying restraints in the presence of "couple-intramol = yes"
is not avoidable.


>
> The whole task is to calculate the binding free energy of amino acid to a
>> metal surface, although here I am still dealing with the amino acid in
>> only
>> water without surface yet.
>>
>
> I believe I've mentioned this before, but in case it got lost along the
> way - using the free energy decoupling technique is a very ineffective way
> of calculating this binding free energy.  Do a PMF.  It's extremely
> straightforward and you don't deal with any of these algorithmic problems.
> It will also likely converge a lot faster than try to do complex decoupling.
>

Actually I should have known this in the beginning, but now it is a bit late
for me to switch to PMF.

Best regards,
Alex


Re: [gmx-users] Domain decomposition

2016-07-26 Thread Justin Lemkul



On 7/26/16 2:21 PM, Alexander Alexander wrote:

On Tue, Jul 26, 2016 at 7:54 PM, Justin Lemkul  wrote:




On 7/26/16 1:16 PM, Alexander Alexander wrote:


On Tue, Jul 26, 2016 at 6:07 PM, Justin Lemkul  wrote:




On 7/26/16 11:27 AM, Alexander Alexander wrote:

Thanks.


Yes indeed, it is a free energy calculation in which no problem showed up in
the first 6 windows, where the harmonic restraints were being applied to my
amino acid, but the DD problem came up immediately in the first windows of
removing the charge. Please find the mdp file below.
And if I use -ntmpi = 1 then it takes ages to finish; also, my GROMACS would
need to be compiled again with thread-MPI.


I suspect you have inconsistent usage of couple-intramol.  Your

long-distance LJC pairs should be a result of "couple-intramol = no" in
which you get explicit intramolecular exclusions and pair interactions
that
occur at longer distance than normal 1-4 interactions.  If you ran other
systems without getting any problem, you probably had "couple-intramol =
yes" in which all nonbonded interactions are treated the same way and the
bonded topology is the same.



Actually, I have always had "couple-intramol = no" in all my other
calculations (a single amino acid in water solution), and no problem has
shown up. But in FEP calculations of the charged amino acid, where I also
have an ion to neutralize the system and "ion + amino acid" is used as the
"couple-moltype", this problem emerges. And as you noticed, the ion here (CL)
is always one of the atoms involved in the problem. I hope "couple-intramol
= yes" can solve the problem for the charged amino acid.



Well, there are implications for the results.  Consider what it says in
the manual.  But yes, this is your problem.  You've got physically separate
molecules that you call one [moleculetype] for the purpose of
transformation, and you're running into a problem that isn't really
physically meaningful in any way.



Actually yes, the ion and the amino acid, both in one [moleculetype], are
really far away from each other. But I usually use the final .gro file of the
last step as the input file for the new step, and this separation is what that
gro file has. I hope "couple-intramol = yes" can help.



Help you in terms of avoiding a DD failure, yes, but you're also completely 
changing the physical picture.  Please read the manual carefully about these 
settings.  You'll probably get distortion if you do this.  Using 
"couple-intramol = no" is more sensible but obviously is causing headaches due 
simply to implementation problems.






Another question is whether this many pull restraints really need to be
applied to my molecule (a single amino acid) before removing the charge
and vdW?


You're decoupling a single amino acid?  What purpose do the pull

restraints even serve?  CA-HA, etc. should be bonded in a single amino
acid, so why are you applying a pull restraint to them?  I really don't
understand.



I want to make sure sudden conformational changes of the amino acid do not
occur during the perturbation, in particular when the charge is turned
off.  Applying a harmonic restraint to keep the geometry the same during
FEP is a well-established procedure, e.g. Deng, Y.; Roux, B. J Chem Theory
Comput 2006, 2 (5), 1255. I might reduce the number of restraints to only
1 or 2 pairs.



Preserving the A-state in the bonded topology (and using couple-intramol =
no) will prevent any weirdness from happening without needing any of these
restraints.  As in my previous message, restraining CA-HA with a harmonic
potential makes no sense at all.  Those atoms have a bond between them.
The pull code is not doing anything useful.



Then, if "couple-intramol = yes" hopefully solves the problem discussed
above, maybe applying restraints in the presence of "couple-intramol = yes"
is not avoidable.




The whole task is to calculate the binding free energy of an amino acid to a
metal surface, although here I am still dealing with the amino acid in water
only, without the surface yet.



I believe I've mentioned this before, but in case it got lost along the
way - using the free energy decoupling technique is a very ineffective way
of calculating this binding free energy.  Do a PMF.  It's extremely
straightforward and you don't deal with any of these algorithmic problems.
It will also likely converge a lot faster than trying to do complex decoupling.



Actually I should have known this in the beginning, but now it is a bit late
for me to switch to PMF.



As you like.  You can probably finish a PMF in less than a day for a reasonably 
small system.


-Justin


--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

[gmx-users] domain decomposition problems

2017-01-29 Thread Albert

Hello,

I am trying to run MD simulation for a system:


box size: 105.166 x 105.166 x 105.166
atoms: 114K
FF: Amber99SB

I submitted the job with command line:

srun -n 1 gmx_mpi grompp -f mdp/01-em.mdp -o 60.tpr -n -c ion.pdb
srun -n 12 gmx_mpi mdrun -s 60.tpr -v -g 60.log -c 60.gro -x 60.xtc

but it always failed with messages:

There is no domain decomposition for 12 ranks that is compatible with 
the given box and a minimum cell size of 4.99678 nm

Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I googled it, and the answer to the above problem is that the system is too 
small to use a large number of CPUs. However, I don't think 12 CPUs is too 
many for my system, which contains 114 K atoms in all.
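
As a rough sanity check (assuming the 105.166 box edges are in Å, i.e. about 10.52 nm): 10.52 / 4.99678 ≈ 2.1, so at most two domain-decomposition cells fit along each axis, i.e. at most 2 x 2 x 2 = 8 PP ranks; 12 ranks would force at least one dimension to three cells of ~3.5 nm, below the minimum cell size. The more telling question is why the minimum cell size is ~5 nm in the first place (often long bonded interactions or a stretched/broken molecule); the "Initial maximum inter charge-group distances" lines in the log show what sets it.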



Does anybody have other suggestions?

Thanks a lot.

Albert



[gmx-users] domain decomposition Error

2017-03-06 Thread MRINAL ARANDHARA
I am trying to run a lipid bilayer simulation, but during the NPT equilibration 
step I am getting the following error:
"1 particles communicated to PME rank 6 are more than 2/3 times the cut-off out 
of the domain decomposition cell of their charge group in dimension y"
I have successfully run the NVT equilibration. What may be the problem?

[gmx-users] domain decomposition error

2017-04-15 Thread Alex Mathew
Dear all gromacs users,

I have seen in the mail archive that this domain decomposition error can be
avoided with a smaller number of processors, but how do I find the suitable
number of processors required?

Here is the log file:

https://drive.google.com/file/d/0Bzs8lO6WJxD9alRTYjFaMjBTT2c/view?usp=sharing

[gmx-users] Domain decomposition error

2017-05-18 Thread Kashif
I get this error every time I try to simulate one of my protein-ligand
complexes.

 ---
Program mdrun, VERSION 4.6.6
Source code file: /root/Documents/gromacs-4.6.6/src/mdlib/pme.c, line: 851

Fatal error:
1 particles communicated to PME node 5 are more than 2/3 times the cut-off
out of the domain decomposition cell of their charge group in dimension y.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
...

Although the same parameters in the mdp files simulated the other
drug-protein complex without problems, this drug complex is creating trouble.
Kindly help.

regards
kashif


Re: [gmx-users] Domain decomposition

2017-06-22 Thread Justin Lemkul



On 6/22/17 9:16 AM, Sergio Manzetti wrote:

Hi, I (also) have a system of one molecule in a water box of dimensions 3 3 3; 
the procedure goes well all the way until the simulation starts, when I get:

Will use 20 particle-particle and 4 PME only ranks
This is a guess, check the performance at the end of the log file

---
Program gmx mdrun, VERSION 5.1.2
Source code file: 
/build/gromacs-z6bPBg/gromacs-5.1.2/src/gromacs/domdec/domdec.cpp, line: 6987

Fatal error:
There is no domain decomposition for 20 ranks that is compatible with the given 
box and a minimum cell size of 2.0777 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS settings
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


Not sure what to do here..



Follow the link provided in the error message.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Domain decomposition

2017-06-22 Thread Sergio Manzetti
Checked the link; nothing is written there on rcon and dds... 


Sergio Manzetti 

[ http://www.fjordforsk.no/logo_hr2.jpg ] 

[ http://www.fjordforsk.no/ | Fjordforsk AS ] [ http://www.fjordforsk.no/ |   ] 
Midtun 
6894 Vangsnes 
Norge 
Org.nr. 911 659 654 
Tlf: +47 57695621 
[ http://www.oekolab.com/ | Økolab  ] | [ http://www.nanofact.no/ | Nanofactory 
 ] | [ http://www.aq-lab.no/ | AQ-Lab  ] | [ http://www.phap.no/ | FAP ] 



From: "Justin Lemkul"  
To: "gmx-users"  
Sent: Thursday, June 22, 2017 3:21:28 PM 
Subject: Re: [gmx-users] Domain decomposition 

On 6/22/17 9:16 AM, Sergio Manzetti wrote: 
> Hi, I have (also) a system of one molecule in water box of 3 3 3 dimensions, 
> the procedure goes well all the way till the simulation starts, getting: 
> 
> Will use 20 particle-particle and 4 PME only ranks 
> This is a guess, check the performance at the end of the log file 
> 
> --- 
> Program gmx mdrun, VERSION 5.1.2 
> Source code file: 
> /build/gromacs-z6bPBg/gromacs-5.1.2/src/gromacs/domdec/domdec.cpp, line: 6987 
> 
> Fatal error: 
> There is no domain decomposition for 20 ranks that is compatible with the 
> given box and a minimum cell size of 2.0777 nm 
> Change the number of ranks or mdrun option -rcon or -dds or your LINCS 
> settings 
> Look in the log file for details on the domain decomposition 
> For more information and tips for troubleshooting, please check the GROMACS 
> website at http://www.gromacs.org/Documentation/Errors 
> --- 
> 
> 
> Not sure what to do here.. 
> 

Follow the link provided in the error message. 

-Justin 

-- 
== 

Justin A. Lemkul, Ph.D. 
Ruth L. Kirschstein NRSA Postdoctoral Fellow 

Department of Pharmaceutical Sciences 
School of Pharmacy 
Health Sciences Facility II, Room 629 
University of Maryland, Baltimore 
20 Penn St. 
Baltimore, MD 21201 

jalem...@outerbanks.umaryland.edu | (410) 706-7441 
http://mackerell.umaryland.edu/~jalemkul 

== 

Re: [gmx-users] Domain decomposition

2017-06-22 Thread Justin Lemkul



On 6/22/17 9:22 AM, Sergio Manzetti wrote:

Checked the link, nothing written here on rcon and dds...



"Thus it is not possible to run a small simulation with large numbers of 
processors."


Google will help you find more suggestions.
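
The arithmetic here makes that concrete: taking the 3 3 3 box to be in nm, 3 / 2.0777 ≈ 1.4, so only one DD cell fits along each axis and a 20-rank particle-particle grid cannot be built; running on just a few threads (and/or relaxing the initial cell-size estimate with -dds, as the error message suggests) is the practical way out.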

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Domain decomposition

2017-06-22 Thread Sergio Manzetti
Thanks! I reduced the number to 4, and it works: 

gmx mdrun -v -dds 0.37 -nt 4 


Cheers 


Sergio Manzetti 

[ http://www.fjordforsk.no/logo_hr2.jpg ] 

[ http://www.fjordforsk.no/ | Fjordforsk AS ] [ http://www.fjordforsk.no/ |   ] 
Midtun 
6894 Vangsnes 
Norge 
Org.nr. 911 659 654 
Tlf: +47 57695621 
[ http://www.oekolab.com/ | Økolab  ] | [ http://www.nanofact.no/ | Nanofactory 
 ] | [ http://www.aq-lab.no/ | AQ-Lab  ] | [ http://www.phap.no/ | FAP ] 



From: "Justin Lemkul"  
To: "gmx-users"  
Sent: Thursday, June 22, 2017 3:28:32 PM 
Subject: Re: [gmx-users] Domain decomposition 

On 6/22/17 9:22 AM, Sergio Manzetti wrote: 
> Checked the link, nothing written here on rcon and dds... 
> 

"Thus it is not possible to run a small simulation with large numbers of 
processors." 

Google will help you find more suggestions. 

-Justin 

-- 
== 

Justin A. Lemkul, Ph.D. 
Ruth L. Kirschstein NRSA Postdoctoral Fellow 

Department of Pharmaceutical Sciences 
School of Pharmacy 
Health Sciences Facility II, Room 629 
University of Maryland, Baltimore 
20 Penn St. 
Baltimore, MD 21201 

jalem...@outerbanks.umaryland.edu | (410) 706-7441 
http://mackerell.umaryland.edu/~jalemkul 

== 

Re: [gmx-users] Domain Decomposition

2017-11-08 Thread Wes Barnett
On Wed, Nov 8, 2017 at 11:11 AM, Shraddha Parate 
wrote:

> Dear Gromacs Users,
>
> I was able to achieve a spherical water droplet without periodic boundary
> conditions (PBC) by changing a few parameters in the .mdp files as below:
>




> However, I am facing the following error:
>
> *Fatal error:*
> *Domain decomposition does not support simple neighbor searching, use grid
> searching or run with one MPI rank.*
>
> I tried adding the '-nt 1' in the command for mdrun but it consumes 2 weeks
> for a 1 ns simulation since it utilizes only 1 CPU.
>
> Is the error occurring because of changes in .mdp file parameters? Is there
> any other way to make some changes in the mdrun command to make the
> simulation faster?
>
> Thank you in advance.
>
> Best regards,
> Shraddha Parate
>


The error indicates you should try changing how neighbor searching is done.
Have you tried that?

-- 
James "Wes" Barnett
Postdoctoral Research Scientist
Department of Chemical Engineering
Kumar Research Group 
Columbia University
w.barn...@columbia.edu
http://wbarnett.us


Re: [gmx-users] Domain Decomposition

2017-11-08 Thread Justin Lemkul



On 11/8/17 12:02 PM, Wes Barnett wrote:

On Wed, Nov 8, 2017 at 11:11 AM, Shraddha Parate 
wrote:


Dear Gromacs Users,

I was able to achieve a spherical water droplet without periodic boundary
conditions (PBC) by changing few parameters in the .mdp files as below:






However, I am facing the following error:

*Fatal error:*
*Domain decomposition does not support simple neighbor searching, use grid
searching or run with one MPI rank.*

I tried adding the '-nt 1' in the command for mdrun but it consumes 2 weeks
for a 1 ns simulation since it utilizes only 1 CPU.

Is the error occurring because of changes in .mdp file parameters? Is there
any other way to make some changes in the mdrun command to make the
simulation faster?

Thank you in advance.

Best regards,
Shraddha Parate



The error indicates you should try changing how neighbor searching is done.
Have you tried that?



Simple neighbor searching is required when using infinite cutoffs (e.g. 
gas phase).


The solution is to use OpenMP parallelization, e.g.

mdrun -ntmpi 1 -ntomp X

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



Re: [gmx-users] Domain Decomposition

2017-11-08 Thread Shraddha Parate
Dear Justin,

I tried using OpenMP parallelization with the following command:

mdrun -ntmpi 1 -ntomp 1

which works fine, but if ntomp is increased, I get the error below:

*OpenMP threads have been requested with cut-off scheme group, but these
are only supported with cut-off scheme verlet*

Is there any other way to do OpenMP parallelization without changing the
cut-off scheme to Verlet?


Thank you in advance

Regards,
Shraddha Parate


Re: [gmx-users] Domain Decomposition

2017-11-08 Thread Mark Abraham
Hi,

As you have learned, such boundary conditions are only available in the
group scheme, the boundary conditions restrict the number of usable ranks,
and the group scheme prevents OpenMP parallelism from being useful. We hope to
relax this in the future, but your current options are to run slowly, use
different boundary conditions, or use different software.

Mark

On Wed, Nov 8, 2017 at 7:27 PM Shraddha Parate 
wrote:

> Dear Justin,
>
> I tried using OpenMP parallelization with the following command:
>
> mdrun -ntmpi 1 -ntomp 1
>
> which works fine, but if ntomp is increased, I get the below error:-
>
> *OpenMP threads have been requested with cut-off scheme group, but these
> are only supported with cut-off scheme verlet*
>
> Is there any other way to do OpenMP parallelization without changing the
> cut-off scheme to Verlet?
>
>
> Thank you in advance
>
> Regards,
> Shraddha Parate


Re: [gmx-users] domain decomposition

2019-08-21 Thread Justin Lemkul




On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:

Dear all,
 I am simulating a system consisting of urea molecules. After successfully
generating the tpr file, when I try to run mdrun the following error
appears:
Fatal error:
There is no domain decomposition for 72 ranks that is compatible with the
given box and a minimum cell size of 0.5924 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS
settings.
All bonds are constrained by the LINCS algorithm in my system, and the dimension
of my box is 3.40146 nm. I have checked the GROMACS site as well as the mailing
list but couldn't understand what to do. Please help me with the issue.


http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] domain decomposition

2019-08-21 Thread Dhrubajyoti Maji
Many thanks, Dr. Lemkul, for your kind reply. I have checked the link. I
completed the equilibration step successfully, but the error appears in the
production run. The only change is that now I am writing the output trajectory.
So, if I had any problem in the topology or mdp file, I think my equilibration
should have failed. I am a newbie and I can't understand what exactly
is going wrong. Any kind of suggestion will be highly appreciated.
Thanks and regards.
Dhrubajyoti Maji


On Wed, 21 Aug 2019 at 16:21, Justin Lemkul  wrote:

>
>
> On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:
> > Dear all,
> >  I am simulating a system consisting urea molecules. After
> successfully
> > generating tpr file while I am trying to run mdrun, the following error
> is
> > appearing.
> > Fatal error:
> > There is no domain decomposition for 72 ranks that is compatible with the
> > given box and a minimum cell size of 0.5924 nm
> > Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> > settings.
> > All bonds are constrained are by LINCS algorithm in my system and
> dimension
> > of my box is 3.40146 nm. I have checked gromacs site as well as mailing
> > list but couldn't understand what to do. Please help me with the issue.
>
>
> http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] domain decomposition

2019-08-21 Thread Justin Lemkul




On 8/21/19 12:30 PM, Dhrubajyoti Maji wrote:

Many thanks, Dr. Lemkul, for your kind reply. I have checked the link. I
completed the equilibration step successfully, but the error appears in the
production run. The only change is that now I am writing the output trajectory.
So, if I had any problem in the topology or mdp file, I think my equilibration
should have failed. I am a newbie and I can't understand what exactly
is going wrong. Any kind of suggestion will be highly appreciated.


Use fewer processors. You can't arbitrarily split any system over a 
given number of processors. Prior runs may have worked if, for instance, 
box dimensions were different, but now you have to adjust.
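
As a worked example with the numbers above: the box edge is 3.40146 nm and the 
minimum cell size is 0.5924 nm, so at most floor(3.40146 / 0.5924) = 5 DD cells 
fit along each dimension. 72 ranks would have to factor into three cell counts 
of at most 5 each, which is impossible (72 = 2^3 * 3^2), so no 72-rank grid 
exists; something like 4 x 4 x 4 = 64 or 4 x 4 x 3 = 48 ranks (cells of roughly 
0.85-1.13 nm) should be compatible, ignoring any ranks mdrun reserves for PME.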


-Justin


Thanks and regards.
Dhrubajyoti Maji


On Wed, 21 Aug 2019 at 16:21, Justin Lemkul  wrote:



On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:

Dear all,
  I am simulating a system consisting urea molecules. After

successfully

generating tpr file while I am trying to run mdrun, the following error

is

appearing.
Fatal error:
There is no domain decomposition for 72 ranks that is compatible with the
given box and a minimum cell size of 0.5924 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS
settings.
All bonds are constrained are by LINCS algorithm in my system and

dimension

of my box is 3.40146 nm. I have checked gromacs site as well as mailing
list but couldn't understand what to do. Please help me with the issue.


http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] domain decomposition

2019-08-21 Thread Dhrubajyoti Maji
Thank you, sir. The problem is sorted out. Decreasing the number of processors
did the trick. Thanks again.

On Wed, 21 Aug 2019 at 22:02, Justin Lemkul  wrote:

>
>
> On 8/21/19 12:30 PM, Dhrubajyoti Maji wrote:
> > Many tanks Dr. Lemkul for your kind reply. I have checked the link. I
> have
> > done the equlibration step successfully but the error appears at
> production
> > run. The change is only that now I am writing the output trajectory. So,
> if
> > I had any problem in topology or mdp file then I think my equilibration
> > should have been failed. I am a newbie and I can't understand what
> exactly
> > is going wrong. Any kind of suggestion will be highly appreciated.
>
> Use fewer processors. You can't arbitrarily split any system over a
> given number of processors. Prior runs may have worked if, for instance,
> box dimensions were different, but now you have to adjust.
>
> -Justin
>
> > Thanks and regards.
> > Dhrubajyoti Maji
> >
> >
> > On Wed, 21 Aug 2019 at 16:21, Justin Lemkul  wrote:
> >
> >>
> >> On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:
> >>> Dear all,
> >>>   I am simulating a system consisting urea molecules. After
> >> successfully
> >>> generating tpr file while I am trying to run mdrun, the following error
> >> is
> >>> appearing.
> >>> Fatal error:
> >>> There is no domain decomposition for 72 ranks that is compatible with
> the
> >>> given box and a minimum cell size of 0.5924 nm
> >>> Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> >>> settings.
> >>> All bonds are constrained are by LINCS algorithm in my system and
> >> dimension
> >>> of my box is 3.40146 nm. I have checked gromacs site as well as mailing
> >>> list but couldn't understand what to do. Please help me with the issue.
> >>
> >>
> http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm
> >>
> >> -Justin
> >>
> >> --
> >> ==
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Assistant Professor
> >> Office: 301 Fralin Hall
> >> Lab: 303 Engel Hall
> >>
> >> Virginia Tech Department of Biochemistry
> >> 340 West Campus Dr.
> >> Blacksburg, VA 24061
> >>
> >> jalem...@vt.edu | (540) 231-3129
> >> http://www.thelemkullab.com
> >>
> >> ==
> >>
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


[gmx-users] domain decomposition failure

2020-01-21 Thread Harry Mark Greenblatt
BS”D


Dear All,

  I have now run into this issue with two very different systems (one with 762 
protein and 60 DNA residues, the other with 90 protein residues).  If I try and 
carry over the velocities from the final equilibration step into a production 
run, and try to use more than one MPI rank, I get an error like

Not all bonded interactions have been properly assigned to the domain 
decomposition cells
A list of missing interactions:
            Bond of   7320 missing     41
           Angle of  25159 missing    193
     Proper Dih. of  39680 missing    450
   Improper Dih. of   2524 missing     15
           LJ-14 of  35682 missing    286

and the job hangs.

This occurs in Version 2019.4 and 2019.5, and both systems were solvated with 
water and ions, and neutralised wrt charge.

The only way to proceed is to either:

1.  Run one MPI rank (rather slow)

or

2.  Set
  continuation   = no
  gen_vel= yes

I presume the desirable thing to do is to carry over the velocities, but that 
does not seem to work properly.  I can provide more details to the developers, 
if this appears to be abnormal behaviour.

Thanks


Harry




Harry M. Greenblatt
Associate Staff Scientist
Dept of Structural Biology   
harry.greenbl...@weizmann.ac.il
Weizmann Institute of SciencePhone:  972-8-934-6340
234 Herzl St.Facsimile:   972-8-934-3361
Rehovot, 7610001
Israel


Re: [gmx-users] domain decomposition error

2015-10-27 Thread Tsjerk Wassenaar
Hi SMA,

It says you have bonds over large distances. Check the
structure/topology/setup.
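
In the log below, the telltale sign is the 9.371 nm two-body (LJ-14, i.e. intramolecular) distance between atoms 622 and 624 in a 4.7 x 4.7 x 9.4 box: that is almost the full z edge, which usually means a molecule in the starting structure is split across the periodic boundary (or the topology does not match the coordinates), and it pushes the minimum DD cell size to ~10.3 nm so that zero cells fit. One hedged way to check and repair a PBC-broken starting structure (file names are placeholders; use plain trjconv on 4.x versions):

gmx trjconv -s topol.tpr -f start.gro -pbc whole -o start_whole.gro

and then regenerate the tpr from the rejoined coordinates.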

Cheers,

Tsjerk
On Oct 27, 2015 08:02, "Musharaf Ali"  wrote:

> Dear users
> During energy minimization for an IL-water system in a box of size 4.7 x 4.7 x 9.4
> with 432 BMIMTF2N and 3519 water molecules, the following error is written
> in the md.log file.
>
> Initializing Domain Decomposition on 144 nodes
> Dynamic load balancing: no
> Will sort the charge groups at every domain (re)decomposition
> Initial maximum inter charge-group distances:
> two-body bonded interactions: 9.371 nm, LJ-14, atoms 622 624
>   multi-body bonded interactions: 9.371 nm, Angle, atoms 620 622
> Minimum cell size due to bonded interactions: 10.308 nm
> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.218 nm
> Estimated maximum distance required for P-LINCS: 0.218 nm
> Guess for relative PME load: 0.15
> Will use 120 particle-particle and 24 PME only nodes
> This is a guess, check the performance at the end of the log file
> Using 24 separate PME nodes, as guessed by mdrun
> Optimizing the DD grid for 120 cells with a minimum initial size of 10.308
> nm
> The maximum allowed number of cells is: X 0 Y 0 Z 0
>
> ---
> Program mdrun_mpi_d, VERSION 4.6.1
> Source code file: /root/GROMACS-GPU/gromacs-4.6.1/src/mdlib/domdec.c, line:
> 6775
>
> Fatal error:
> There is no domain decomposition for 120 nodes that is compatible with the
> given box and a minimum cell size of 10.3078 nm
> Change the number of nodes or mdrun option -rdd
> Look in the log file for details on the domain decomposition
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
> Could you please suggest how to get rid of it.
>
> Thanks in advance
>
> SMA


Re: [gmx-users] Domain decomposition error

2015-10-29 Thread Justin Lemkul



On 10/29/15 4:56 AM, badamkhatan togoldor wrote:

Dear GMX Users,
I am simulating the free energy of a protein chain_A in water, in parallel.
Then I got a domain decomposition error in mdrun:

Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file

---
Program mdrun_mpi, VERSION 5.1.1-dev-20150819-f10f108
Source code file: /tmp/asillanp/gromacs/src/gromacs/domdec/domdec.cpp, line: 6969

Fatal error:
There is no domain decomposition for 15 ranks that is compatible with the
given box and a minimum cell size of 5.68559 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

Then I looked through the .log file; there were 24 ranks. So how can I change
these ranks? What's wrong here? Is something wrong in my .mdp file, or in the
way my parallel script is constructed? I am using just 2 nodes with 24 CPUs,
and I don't think my system is too small (one protein chain, around 8000
solvent molecules and a few ions).

Initializing Domain Decomposition on 24 ranks
Dynamic load balancing: off
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 5.169 nm, LJC Pairs NB, atoms 81 558
  multi-body bonded


Given the two-body interaction length, your .mdp file probably specifies 
couple-intramol = no, which generates explicit pairs and exclusions for 
intramolecular interactions, thus driving up the minimum size of a DD cell 
considerably.  So your system is incompatible with more than a few DD cells.


The better question is why you're trying to decouple an entire protein; that is 
extremely impractical and unlikely to be useful.


-Justin


interactions: 0.404 nm, Ryckaert-Bell., atoms 521 529
Minimum cell size due to bonded interactions: 5.686 nm
Maximum distance for 13 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
Guess for relative PME load: 0.38
Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file
Using 9 separate PME ranks, as guessed by mdrun
Optimizing the DD grid for 15 cells with a minimum initial size of 5.686 nm
The maximum allowed number of cells is: X 1 Y 1 Z 0

Can anybody help with this issue?
Tnx
Khatnaa



--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Domain decomposition error

2015-10-30 Thread badamkhatan togoldor
Thank you Justin. 
>The better question is why you're trying to decouple an entire protein; that 
>is 
>extremely impractical and unlikely to be useful.

Did I do that? Then it's my mistake, due to a lack of knowledge of that. How do I fix that?
Khatnaa


 On Friday, 30 October 2015, 1:14, Justin Lemkul  wrote:
   

 

On 10/29/15 4:56 AM, badamkhatan togoldor wrote:
> Dear GMX Users, I am simulating a free energy of a protein chain_A in water
> by parallel. Then i got domain decomposition error in mdrun. Will use 15
> particle-particle and 9 PME only ranksThis is a guess, check the performance
> at the end of the log file
> ---Program mdrun_mpi,
> VERSION 5.1.1-dev-20150819-f10f108Source code file:
> /tmp/asillanp/gromacs/src/gromacs/domdec/domdec.cpp, line: 6969 Fatal
> error:There is no domain decomposition for 15 ranks that is compatible with
> the given box and a minimum cell size of 5.68559 nmChange the number of ranks
> or mdrun option -rddLook in the log file for details on the domain
> decomposition
>
> Then i look through the .log file, there was 24 rank . So how can i change
> this ranks? What's wrong in here? Or something wrong in my .mdp file ?  Or
> wrong construction on my script in parallel ? I am using just 2 nodes with 24
> cpu. Then i don't think my system is too small (one protein chain, solvent is
> around 8000 molecules and few ions). Initializing Domain Decomposition on 24
> ranksDynamic load balancing: offWill sort the charge groups at every domain
> (re)decompositionInitial maximum inter charge-group distances:    two-body
> bonded interactions: 5.169 nm, LJC Pairs NB, atoms 81 558  multi-body bonded

Given the two-body interaction length, your .mdp file probably specifies 
couple-intramol = no, which generates explicit pairs and exclusions for 
intramolecular interactions, thus driving up the minimum size of a DD cell 
considerably.  So your system is incompatible with more than a few DD cells.

The better question is why you're trying to decouple an entire protein; that is 
extremely impractical and unlikely to be useful.

-Justin

> interactions: 0.404 nm, Ryckaert-Bell., atoms 521 529Minimum cell size due to
> bonded interactions: 5.686 nmMaximum distance for 13 constraints, at 120 deg.
> angles, all-trans: 0.218 nmEstimated maximum distance required for P-LINCS:
> 0.218 nmGuess for relative PME load: 0.38Will use 15 particle-particle and 9
> PME only ranksThis is a guess, check the performance at the end of the log
> fileUsing 9 separate PME ranks, as guessed by mdrunOptimizing the DD grid for
> 15 cells with a minimum initial size of 5.686 nmThe maximum allowed number of
> cells is: X 1 Y 1 Z 0 Can anybody help this issue? Tnx Khatnaa
>

-- 
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


  

Re: [gmx-users] Domain decomposition error

2015-10-30 Thread Justin Lemkul



On 10/30/15 7:09 AM, badamkhatan togoldor wrote:

Thank you Justin.

The better question is why you're trying to decouple an entire protein; that is
extremely impractical and unlikely to be useful.


Did I do that? Then it's my mistake, due to a lack of knowledge of that. How do I fix that?
Khatnaa



You need to tell us what you're hoping to achieve if you want useful help.

-Justin



  On Friday, 30 October 2015, 1:14, Justin Lemkul  wrote:




On 10/29/15 4:56 AM, badamkhatan togoldor wrote:

Dear GMX Users,
I am simulating the free energy of a protein chain_A in water, in parallel.
Then I got a domain decomposition error in mdrun:

Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file

---
Program mdrun_mpi, VERSION 5.1.1-dev-20150819-f10f108
Source code file: /tmp/asillanp/gromacs/src/gromacs/domdec/domdec.cpp, line: 6969

Fatal error:
There is no domain decomposition for 15 ranks that is compatible with the
given box and a minimum cell size of 5.68559 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

Then I looked through the .log file; there were 24 ranks. So how can I change
these ranks? What's wrong here? Is something wrong in my .mdp file, or in the
way my parallel script is constructed? I am using just 2 nodes with 24 CPUs,
and I don't think my system is too small (one protein chain, around 8000
solvent molecules and a few ions).

Initializing Domain Decomposition on 24 ranks
Dynamic load balancing: off
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 5.169 nm, LJC Pairs NB, atoms 81 558
  multi-body bonded


Given the two-body interaction length, your .mdp file probably specifies
couple-intramol = no, which generates explicit pairs and exclusions for
intramolecular interactions, thus driving up the minimum size of a DD cell
considerably.  So your system is incompatible with more than a few DD cells.

The better question is why you're trying to decouple an entire protein; that is
extremely impractical and unlikely to be useful.

-Justin


interactions: 0.404 nm, Ryckaert-Bell., atoms 521 529
Minimum cell size due to bonded interactions: 5.686 nm
Maximum distance for 13 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
Guess for relative PME load: 0.38
Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file
Using 9 separate PME ranks, as guessed by mdrun
Optimizing the DD grid for 15 cells with a minimum initial size of 5.686 nm
The maximum allowed number of cells is: X 1 Y 1 Z 0

Can anybody help with this issue?
Tnx
Khatnaa





--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error

2015-10-30 Thread badamkhatan togoldor
I think I've just found my mistake. Thank you so much again.
Khatnaa
 


  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Domain decomposition error

2018-04-15 Thread Justin Lemkul



On 4/15/18 9:29 AM, Dawid das wrote:

Dear Gromacs Users,

I run numerous MD simulations of similar protein-in-water-box systems, and
for only one system do I encounter this error:

Fatal error:
There is no domain decomposition for 4 ranks that is compatible with the given
box and a minimum cell size of 3.54253 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

I found an explanation of this error in the GROMACS documentation as well as
on the mailing list, but I still do not understand why I get it for only one
system out of many. There is nothing special about it; its size, for instance,
is similar to that of the other systems.

What can be the source of this error, then? Can it be the system size or the
placement of charge groups?


A "normal" protein-in-water system will never have such a minimum cell 
size unless you're doing something unconventional, like long-distance
restraints, a free energy calculation that adds explicit intramolecular
exclusions, or some other unusual topology element. If you can give more
details (including the information from the .log file about how DD determines
this minimum cell size), we can probably say more.
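
For example, the relevant section can be pulled out of the log with something
like (the log file name is a placeholder):

  grep -A 15 "Initializing Domain Decomposition" md.log

which lists the per-interaction distances that end up setting the minimum
cell size.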


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error

2018-04-15 Thread Dawid das
Well, I do not do anything special when preparing this system compared to
the other systems that do not show this issue.

I have now carefully inspected my system and I know what is wrong. I made some
manipulations to the PDB file because of a missing fragment of a residue, and
accidentally placed the NZ atom of a lysine about 3.5 nm from the side-chain
carbon...

Sorry for bothering.

Best wishes,
Dawid

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition error

2018-06-18 Thread Mark Abraham
Hi,

The implicit solvent support got a bit broken between 4.5 and 4.6, and
nobody has yet worked out how to fix it, sorry. If you can run with 1 cpu,
do that. Otherwise, please use GROMACS 4.5.7.
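
Depending on the build, a single-core run looks something like this (binary
and file names are placeholders):

  mdrun -ntmpi 1 -ntomp 1 -deffnm md       (4.6, thread-MPI build)
  mpirun -np 1 mdrun_mpi -deffnm md        (4.5.x/4.6, MPI build)

With a single rank there is no domain decomposition at all, so only the LINCS
problem remains to be fixed.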

Mark

On Mon, Jun 18, 2018 at 9:21 AM Chhaya Singh 
wrote:

> I am running a simulation of a protein in implicit solvent, using the Amber
> ff99sb force field and GBSA as the solvent model.
> I am not able to use more than one CPU; it always gives a domain
> decomposition error if I use more than one CPU.
> When I tried running on one CPU, it gave me this error:
> "Fatal error:
> Too many LINCS warnings (12766)
> If you know what you are doing you can adjust the lincs warning threshold
> in your mdp file
> or set the environment variable GMX_MAXCONSTRWARN to -1,
> but normally it is better to fix the problem".
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition problems

2017-01-29 Thread Justin Lemkul



On 1/29/17 4:33 AM, Albert wrote:

Hello,

I am trying to run MD simulation for a system:


box size: 105.166 x 105.166 x 105.166
atoms: 114K
FF: Amber99SB

I submitted the job with command line:

srun -n 1 gmx_mpi grompp -f mdp/01-em.mdp -o 60.tpr -n -c ion.pdb
srun -n 12 gmx_mpi mdrun -s 60.tpr -v -g 60.log -c 60.gro -x 60.xtc

but it always failed with messages:

There is no domain decomposition for 12 ranks that is compatible with the given
box and a minimum cell size of 4.99678 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I googled it, and the usual answer to this problem is that the system is too
small for a large number of CPUs. However, I don't think 12 CPUs is too many
for my system, which contains 114 K atoms in all.


Does anybody have other suggestions?



Check the information in the .log file for the DD setup.  Your minimum cell
size is pretty large, which suggests long-range interactions that are limiting
the DD cell size, usually some kind of long-distance restraint or otherwise
unconventional interaction (explicit pairs/exclusions in a free energy
calculation, etc.).
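
As a rough guide from the numbers you posted: with a ~10.5 nm box and a
minimum cell size of ~5 nm, at most two cells fit along each dimension, so no
more than 2 x 2 x 2 = 8 domains are possible. A sketch that respects that
limit, with -npme 0 so that all eight ranks do particle-particle work and
everything else as in your original command:

  srun -n 8 gmx_mpi mdrun -npme 0 -s 60.tpr -v -g 60.log -c 60.gro -x 60.xtc

If the long-range interaction is intentional and you know how far those atoms
can actually get from each other during the run, mdrun -rdd can be used to set
that distance explicitly instead of letting mdrun estimate it from the
starting structure.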


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition Error

2017-03-06 Thread shweta singh
Thank you !

On Tue, Mar 7, 2017 at 9:47 AM, MRINAL ARANDHARA <
arandharamri...@iitkgp.ac.in> wrote:

> I am trying to run a lipid bilayer simulation, but during the NPT
> equilibration step I am getting the following error:
> "1 particles communicated to PME rank 6 are more than 2/3 times the
> cut-off out of the domain decomposition cell of their charge group in
> dimension y"
> I have successfully run the NVT equilibration. What may be the problem?
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>



-- 

--Thanks and Regards--

Shweta Kumari
M.Sc. Bioinformatics
Central University Of South Bihar, Patna

Project Assistant
Computational Structural Biology lab
CSIR-Institute of Genomics and Integrative Biology
Mathura Road, Sukhdev Vihar
New Delhi 110025
India

E-mail Id : shwetaasin...@gmail.com
Alternate e-mail id :  shweta.kum...@igib.in / shweta...@cub.ac.in
Mobile No. 8409033301
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition Error

2017-03-06 Thread Mark Abraham
Hi,

There's good advice for this problem at the link that was suggested in
the error message: http://www.gromacs.org/Documentation/Errors. Probably
your box volume or NpT protocol needs some attention.

Mark

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition Error

2017-03-07 Thread MRINAL ARANDHARA
Thank you Mark for the reply.
The error comes during the NPT equilibration step only, and not during the
NVT equilibration step. I have successfully done 1 ns of NVT equilibration.


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition Error

2017-03-07 Thread Mark Abraham
Hi,

Exactly. NVT not exploding doesn't mean it's ready for NpT, particularly if
the volume is just wrong, or you try to use Parrinello-Rahman too soon.
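
A common pattern (illustrative .mdp values, not a prescription for this
particular system) is to relax the box first with the gentler Berendsen
barostat and only switch to Parrinello-Rahman once the density has settled:

  pcoupl           = berendsen
  pcoupltype       = isotropic
  tau-p            = 2.0
  ref-p            = 1.0
  compressibility  = 4.5e-5

and then regenerate the .tpr with pcoupl = parrinello-rahman for production.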

Mark

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition Error

2017-03-07 Thread MRINAL ARANDHARA
Hello Mark,

Should I increase the NVT steps, or switch to Berendsen for the NPT
simulation?

Actually my system is very big and has around 147891 water residues, and more
than one residue with the same residue name SOL. I hope that's not the
problem.


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition Error

2017-03-07 Thread Mark Abraham
Hi,

On Tue, Mar 7, 2017 at 12:24 PM MRINAL ARANDHARA <
arandharamri...@iitkgp.ac.in> wrote:

> Hello Mark,
>
> should I increase the NVT steps


If either of my guesses about the problem were correct, would this help?


> or switch to berendsen for the NPT simulation
>

I can't tell, because I know nothing about your volume, expected density or
NPT protocol. There are lots of resources that make suggestions for
equilibration protocols, and some of them are even linked from the errors
page I already pointed you towards. Did you explore there?


> Actually my system is very big and has around 147891 water residues, and
> more than one residue with the same residue name SOL. I hope that's not the
> problem.
>

Yes, that's true for you and every other person doing explicit solvent
simulations :-)

Mark


Re: [gmx-users] Domain decomposition error

2017-05-18 Thread Justin Lemkul



On 5/18/17 5:59 AM, Kashif wrote:

I get this error every time I try to simulate one of my protein-ligand
complexes.

 ---
Program mdrun, VERSION 4.6.6
Source code file: /root/Documents/gromacs-4.6.6/src/mdlib/pme.c, line: 851

Fatal error:
1 particles communicated to PME node 5 are more than 2/3 times the cut-off
out of the domain decomposition cell of their charge group in dimension y.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
...

The same parameters in the .mdp files worked fine for the other drug-protein
complexes, but this complex is creating trouble.
Kindly help.



http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System
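
A first pass at that diagnosis, for example, is to look at the energies and
temperature in the steps leading up to the failure (4.6-era tool name, .edr
file name is a placeholder):

  g_energy -f md.edr -o energy.xvg

and to inspect the last written frames around the atoms named in any LINCS
warnings.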

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain decomposition for parallel simulations

2018-02-09 Thread Kevin C Chan
Dear Users,

I have encountered the problem of "There is no domain decomposition for n
nodes that is compatible with the given box and a minimum cell size of x
nm" and by reading through the gromacs website and some threads I
understand that the problem might be caused by breaking the system into too
small boxes by too many ranks. However, I have no idea how to get the
correct estimation of suitable paralleling parameters. Hope someone could
share his experience.

Here are information stated in the log file:
*Initializing Domain Decomposition on 4000 ranks*
*Dynamic load balancing: on*
*Will sort the charge groups at every domain (re)decomposition*
*Initial maximum inter charge-group distances:*
*two-body bonded interactions: 0.665 nm, Dis. Rest., atoms 23558 23590*
*  multi-body bonded interactions: 0.425 nm, Proper Dih., atoms 12991 12999*
*Minimum cell size due to bonded interactions: 0.468 nm*
*Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.819
nm*
*Estimated maximum distance required for P-LINCS: 0.819 nm*
*This distance will limit the DD cell size, you can override this with
-rcon*
*Guess for relative PME load: 0.11*
*Will use 3500 particle-particle and 500 PME only ranks*
*This is a guess, check the performance at the end of the log file*
*Using 500 separate PME ranks, as guessed by mdrun*
*Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25*
*Optimizing the DD grid for 3500 cells with a minimum initial size of 1.024
nm*
*The maximum allowed number of cells is: X 17 Y 17 Z 17*

And I got this afterwards:
*Fatal error:*
*There is no domain decomposition for 3500 ranks that is compatible with
the given box and a minimum cell size of 1.02425 nm*

Here are some questions:
1. the maximum allowed number of cells is 17x17x17 which is 4913 and seems
to be larger than the requested 3500 particle-particle ranks, so the
minimum cell size is not causing the problem?
2. Where does this 1.024 nm comes from? We can see the inter charge-group
distances are listed as 0.665 and 0.425 nm
3. The distance restraint between atoms 23558 23590 was set explicitly (or
added manually) in the topology file and should be around 0.32 nm by using
[intermolecular_interactions]. How could I know my manual setting is
working or not? As it has shown a different value.


Thanks in advance,
Kevin
OSU
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain decomposition error with -rerun

2018-04-27 Thread Sahithya S Iyer
Hi,

I am trying to calculate interaction between specific residues using gmx
mdrun -rerun flag. The trajectory was in a dcd format, which I converted to
a trr file. I get the following error -

Domain decomposition has not been implemented for box vectors that have
non-zero components in directions that do not use domain decomposition:
ncells = 1 8 1, box vector[2] = 0.00 10.536000 0.00

Can someone please tell me what could be going wrong here ?
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain decomposition and large molecules

2018-12-11 Thread Tommaso D'Agostino
Dear all,

I have a system of 27000 atoms, that I am simulating on both local and
Marconi-KNL (cineca) clusters. In this system, I simulate a small molecule
that has a graphene sheet attached to it, surrounded by water. I have
already simulated with success this molecule in a system of 6500 atoms,
using a timestep of 2fs and LINCS algorithm. These simulations have run
flawlessly when executed with 8 mpi ranks.

Now I have increased the length of the graphene part and the number of
waters surrounding my molecule, arriving at a total of 27000 atoms;
however, every simulation that I try to launch on more than 2 CPUs, or with
a timestep greater than 0.5 fs, seems to crash sooner or later (strangely,
during multiple attempts with 8 CPUs I was able to run up to 5 ns of
simulation before the crashes; sometimes, however, the crashes happen after
as little as 100 ps). When I obtain an error before the crash
(sometimes the simulation just hangs without giving any error) I get a
series of LINCS warnings, followed by a message like:

Fatal error:
An atom moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated

The crashes involve a part of the molecule that I have not changed when
increasing the graphene part, and I have already checked twice that there
are no missing/wrong terms in the molecule topology. Again, I have not
modified at all the part of the molecule that crashes.

I have already tried to increase lincs-order or lincs-iter up to 8,
decrease nlist to 1, and increase rlist to 5.0, without any success. I have
also tried (without success) to use a single charge group for the whole
molecule, but I would like to avoid this, as point charges may affect my
analysis.

One note: I am using a V-rescale thermostat with a tau_t of 40 picoseconds,
and every 50ps the simulation is stopped and started again from the last
frame (preserving the velocities). I want to leave these options as they
are, for consistency with other system used for this work.

Do you have any suggestions on things I could try in order to run these
simulations with decent performance? Even with so few atoms, if I do not use
a timestep greater than 0.5 fs, or if I do not use more than 2 CPUs, I cannot
get more than 4 ns/day. I think it may be connected with domain
decomposition, but option -pd was removed from recent versions of GROMACS (I
am using GROMACS 2016.1), so I cannot check that.
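
What I can still check is which grid mdrun actually chose; that is reported
near the top of the .log file and can be grepped out, for instance (the log
name is a placeholder):

  grep -m 1 "Domain decomposition grid" md.log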

Attached to this mail, you may find the input .mdp file used for the
simulation.

Thanks in advance for the help,

   Tommaso D'Agostino
   Postdoctoral Researcher

  Scuola Normale Superiore,

Palazzo della Carovana, Ufficio 99
  Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Domain decomposition error with implicit solvent

2014-07-16 Thread Sapna Sarupria
Dear All,

I am running simulations of BMP2 protein and graphite sheet using
implicit solvent model (mdp file is pasted below). The graphite atoms
are frozen in the simulation and BMP2 is free to translate.
I got an error "Step 1786210: The domain decomposition grid has
shifted too much in the Z-direction around cell 0 0 0" after 1749.7 ps
of the simulation.

I then restarted the simulation without changing anything using the
cpt file created from the previous (crashed) run and the simulation
continues. It has run for over 60 ps now and is continuing to run.
This is something we tried based on a previous email on gmxlist from
David van der Spoel. We are using gromacs 4.5.5.
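
For reference, the restart was done along these lines (file names are
placeholders; with the 4.5-era binaries there is no gmx wrapper):

  mdrun -s topol.tpr -cpi state.cpt -append

so the run simply continues from the last checkpoint of the crashed job.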

Any idea what this error may be due to? We know that the system is not
blowing up since it continues to run with the cpt file.

Thanks,
Sapna

 Start MDP file 

dt  =  0.001; time step
nsteps  =  500  ; number of steps
;nstcomm =  10  ; reset c.o.m. motion
nstxout =  10   ; write coords
nstvout =  10   ; write velocities
nstlog  =  10   ; print to logfile
nstenergy   =  10   ; print energies
xtc_grps=  System
nstxtcout   =  10
nstlist =  10   ; update pairlist
ns_type =  grid ; pairlist method
pbc =  no
rlist   =  1.5
rcoulomb=  1.5
rvdw=  1.5
implicit-solvent=  GBSA
sa-algorithm=  Ace-approximation
gb_algorithm=  OBC
rgbradii=  1.5
gb-epsilon-solvent  =  78.3
Tcoupl  =  V-rescale
ref_t   =  300.0
tc-grps =  System
tau_t   =  0.5
gen_vel =  yes  ; generate init. vel
gen_temp=  300  ; init. temp.
gen_seed=  372340   ; random seed
;constraints =  all-bonds; constraining bonds with H
;constraint_algorithm = lincs
refcoord-scaling=  all
comm_mode   = ANGULAR
freezegrps  = Graphite
freezedim   = Y Y Y

 End MDP file 

-- 
Sapna Sarupria
Assistant Professor
Department of Chemical and
Biomolecular Engineering
128 Earle Hall
Clemson University
Clemson, SC 29634
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain Decomposition error with Implicit Solvent

2014-07-18 Thread Siva Dasetty
Dear All,

I am running simulations of BMP2 protein and graphite sheet using implicit 
solvent model (mdp file is pasted below). The graphite atoms are frozen in the 
simulation and BMP2 is free to translate.
I got an error "Step 1786210: The domain decomposition grid has shifted too 
much in the Z-direction around cell 0 0 0" after 1749.7 ps of the simulation. 

I then restarted the simulation without changing anything using the cpt file 
created from the previous (crashed) run and the simulation continues. It has 
run for over 60 ps now and is continuing to run. This is something we tried 
based on a previous email on gmxlist from David van der Spoel. We are using 
gromacs 4.5.5.

Any idea what this error may be due to? We know that the system is not blowing 
up since it continues to run with the cpt file. 

Thanks,
Siva

 Start MDP file 

dt  =  0.001; time step
nsteps  =  500  ; number of steps
;nstcomm =  10  ; reset c.o.m. motion
nstxout =  10   ; write coords
nstvout =  10   ; write velocities
nstlog  =  10   ; print to logfile
nstenergy   =  10   ; print energies
xtc_grps=  System
nstxtcout   =  10
nstlist =  10   ; update pairlist
ns_type =  grid ; pairlist method
pbc =  no
rlist   =  1.5
rcoulomb=  1.5
rvdw=  1.5
implicit-solvent=  GBSA
sa-algorithm=  Ace-approximation
gb_algorithm=  OBC
rgbradii=  1.5
gb-epsilon-solvent  =  78.3
Tcoupl  =  V-rescale
ref_t   =  300.0 
tc-grps =  System
tau_t   =  0.5  
gen_vel =  yes  ; generate init. vel
gen_temp=  300  ; init. temp.
gen_seed=  372340   ; random seed
;constraints =  all-bonds; constraining bonds with H
;constraint_algorithm = lincs
refcoord-scaling=  all
comm_mode   = ANGULAR
freezegrps  = Graphite
freezedim   = Y Y Y

 End MDP file 
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition for parallel simulations

2018-02-09 Thread Mark Abraham
Hi,

On Fri, Feb 9, 2018, 17:15 Kevin C Chan  wrote:

> Dear Users,
>
> I have encountered the problem of "There is no domain decomposition for n
> nodes that is compatible with the given box and a minimum cell size of x
> nm" and by reading through the gromacs website and some threads I
> understand that the problem might be caused by breaking the system into too
> small boxes by too many ranks. However, I have no idea how to get the
> correct estimation of suitable paralleling parameters. Hope someone could
> share his experience.
>
> Here are information stated in the log file:
> *Initializing Domain Decomposition on 4000 ranks*
> *Dynamic load balancing: on*
> *Will sort the charge groups at every domain (re)decomposition*
> *Initial maximum inter charge-group distances:*
> *two-body bonded interactions: 0.665 nm, Dis. Rest., atoms 23558 23590*
> *  multi-body bonded interactions: 0.425 nm, Proper Dih., atoms 12991
> 12999*
> *Minimum cell size due to bonded interactions: 0.468 nm*
> *Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.819
> nm*
> *Estimated maximum distance required for P-LINCS: 0.819 nm*
>

Here we see mdrun report how large it needs to make the domains to ensure
they can do their job - in this case P-LINCS is the most demanding.

*This distance will limit the DD cell size, you can override this with
> -rcon*
> *Guess for relative PME load: 0.11*
> *Will use 3500 particle-particle and 500 PME only ranks*
> *This is a guess, check the performance at the end of the log file*
> *Using 500 separate PME ranks, as guessed by mdrun*
>

Mdrun guessed poorly, as we will see.

*Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25*
> *Optimizing the DD grid for 3500 cells with a minimum initial size of 1.024
> nm*
>

That's 1.25 × 0.819, so that domains can cope as particles move around.

*The maximum allowed number of cells is: X 17 Y 17 Z 17*
>

Thus the grid that produces 3500 ranks can have no dimension greater than
17.

And I got this afterwards:
> *Fatal error:*
> *There is no domain decomposition for 3500 ranks that is compatible with
> the given box and a minimum cell size of 1.02425 nm*
>
> Here are some questions:
> 1. the maximum allowed number of cells is 17x17x17 which is 4913 and seems
> to be larger than the requested 3500 particle-particle ranks, so the
> minimum cell size is not causing the problem?
>

It is. The prime factors of 3500 are not very forgiving: the closest
factorization, something like 25 × 14 × 10, still has one dimension larger
than 17. So mdrun painted itself into a corner when choosing 3500 PP
ranks. The choice of decomposition is not trivial (see one of my published
works, hint hint), and it is certainly possible that using less hardware
provides better performance, by making it possible for the PP and PME grids
to have mutually agreeable decompositions and thus better message-passing
performance. 4000 is very awkward given the constraint of 17. Maybe 16x16x15
overall ranks is good.
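
If you would rather steer this yourself than let mdrun guess, the PP grid and
the number of PME-only ranks can be set explicitly; a sketch that keeps every
dimension within the limit of 17 (the exact split is only an illustration and
the .tpr name is a placeholder; check the performance table afterwards):

  mpirun -np 4000 gmx_mpi mdrun -dd 16 16 15 -npme 160 -s topol.tpr

Here 16 x 16 x 15 = 3840 particle-particle ranks plus 160 PME-only ranks add
up to the 4000 ranks you launched.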

2. Where does this 1.024 nm comes from? We can see the inter charge-group
> distances are listed as 0.665 and 0.425 nm
> 3. The distance restraint between atoms 23558 23590 was set explicitly (or
> added manually) in the topology file and should be around 0.32 nm by using
> [intermolecular_interactions]. How could I know my manual setting is
> working or not? As it has shown a different value.
>

Well one of you is right, but I can't tell which :-) Try measuring it in a
different way.

Mark


> Thanks in advance,
> Kevin
> OSU
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Domain decomposition error with -rerun

2018-04-27 Thread RAHUL SURESH
Hi.

That indicates a problem with dynamic load balancing. Try building the box
with different dimensions.




-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error with -rerun

2018-04-27 Thread Sahithya S Iyer
Hi,

Thanks for the reply. I am only doing a rerun of a trajectory that has
already evolved without any dynamic load balancing problems.
-rerun only recalculates energies, right? I don't understand why the same
trajectory is giving a decomposition error now.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error with -rerun

2018-04-28 Thread RAHUL SURESH
Hi

That sounds strange to me. My best guess is that it may be due to the
conversion from the NAMD format [dcd] to the GROMACS format [trr], though I am
not sure.

So you converted the file format using VMD?



On Sat, Apr 28, 2018 at 12:26 PM, Sahithya S Iyer  wrote:

> Hi,
>
> Thanks for the reply. I am only doing a rerun of a trajectory that has
> already evolved without any dynamic load balancing problems.
> -rerun only recalculates energies, right? I don't understand why the same
> trajectory is giving a decomposition error now.
>
> On Sat, Apr 28, 2018 at 12:11 PM, RAHUL SURESH 
> wrote:
>
> > Hi.
> >
> > That indicates a  problem with dynamic load balancing. Try to build
> > different sizes of the box.
> >
> > On Sat, Apr 28, 2018 at 11:57 AM, Sahithya S Iyer 
> > wrote:
> >
> > > Hi,
> > >
> > > I am trying to calculate interaction between specific residues using
> gmx
> > > mdrun -rerun flag. The trajectory was in a dcd format, which I
> converted
> > to
> > > a trr file. I get the following error -
> > >
> > > Domain decomposition has not been implemented for box vectors that have
> > > non-zero components in directions that do not use domain decomposition:
> > > ncells
> > > = 1 8 1, box vector[2] = 0.00 10.536000 0.00
> > >
> > > Can someone please tell me what could be going wrong here ?
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at http://www.gromacs.org/
> > > Support/Mailing_Lists/GMX-Users_List before posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> >
> >
> >
> > --
> > *Regards,*
> > *Rahul Suresh*
> > *Research Scholar*
> > *Bharathiar University*
> > *Coimbatore*
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at http://www.gromacs.org/
> > Support/Mailing_Lists/GMX-Users_List before posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>



-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error with -rerun

2018-04-28 Thread Sahithya S Iyer
Yes.. I used VMD for conversion...

On Sat, Apr 28, 2018 at 12:50 PM, RAHUL SURESH 
wrote:

> Hi
>
> That sounds strange to me. My best guess is that it may be due to the
> conversion from the NAMD format [dcd] to the GROMACS format [trr], though I
> am not sure.
>
> So you converted the file format using VMD?
>
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error with -rerun

2018-04-28 Thread Mark Abraham
Hi,

Clearly the conversion tool did not produce a file that conforms to the
requirements GROMACS has for specifying periodic boxes. That may not work
well even if you ran mdrun without domain decomposition, because the
periodicity may not be interpreted correctly. Find out what the conversion
actually did with the box information.
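A quick way to see what box GROMACS actually reads from the converted file
(the file names here are placeholders) is, for example:

gmx check -f traj_converted.trr
gmx dump -f traj_converted.trr | head -n 40

If the box turns out to be missing or wrong, and the true box of the original
NAMD run is rectangular and known, rewriting the trajectory with explicit box
vectors might be enough, e.g.

gmx trjconv -f traj_converted.trr -s topol.tpr -o traj_fixed.trr -box 3.5 4.2 10.5

where the three lengths (in nm) are only an illustration and must be replaced
by the real box dimensions.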

Mark

On Sat, Apr 28, 2018, 09:54 Sahithya S Iyer  wrote:

> Yes.. I used VMD for conversion...
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error with -rerun

2018-04-28 Thread Sahithya S Iyer
Thanks for the reply Mark.

On Sat, Apr 28, 2018 at 4:32 PM, Mark Abraham 
wrote:

> Hi,
>
> Clearly the conversion tool did not produce a file that conforms to the
> requirements GROMACS has for specifying periodic boxes. That may not work
> well even if you'd run mdrun without domain decomposition because the
> periodicity may not be understood correctly. Find out what was going on and
> how the conversion may have worked.
>
> Mark
>

Re: [gmx-users] Domain decomposition error with -rerun

2018-04-28 Thread Nikhil Maroli
Check the trajectories before and after conversion and make sure there are no
PBC artifacts; if there are, fix them.
Alternatively, do the analysis with the available trajectories (for example in
VMD with Tcl scripts).
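For instance, a PBC-treated copy of the trajectory could be produced with
something like

gmx trjconv -s topol.tpr -f traj_converted.trr -o traj_whole.trr -pbc mol

before the rerun; the file names are placeholders, and -pbc whole or -pbc
nojump may be more appropriate depending on how the converted frames look.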

-- 
Regards,
Nikhil Maroli
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain decomposition distance restrains in gromacs2016.1

2017-11-10 Thread Bakary N'tji Diallo
Hello


I’m trying to run a simulation with distance restraint using Gromacs
version 2016.1-dev.

The distance restraint file contains:

[ distance_restraints ]

; ai aj type index type. low up1 up2 fac

  6602  2478  1  0   1   0.24 0.30 0.35 1.0

  6602  2504  1  0   1   0.24 0.30 0.35 1.0

  6602  3811  1  0   1   0.24 0.30 0.35 1.0



With


disre  = Simple

disre-fc = 1000


in mdp files.


And the .top file has

#include "distancerestraints.itp"

Run with:

mpirun -np ${NP} -machinefile ${PBS_NODEFILE} gmx_mpi mdrun -rdd 0.1 -cpi
-maxh 48 -deffnm md_0_1


When running grompp before the simulation run, the following note appears:

Atoms involved in distance restraints should be within the same domain. If
this is not the case mdrun generates a fatal error. If you encounter this,
use a single MPI rank (Verlet+OpenMP+GPUs work fine).

(The simulation does run with *mpirun -np 1*, but from my understanding it is
then using a single processor/core, which is slow.)
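(A single MPI rank does not have to mean a single core, though: OpenMP threads
can still be used within that rank. A sketch, with an illustrative thread
count, would be

mpirun -np 1 gmx_mpi mdrun -ntomp 12 -cpi -maxh 48 -deffnm md_0_1

so that all restrained atoms stay in one and the same domain.)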

WARNING: Can not write distance restraint data to energy file with domain
decomposition

Indeed, the simulation then fails with a fatal error.

Different mdrun options that control the domain decomposition (-rdd, -dds,
-rcon) were tried, without success.

Thank you in advance

-- 
*bakary*
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Domain decomposition and large molecules

2018-12-11 Thread Mark Abraham
Hi,

Unfortunately, you can't attach files to the mailing list. Please use a
file sharing service and share the link.

Mark

On Wed., 12 Dec. 2018, 02:20 Tommaso D'Agostino, 
wrote:

> Dear all,
>
> I have a system of 27000 atoms, that I am simulating on both local and
> Marconi-KNL (cineca) clusters. In this system, I simulate a small molecule
> that has a graphene sheet attached to it, surrounded by water. I have
> already simulated with success this molecule in a system of 6500 atoms,
> using a timestep of 2fs and LINCS algorithm. These simulations have run
> flawlessly when executed with 8 mpi ranks.
>
> Now I have increased the length of the graphene part and the number of
> waters surrounding my molecule, arriving to a total of 27000 atoms;
> however, every simulation that I try to launch on more than 2 cpus or with
> a timestep greater than 0.5fs seems to crash sooner or later (strangely,
> during multiple attempts with 8 cpus, I was able to run up to 5 ns of
> simulations prior to get the crashes; sometimes, however, the crashes
> happen as soon as after 100ps). When I obtain an error prior to the crash
> (sometimes the simulation just hangs without providing any error) I get a
> series of lincs warning, followed by a message like:
>
> Fatal error:
> An atom moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
>
> The crashes are related to a part of the molecule that I have not changed
> when increasing the graphene part, and I already checked twice that there
> are no missing/wrong terms in the molecule topology. Again, I have not
> modified at all the part of the molecule that crashes.
>
> I have already tried to increase lincs-order or lincs-iter up to 8,
> decrease nlist to 1, increase rlist to 5.0, without any success. I have
> also tried (without success) to use a unique charge group for the whole
> molecule, but I would like to avoid this, as point-charges may affect my
> analysis.
>
> One note: I am using a V-rescale thermostat with a tau_t of 40 picoseconds,
> and every 50ps the simulation is stopped and started again from the last
> frame (preserving the velocities). I want to leave these options as they
> are, for consistency with other systems used for this work.
>
> Do you have any suggestions on things I may try to launch these simulations
> with a decent performance? even with these few atoms, if I do not use a
> timestep greater than 0.5fs or if I do not use more than 2 cpus, I cannot
> get more than 4ns/day. I think it may me connected with domain
> decomposition, but option -pd was removed from last versions of gromacs (I
> am using gromacs 2016.1) and I cannot check that.
>
> Attached to this mail, you may find the input .mdp file used for the
> simulation.
>
> Thanks in advance for the help,
>
>Tommaso D'Agostino
>Postdoctoral Researcher
>
>   Scuola Normale Superiore,
>
> Palazzo della Carovana, Ufficio 99
>   Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition and large molecules

2018-12-13 Thread Tommaso D'Agostino
>
> Dear all,
>
> I have a system of 27000 atoms, that I am simulating on both local and
> Marconi-KNL (cineca) clusters. In this system, I simulate a small molecule
> that has a graphene sheet attached to it, surrounded by water. I have
> already simulated with success this molecule in a system of 6500 atoms,
> using a timestep of 2fs and LINCS algorithm. These simulations have run
> flawlessly when executed with 8 mpi ranks.
>
> Now I have increased the length of the graphene part and the number of
> waters surrounding my molecule, arriving to a total of 27000 atoms;
> however, every simulation that I try to launch on more than 2 cpus or with
> a timestep greater than 0.5fs seems to crash sooner or later (strangely,
> during multiple attempts with 8 cpus, I was able to run up to 5 ns of
> simulations prior to get the crashes; sometimes, however, the crashes
> happen as soon as after 100ps). When I obtain an error prior to the crash
> (sometimes the simulation just hangs without providing any error) I get a
> series of lincs warning, followed by a message like:
>
> Fatal error:
> An atom moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
>
> The crashes are related to a part of the molecule that I have not changed
> when increasing the graphene part, and I already checked twice that there
> are no missing/wrong terms in the molecule topology. Again, I have not
> modified at all the part of the molecule that crashes.
>
> I have already tried to increase lincs-order or lincs-iter up to 8,
> decrease nlist to 1, increase rlist to 5.0, without any success. I have
> also tried (without success) to use a unique charge group for the whole
> molecule, but I would like to avoid this, as point-charges may affect my
> analysis.
>
> One note: I am using a V-rescale thermostat with a tau_t of 40
> picoseconds, and every 50ps the simulation is stopped and started again
> from the last frame (preserving the velocities). I want to leave these
> options as they are, for consistency with other systems used for this work.
>
> Do you have any suggestions on things I may try to launch these
> simulations with a decent performance? Even with these few atoms, if I do
> not use a timestep greater than 0.5fs or if I do not use more than 2 cpus,
> I cannot get more than 4ns/day. I think it may be connected with domain
> decomposition, but option -pd was removed from last versions of gromacs (I
> am using gromacs 2016.1) and I cannot check that.
>
> This is the input mdp file used for the simulation:
> https://drive.google.com/file/d/14SeZbjNy1RyU-sGfohvtVLM9tky__GJA/view?usp=sharing
>
> Thanks in advance for the help,
>
>Tommaso D'Agostino
>Postdoctoral Researcher
>
>   Scuola Normale Superiore,
>
> Palazzo della Carovana, Ufficio 99
>   Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain Decomposition error with Implicit Solvent

2014-07-23 Thread Mark Abraham
On Mon, Jul 21, 2014 at 3:48 PM, Siva Dasetty  wrote:

> Dear All,
>
> I am running simulations of BMP2 protein and graphite sheet using implicit
> solvent model (mdp file is pasted below). The graphite atoms are frozen in
> the simulation and BMP2 is free to translate.
> I got an error "Step 1786210: The domain decomposition grid has shifted
> too much in the Z-direction around cell 0 0 0" after 1749.7 ps of the
> simulation.
>
> I then restarted the simulation without changing anything using the cpt
> file created from the previous (crashed) run and the simulation continues.
> It has run for over 60 ps now and is continuing to run. This is something
> we tried based on a previous email on gmxlist from David van der Spoel. We
> are using gromacs 4.5.5.
>
> Any idea what this error may be due to?


Could be anything. I have a hundred bucks that says no developer has ever
run with frozen groups and implicit solvent. :-) Consider yourself warned!
However, you should look at the trajectory as it approaches the failing
step to see what the trigger is - e.g. diffusion further away than the
sheet is wide, or something.
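One way to do that, as a sketch with placeholder file names and approximate
times, is to write out the frames just before the failing step and inspect
them:

trjconv -s topol.tpr -f traj.xtc -b 1740 -e 1750 -o before_crash.pdb

(using the GROMACS 4.5 tool name; in current versions this is gmx trjconv).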

Mark


> We know that the system is not blowing up since it continues to run with
> the cpt file.
>
> Thanks,
> Siva
>
>  Start MDP file 
>
> dt  =  0.001; time step
> nsteps  =  500  ; number of steps
> ;nstcomm =  10  ; reset c.o.m. motion
> nstxout =  10   ; write coords
> nstvout =  10   ; write velocities
> nstlog  =  10   ; print to logfile
> nstenergy   =  10   ; print energies
> xtc_grps=  System
> nstxtcout   =  10
> nstlist =  10   ; update pairlist
> ns_type =  grid ; pairlist method
> pbc =  no
> rlist   =  1.5
> rcoulomb=  1.5
> rvdw=  1.5
> implicit-solvent=  GBSA
> sa-algorithm=  Ace-approximation
> gb_algorithm=  OBC
> rgbradii=  1.5
> gb-epsilon-solvent  =  78.3
> Tcoupl  =  V-rescale
> ref_t   =  300.0
> tc-grps =  System
> tau_t   =  0.5
> gen_vel =  yes  ; generate init. vel
> gen_temp=  300  ; init. temp.
> gen_seed=  372340   ; random seed
> ;constraints =  all-bonds; constraining bonds with
> H
> ;constraint_algorithm = lincs
> refcoord-scaling=  all
> comm_mode   = ANGULAR
> freezegrps  = Graphite
> freezedim   = Y Y Y
>
>  End MDP file 
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain Decomposition error with Implicit Solvent

2014-07-26 Thread Siva Dasetty
Thank you Mark for the reply. 
We are not sure about it either, as it worked when we restarted the simulation
from the cpt file, and there was also no issue when we ran the same simulation
with constraints (LINCS algorithm).

Thanks,
Siva
On Jul 23, 2014, at 4:20 PM, Mark Abraham  wrote:

> On Mon, Jul 21, 2014 at 3:48 PM, Siva Dasetty  wrote:
> 
>> Dear All,
>> 
>> I am running simulations of BMP2 protein and graphite sheet using implicit
>> solvent model (mdp file is pasted below). The graphite atoms are frozen in
>> the simulation and BMP2 is free to translate.
>> I got an error "Step 1786210: The domain decomposition grid has shifted
>> too much in the Z-direction around cell 0 0 0" after 1749.7 ps of the
>> simulation.
>> 
>> I then restarted the simulation without changing anything using the cpt
>> file created from the previous (crashed) run and the simulation continues.
>> It has run for over 60 ps now and is continuing to run. This is something
>> we tried based on a previous email on gmxlist from David van der Spoel. We
>> are using gromacs 4.5.5.
>> 
>> Any idea what this error may be due to?
> 
> 
> Could be anything. I have a hundred bucks that says no developer has ever
> run with frozen groups and implicit solvent. :-) Consider yourself warned!
> However, you should look at the trajectory as it approaches the failing
> step to see what the trigger is - e.g. diffusion further away than the
> sheet is wide, or something.
> 
> Mark
> 
> 
>> We know that the system is not blowing up since it continues to run with
>> the cpt file.
>> 
>> Thanks,
>> Siva
>> 
>>  Start MDP file 
>> 
>> dt  =  0.001; time step
>> nsteps  =  500  ; number of steps
>> ;nstcomm =  10  ; reset c.o.m. motion
>> nstxout =  10   ; write coords
>> nstvout =  10   ; write velocities
>> nstlog  =  10   ; print to logfile
>> nstenergy   =  10   ; print energies
>> xtc_grps=  System
>> nstxtcout   =  10
>> nstlist =  10   ; update pairlist
>> ns_type =  grid ; pairlist method
>> pbc =  no
>> rlist   =  1.5
>> rcoulomb=  1.5
>> rvdw=  1.5
>> implicit-solvent=  GBSA
>> sa-algorithm=  Ace-approximation
>> gb_algorithm=  OBC
>> rgbradii=  1.5
>> gb-epsilon-solvent  =  78.3
>> Tcoupl  =  V-rescale
>> ref_t   =  300.0
>> tc-grps =  System
>> tau_t   =  0.5
>> gen_vel =  yes  ; generate init. vel
>> gen_temp=  300  ; init. temp.
>> gen_seed=  372340   ; random seed
>> ;constraints =  all-bonds; constraining bonds with
>> H
>> ;constraint_algorithm = lincs
>> refcoord-scaling=  all
>> comm_mode   = ANGULAR
>> freezegrps  = Graphite
>> freezedim   = Y Y Y
>> 
>>  End MDP file 
>> --
>> Gromacs Users mailing list
>> 
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain decomposition fatal error during production run

2019-02-06 Thread Nawel Mele
Dear gromacs users,

I performed an MD simulation of a dimer system, using the pull code during the
production run to force the two monomers to get closer. After 55 ns of
production I got this error:

step 30616369: Water molecule starting at atom 30591 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

Step 30616370:
The charge group starting at atom 669 moved more than the distance allowed
by the domain decomposition (1.00) in direction Z
distance out of cell 10.671883
Old coordinates:    4.987   7.414   5.096
New coordinates:   10.939   9.096  19.806
Old cell boundaries in direction Z:    3.808   9.132
New cell boundaries in direction Z:    3.822   9.134

---
Program mdrun, VERSION 4.6.5
Source code file:
/gem/00_installers/gromacs/gmx-centos6/gromacs-4.6.5/src/mdlib/domdec.c,
line: 4412

Fatal error:
A charge group moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated
---

Here is my mdp input file:

title   =  Protein in water - Production Run
cpp =  cpp
constraints =  all-bonds
integrator  =  md
dt  =  0.002; ps !
nsteps  =  5000 ; total 100 ns.
nstcomm =  1
nstxout =  50 ; 1 ns
nstvout =  2500 ; 5 ps
nstfout =  0
nstlog  =  2500 ; 5 ps
nstenergy   =  2500 ; 5 ps
nstxtcout   =  2500 ; 5 ps
xtc-precision   =  1000
nstlist =  10
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
rvdw=  1.0
; Temperature coupling
Tcoupl  =  nose-hoover
tc-grps =  system
tau_t   =  0.1
ref_t   =  300
; Energy monitoring
energygrps  =  Protein  Non-Protein
; Isotropic pressure coupling
Pcoupl  =  Parrinello-Rahman
Pcoupltype  =  isotropic
tau_p   =  1.0
compressibility =  4.5e-5
ref_p   =  1.0
; Generate velocites is off at 300 K.
gen_vel =  no

gen_temp=  300.0
gen_seed=  173529
; Mode for center of mass motion removal
comm-mode=  Linear
; Pull code
pull= umbrella
pull_geometry   = distance  ; simple distance increase
pull_dim= N N Y ; pull along z
pull_group0 = Chain_B
pull_group1 = Chain_A
pull_vec1   = 0.0 0.0 -1.0
pull_ngroups= 1 ; two groups defining one reaction coordinate
pull_start  = yes   ; define initial COM distance > 0
pull_rate1  = -0.01  ; 0.01 nm per ps = 10 nm per ns; the minus sign pulls in the opposite direction
pull_k1 = 1000  ; kJ mol^-1 nm^-2


I am confused about what exactly happens here. When I look at the COM
distance between the two monomers, it fluctuates around 2.6 nm and does not
go higher than 3 nm or lower than 2.5 nm. Can anyone help me here?

Many thanks,

Regards,

Nawel
-- 

Dr Nawel Mele,
T: +33 (0) 634443794 (Fr)

+44 (0) 7704331840 (UK)
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain decomposition error tied to free energy perturbation

2016-03-19 Thread Ryan Muraglia
Hello,

I have been attempting to carry out some free energy calculations, but to
verify the sanity of my parameters, I decided to test them on a structure I
knew to be stable -- the lysozyme from Lemkul's lysozyme in water tutorial.

I chose the L75A mutation because it is out on the surface to minimize the
"difficulty of the transformation."
Using my regular mdp file (even with my mutation topology generated with
the pmx package), my minimization runs to completion with no errors.

Once I introduce the following lines to my mdp file:

"
; Free energy calculations
free_energy = yes
delta_lambda = 0 ; no Jarzynski non-eq
calc_lambda_neighbors = 1 ; only calculate energy to immediate neighbors
(suitable for BAR, but MBAR needs all)
sc-alpha = 0.5
sc-coul  = no
sc-power = 1.0
sc-sigma = 0.3
couple-moltype   = Protein_chain_A  ; name of moleculetype to
decouple
couple-lambda0   = vdw-q  ; all interactions
couple-lambda1   = vdw ; remove electrostatics, only vdW
couple-intramol  = no
nstdhdl  = 100

; lambda vectors ; decharging only.
; init_lambda_state   0   1   2   3   4   5   6   7   8   9   10
init_lambda_state = 00
coul_lambdas =0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
vdw_lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
bonded_lambdas =  0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; match
vdw
mass_lambdas =0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; match
vdw
temperature_lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; not
doing simulated tempering
"

I notice two things:
1) Running grompp to generate the tpr file takes much longer
2) The minimization fails to run due to the following error related to domain
decomposition:

"
Fatal error:
There is no domain decomposition for 4 ranks that is compatible with the
given box and a minimum cell size of 5.51109 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition
"

I noted that it lists a two-body bonded interaction with a strangely large
distance:

"
Initial maximum inter charge-group distances:
two-body bonded interactions: 5.010 nm, LJC Pairs NB, atoms 1074 1937
  multi-body bonded interactions: 0.443 nm, Proper Dih., atoms 1156 1405
Minimum cell size due to bonded interactions: 5.511 nm
"

Atom 1074 corresponds to a hydrogen off the beta-carbon of proline 70, and
atom 1937 refers to a hydrogen on arginine 128. Neither residue is part of
the protein that is being mutated, and they certainly should not be bonded.
The [bonds] directive in the topology confirms that there should be no
interaction between these atoms.
To force the run to begin to get more information on the nature of the
error, I gave mdrun the -nt 1 option, and got the following warning at the
beginning of the minimization (which goes on to end prematurely prior to
reaching the desired Fmax):

"
WARNING: Listed nonbonded interaction between particles 1 and 195
at distance 2.271 which is larger than the table limit 2.200 nm.
"

I'm at a loss to understand why the addition of my FEP parameters is causing
this error, and why it appears to make the grompp parser decide that there is
a bond where there shouldn't be, forcing the minimum box size to exceed what
makes sense for domain decomposition.

Additional information that may be relevant: I am using the amber99sb
forcefield with explicit tip3p waters. I am attempting steepest descent
minimization. rcoulomb and rvdw are both set to 1.2.

Any advice would be greatly appreciated. Thank you!


-- 
Ryan Muraglia
rmurag...@gmail.com
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Domain Decomposition does not support simple neighbor searching.

2016-06-08 Thread Daniele Veclani
Dear Gromacs Users

I'm trying to do a simulation in the NVE ensemble in vacuum, but I find this
error:

"Domain Decomposition does not support simple neighbor searching, use grid
searching or run with one MPI rank"

If I use ns_type=grid I can generate the .tpr file, but when I run mdrun I
find:

"NOTE: This file uses the deprecated 'group' cutoff scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.

and the mdrun program crashes.


How can I do energy minimization and simulation in the NVE ensemble in
vacuum with GROMACS 5.0.4?

This is my .mdp file for energy minimization:

; Run control
integrator   = steep
nsteps   = 50
; EM criteria and other stuff
emtol= 10
emstep   = 0.001
niter= 20
nbfgscorr= 10
; Output control
nstlog   = 1
nstenergy= 1
; Neighborsearching PARAMETERS
cutoff-scheme= group
vdw-type = Cut-off
nstlist  = 1; 10 fs
ns_type  = grid ; search neighboring grid cells
pbc  = No
rlist= 0.0   ; short-range neighborlist cutoff (in nm)
rlistlong= 0.0
; OPTIONS FOR ELECTROSTATICS AND VDW
coulombtype  = cut-off ; plain cut-off electrostatics
rcoulomb-switch  = 0
rcoulomb = 0.0 ; short-range electrostatic cutoff (in nm)
rvdw = 0.0 ; short-range van der Waals cutoff (in nm)
rvdw-switch  = 0.0
epsilon_r= 1
; Apply long range dispersion corrections for Energy and Pressure
DispCorr  = No
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; EWALD/PME/PPPM parameters
pme_order= 6
ewald_rtol   = 1e-06
epsilon_surface  = 0
; Temperature and pressure coupling are off during EM
tcoupl   = no
pcoupl   = no
; No velocities during EM
gen_vel  = no
; Bond parameters
continuation = no; first dynamics run
constraint_algorithm = lincs; holonomic constraints
constraints  = h-bonds ; all bonds (even heavy atom-H
bonds) constrained
lincs_iter   = 2 ; accuracy of LINCS
lincs_order  = 4 ; also related to accuracy


To use a single MPI rank, is it correct to do the following?

mpirun -np 1 mdrun -s *.tpr

Does the possibility of using the particle decomposition method
(mdrun -pd) still exist in gromacs 5.x?

Best regards
D.V.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] domain decomposition error in the energy minimization step

2017-01-10 Thread Qasim Pars
Dear users,

I am trying to simulate a protein-ligand system including ~2 atoms with
waters using GROMACS-2016.1. The protocol I tried is forward state for the
free energy calculation. The best ligand pose used in the simulations was
obtained with AutoDock. At the beginning of the simulation GROMACS suffers from
a domain decomposition error in the energy minimization step:

Fatal error:
There is no domain decomposition for 20 ranks that is compatible with the
given box and a minimum cell size of 1.7353 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

I checked the complex structure visually. I don't see any distortion in the
structure. To check whether the problem is the number of nodes, I used 16
nodes:

gmx mdrun -v -deffnm em -nt 16

The energy minimization step was completed successfully. For the NVT step I
used 16 nodes again.

gmx mdrun -v -deffnm nvt -nt 16

After ~200 steps the system gave too many LINCS warnings.

In contrast, there is no problem with the wild-type protein when it is
simulated without using -nt 16. The domain decomposition error and LINCS
warnings arise only for the complex structure.

By the way, to keep the ligand in the active site of the protein I use bond,
angle and dihedral restraints under the [ intermolecular_interactions ] section
in the top file.

[ intermolecular_interactions ]
[ bonds ]
; ai   aj   type   lowA    up1A    up2A     kA       lowB    up1B    up2B     kB
  313  17   10     0.294   0.294   10.000   0.000    0.294   0.294   10.000   4184.000

[ angle_restraints ]
; ai   aj   ak   al   type   thetaA     fcA      multA   thetaB     fcB       multB
  312  313  17   313  1      140.445    0.000    1       140.445    41.840    1
  313  17   19   17   1      107.175    0.000    1       107.175    41.840    1

[ dihedral_restraints ]
; ai   aj   ak   al   type   phiA        dphiA    fcA      phiB        dphiB    fcB
  300  312  313  17   1        56.245    0.000    0.000      56.245    0.000    41.840
  312  313  17   19   1        -3.417    0.000    0.000      -3.417    0.000    41.840
  313  17   19   14   1      -110.822    0.000    0.000    -110.822    0.000    41.840

The mdp file used for the energy minimization is follows:

define  = -DFLEXIBLE
integrator  = steep
nsteps  = 5
emtol   = 1000.0
emstep  = 0.01
nstcomm = 100

; OUTPUT CONTROL
nstxout-compressed= 500
compressed-x-precision= 1000
nstlog= 500
nstenergy = 500
nstcalcenergy = 100

; BONDS
constraints = none

; NEIGHBOUR SEARCHING

cutoff-scheme= verlet
ns-type  = grid
nstlist  = 10
pbc  = xyz

; ELECTROSTATICS & EWALD
coulombtype  = PME
rcoulomb = 1.0
ewald_geometry   = 3d
pme-order= 4
fourierspacing   = 0.12

; VAN DER WAALS
vdwtype = Cut-off
vdw_modifier= Potential-switch
rvdw= 1.0
rvdw-switch = 0.9
DispCorr= EnerPres

; FREE ENERGY
free-energy  = yes
init-lambda  = 0
delta-lambda = 0
sc-alpha = 0.3
sc-power = 1
sc-sigma = 0.25
sc-coul  = yes
couple-moltype   = ligand
couple-intramol  = no
couple-lambda0   = vdw-q
couple-lambda1   = none
nstdhdl  = 100

I removed the free energy lines in the em.mdp and [
intermolecular_interactions ] section in the top file but GROMACS still
gives the domain decomposition error for the complex structure.

Will you please give suggestions on getting rid of the lincs warning and
domain decomposition messages?

I would appreciate any kind of help.

Thanks.

-- 
Qasim Pars
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error tied to free energy perturbation

2016-03-19 Thread Justin Lemkul



On 3/17/16 8:21 PM, Ryan Muraglia wrote:

Hello,

I have been attempting to carry out some free energy calculations, but to
verify the sanity of my parameters, I decided to test them on a structure I
knew to be stable -- the lysozyme from Lemkul's lysozyme in water tutorial.

I chose the L75A mutation because it is out on the surface to minimize the
"difficulty of the transformation."
Using my regular mdp file (even with my mutatation topology generated with
the pmx package), my minimization runs to completion with no errors.

Once I introduce the following lines to my mdp file:

"
; Free energy calculations
free_energy = yes
delta_lambda = 0 ; no Jarzynski non-eq
calc_lambda_neighbors = 1 ; only calculate energy to immediate neighbors
(suitable for BAR, but MBAR needs all)
sc-alpha = 0.5
sc-coul  = no
sc-power = 1.0
sc-sigma = 0.3
couple-moltype   = Protein_chain_A  ; name of moleculetype to
decouple
couple-lambda0   = vdw-q  ; all interactions
couple-lambda1   = vdw ; remove electrostatics, only vdW
couple-intramol  = no
nstdhdl  = 100

; lambda vectors ; decharging only.
; init_lambda_state   0   1   2   3   4   5   6   7   8   9   10
init_lambda_state = 00
coul_lambdas =0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
vdw_lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
bonded_lambdas =  0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; match
vdw
mass_lambdas =0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; match
vdw
temperature_lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; not
doing simulated tempering
"

I notice two things:
1) Running grompp to generate the tpr file takes much longer
2) The minimization fails to run due the following error related to domain
decomposition:

"
Fatal error:
There is no domain decomposition for 4 ranks that is compatible with the
given box and a minimum cell size of 5.51109 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition
"

I noted that it lists a two-body bonded interaction with a strangely large
distance:

"
Initial maximum inter charge-group distances:
 two-body bonded interactions: 5.010 nm, LJC Pairs NB, atoms 1074 1937
   multi-body bonded interactions: 0.443 nm, Proper Dih., atoms 1156 1405
Minimum cell size due to bonded interactions: 5.511 nm
"

Atom 1074 corresponds to a hydrogen off the beta-carbon of proline 70, and
atom 1937 refers to a hydrogen on arginine 128. Neither residue is part of
the protein that is being mutated, and they certainly should not be bonded.
The [bonds] directive in the topology confirms that there should be no
interaction between these atoms.


With "couple-intramol = no" (from the manual):

"All intra-molecular non-bonded interactions for moleculetype couple-moltype are 
replaced by exclusions and explicit pair interactions."


So you have a much larger distance for intramolecular interactions, hence DD 
complains and you are more limited in the number of DD cells that can be 
constructed.  Trying to decouple an entire protein chain is (1) not usually 
reasonable and (2) fraught with algorithmic challenges.
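If the whole chain really must be (de)coupled, one alternative worth knowing
about is the following mdp change (a sketch only; note that it changes what the
end states mean, because intramolecular nonbonded interactions are then scaled
with lambda as well):

couple-intramol  = yes

which should avoid generating the long-distance exclusion/pair list that is
limiting the domain decomposition here.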



To force the run to begin to get more information on the nature of the
error, I gave mdrun the -nt 1 option, and got the following warning at the
beginning of the minimization (which goes on to end prematurely prior to
reaching the desired Fmax):

"
WARNING: Listed nonbonded interaction between particles 1 and 195
at distance 2.271 which is larger than the table limit 2.200 nm.
"

I'm at a loss in terms of understanding why the addition of my FEP
parameters is causing this error, and appears to be causing the grompp
parser to decide that there is a bond where there shouldn't be, forcing the


It's not magically creating bonds; see above.  grompp is taking forever because 
it has to generate a massive list of exclusions and pairs.



minimimum box size to exceed what makes sense for domain decomposition.



If EM fails, that's usually a dead giveaway that either the topology is unsound 
or the initial coordinates are unsuitable in some way.  Without more 
information, it's hard to guess at what's going on.  Does EM proceed without the 
free energy options turned on?


-Justin


Additional information that may be relevant: I am using the amber99sb
forcefield with explicit tip3p waters. I am attempting steepest descent
minimization. rcoulomb and rvdw are both set to 1.2.

Any advice would be greatly appreciated. Thank you!






--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

Re: [gmx-users] Domain decomposition error tied to free energy perturbation

2016-03-19 Thread Ryan Muraglia
On Fri, Mar 18, 2016 at 7:47 AM, <
gromacs.org_gmx-users-requ...@maillist.sys.kth.se> wrote:
>
> Message: 4
> Date: Fri, 18 Mar 2016 07:46:48 -0400
> From: Justin Lemkul 
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Domain decomposition error tied to free
> energy perturbation
> Message-ID: <56ebeaa8.1050...@vt.edu>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
>
>
> On 3/17/16 8:21 PM, Ryan Muraglia wrote:
> > Hello,
> >
> > I have been attempting to carry out some free energy calculations, but to
> > verify the sanity of my parameters, I decided to test them on a
> structure I
> > knew to be stable -- the lysozyme from Lemkul's lysozyme in water
> tutorial.
> >
> > I chose the L75A mutation because it is out on the surface to minimize
> the
> > "difficulty of the transformation."
> > Using my regular mdp file (even with my mutatation topology generated
> with
> > the pmx package), my minimization runs to completion with no errors.
> >
> > Once I introduce the following lines to my mdp file:
> >
> > "
> > ; Free energy calculations
> > free_energy = yes
> > delta_lambda = 0 ; no Jarzynski non-eq
> > calc_lambda_neighbors = 1 ; only calculate energy to immediate neighbors
> > (suitable for BAR, but MBAR needs all)
> > sc-alpha = 0.5
> > sc-coul  = no
> > sc-power = 1.0
> > sc-sigma = 0.3
> > couple-moltype   = Protein_chain_A  ; name of moleculetype to
> > decouple
> > couple-lambda0   = vdw-q  ; all interactions
> > couple-lambda1   = vdw ; remove electrostatics, only vdW
> > couple-intramol  = no
> > nstdhdl  = 100
> >
> > ; lambda vectors ; decharging only.
> > ; init_lambda_state   0   1   2   3   4   5   6   7   8   9   10
> > init_lambda_state = 00
> > coul_lambdas =0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
> > vdw_lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
> > bonded_lambdas =  0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; match
> > vdw
> > mass_lambdas =0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; match
> > vdw
> > temperature_lambdas = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ; not
> > doing simulated tempering
> > "
> >
> > I notice two things:
> > 1) Running grompp to generate the tpr file takes much longer
> > 2) The minimization fails to run due the following error related to
> domain
> > decomposition:
> >
> > "
> > Fatal error:
> > There is no domain decomposition for 4 ranks that is compatible with the
> > given box and a minimum cell size of 5.51109 nm
> > Change the number of ranks or mdrun option -rdd
> > Look in the log file for details on the domain decomposition
> > "
> >
> > I noted that it lists a two-body bonded interaction with a strangely
> large
> > distance:
> >
> > "
> > Initial maximum inter charge-group distances:
> >  two-body bonded interactions: 5.010 nm, LJC Pairs NB, atoms 1074
> 1937
> >multi-body bonded interactions: 0.443 nm, Proper Dih., atoms 1156 1405
> > Minimum cell size due to bonded interactions: 5.511 nm
> > "
> >
> > Atom 1074 corresponds to a hydrogen off the beta-carbon of proline 70,
> and
> > atom 1937 refers to a hydrogen on arginine 128. Neither residue is part
> of
> > the protein that is being mutated, and they certainly should not be
> bonded.
> > The [bonds] directive in the topology confirms that there should be no
> > interaction between these atoms.
>
> With "couple-intramol = no" (from the manual):
>
> "All intra-molecular non-bonded interactions for moleculetype
> couple-moltype are
> replaced by exclusions and explicit pair interactions."
>
> So you have a much larger distance for intramolecular interactions, hence
> DD
> complains and you are more limited in the number of DD cells that can be
> constructed.  Trying to decouple an entire protein chain is (1) not usually
> reasonable and (2) fraught with algorithmic challenges.
>
> > To force the run to begin to get more information on the nature of the
> > error, I gave mdrun the -nt 1 option, and got the following warning at the
> > beginning of the minimization (which goes on to end prematurely prior to
> > reaching the desired Fmax):
> >
> > "
> > WARNING: Listed nonbonded interaction between particles 1 and 195
> > at dis

Re: [gmx-users] Domain Decomposition does not support simple neighbor searching.

2016-06-08 Thread Justin Lemkul



On 6/8/16 9:41 AM, Daniele Veclani wrote:

Dear Gromacs Users

I'm trying to do a simulation in an NVE ensemble in vacuum, but I find this
error:

"Domain Decomposition does not support simple neighbor searching, use grid
searching or run with one MPI rank"

If I use ns_type=grid I can generate the .tpr file, but when I run mdrun I
find:

"NOTE: This file uses the deprecated 'group' cutoff scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.

and mdrun program crashes.


How can I do energy minimization and a simulation in an NVE ensemble in
vacuum with GROMACS 5.0.4?

This is my .mdp file for energy minimization:

; Run control
integrator   = steep
nsteps   = 50
; EM criteria and other stuff
emtol= 10
emstep   = 0.001
niter= 20
nbfgscorr= 10
; Output control
nstlog   = 1
nstenergy= 1
; Neighborsearching PARAMETERS
cutoff-scheme= group
vdw-type = Cut-off
nstlist  = 1; 10 fs
ns_type  = grid ; search neighboring grid cells
pbc  = No
rlist            = 0.0   ; short-range neighborlist cutoff (in nm)
rlistlong        = 0.0
; OPTIONS FOR ELECTROSTATICS AND VDW
coulombtype      = cut-off ; Particle Mesh Ewald for long-range electrostatics
rcoulomb-switch  = 0
rcoulomb         = 0.0   ; short-range electrostatic cutoff (in nm)
rvdw             = 0.0   ; short-range van der Waals cutoff (in nm)
rvdw-switch  = 0.0
epsilon_r= 1
; Apply long range dispersion corrections for Energy and Pressure
DispCorr  = No
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; EWALD/PME/PPPM parameters
pme_order= 6
ewald_rtol   = 1e-06
epsilon_surface  = 0
; Temperature and pressure coupling are off during EM
tcoupl   = no
pcoupl   = no
; No velocities during EM
gen_vel  = no
; Bond parameters
continuation         = no      ; first dynamics run
constraint_algorithm = lincs   ; holonomic constraints
constraints          = h-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter   = 2 ; accuracy of LINCS
lincs_order  = 4 ; also related to accuracy


To use a single MPI rank, is it correct to do so:

mpirun -np 1 mdrun -s *.tpr ??



mdrun -nt 1 -s (etc) will do it, or if you want/need parallelization via OpenMP:

mdrun -nt N -ntmpi 1 -s (etc)
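
Spelled out with placeholder file names (a sketch, not a prescription), the
single-rank run would look something like:

mdrun -nt 1 -s em.tpr

With a single rank there is no domain decomposition at all, which is what the
error message asks for.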


Does the possibility of using the particle decomposition method
(mdrun -pd) still exist in GROMACS 5.x?



Nope, that's long gone.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-11 Thread Kutzner, Carsten
Dear Qasim,

those kinds of domain decomposition 'errors' can happen when you
try to distribute an MD system among too many MPI ranks. There is
a minimum cell length for each domain decomposition cell in each
dimension, which depends on the chosen cutoff radii and possibly
other inter-atomic constraints. So this is normally just a technical 
limitation and not a problem with the MD system.

You can do the following steps to circumvent that issue:

a) use fewer ranks (at the domain decomposition limit, the parallel
   efficiency suffers anyway)

b) use separate PME ranks, so that you get less and larger domains
   on the MPI ranks that do domain decomposition
   (use mdrun -nt 20 -npme 4 ... for example; you will have to experiment
   a bit with the exact number of PME ranks for optimal performance - or
   use the gmx tune_pme tool for that; see the sketch after this list!)

c) In case you haven't done so already, compile GROMACS with
   MPI *and* OpenMP. Then, by using MPI domain decomposition plus
   OpenMP parallelism within each MPI rank, you will be
   able to use more cores in parallel even for smaller MD systems.

   Use mdrun -ntmpi 10 -ntomp 2 for 10 ranks * 2 threads or
   mdrun -ntmpi 4  -ntomp 5 for 4 ranks * 5 threads

   With real MPI, you would use something like

   mpirun -np 10 gmx mdrun -ntomp 2 ... 

   Don't forget to check your simulation performance, there will 
   be better and worse choices in terms of these decomposition parameters.
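
   As a sketch of the tune_pme route (the rank count and topol.tpr here are
   just placeholders, and an MPI-enabled build is assumed), something along
   these lines benchmarks different numbers of PME ranks and reports the
   fastest setting:

      gmx tune_pme -np 20 -s topol.tpr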

Happy simulating!
  Carsten


> On 11 Jan 2017, at 08:33, Qasim Pars  wrote:
> 
> Dear users,
> 
> I am trying to simulate a protein-ligand system including ~2 atoms with
> waters using GROMACS-2016.1. The protocol I tried is forward state for the
> free energy calculation. The best ligand pose used in the simulations was
> obtained with AutoDock. At the beginning of the simulation GROMACS gives a
> domain decomposition error in the energy minimization step:
> 
> Fatal error:
> There is no domain decomposition for 20 ranks that is compatible with the
> given box and a minimum cell size of 1.7353 nm
> Change the number of ranks or mdrun option -rdd
> Look in the log file for details on the domain decomposition
> 
> I checked the complex structure visually. I don't see any distortion in the
> structure. To check whether the problem is the number of nodes, I used 16
> nodes:
> 
> gmx mdrun -v -deffnm em -nt 16
> 
> The energy minimization step was completed successfully. For the NVT step I
> used 16 nodes again.
> 
> gmx mdrun -v -deffnm nvt -nt 16
> 
> After ~200 steps the system gave too many LINCS warnings.
> 
> In contrast, there is no problem with the wild-type protein when it is
> simulated without using -nt 16. The domain decomposition error and LINCS
> warnings arise only for the complex structure.
> 
> By the way, to keep the ligand in the active site of the protein I use bond,
> angle and dihedral restraints under the [ intermolecular_interactions ]
> section in the top file.
> 
> [ intermolecular_interactions ]
> [ bonds ]
>   313    17   10   0.294   0.294   10.000      0.000   0.294   0.294   10.000   4184.000
>
> [ angle_restraints ]
>   312   313    17   313   1   140.445    0.000   1   140.445   41.840   1
>   313    17    19    17   1   107.175    0.000   1   107.175   41.840   1
>
> [ dihedral_restraints ]
>   300   312   313    17   1    56.245   0.000   0.000     56.245   0.000   41.840
>   312   313    17    19   1    -3.417   0.000   0.000     -3.417   0.000   41.840
>   313    17    19    14   1  -110.822   0.000   0.000   -110.822   0.000   41.840
> 
> The mdp file used for the energy minimization is as follows:
> 
> define  = -DFLEXIBLE
> integrator  = steep
> nsteps  = 5
> emtol   = 1000.0
> emstep  = 0.01
> nstcomm = 100
> 
> ; OUTPUT CONTROL
> nstxout-compressed= 500
> compressed-x-precision= 1000
> nstlog= 500
> nstenergy = 500
> nstcalcenergy = 100
> 
> ; BONDS
> constraints = none
> 
> ; NEIGHBOUR SEARCHING
> 
> cutoff-scheme= verlet
> ns-type  = grid
> nstlist  = 10
> pbc  = xyz
> 
> ; ELECTROSTATICS & EWALD
> coulombtype  = PME
> rcoulomb = 1.0
> ewald_geometry   = 3d
> pme-order= 4
> fourierspacing   = 0.12
> 
> ; VAN DER WAALS
> vdwtype = Cut-off
> vdw_modifier= Potential-switch
> rvdw= 1.0
> rvdw-switch = 0.9
> DispCorr= EnerPres
> 
> ; FREE ENERGY
> free-energy  = yes
> init-lambda  = 0
> delta-lambda = 0
> sc-alpha = 0.3
> sc-power = 1
> sc-sigma = 0.25
> sc-coul  = yes
> couple-moltype   = ligand
> couple-intramol  = no
> couple-lambda0   = vdw-q
> couple-lambda1   = none
> nstdhdl  = 100
> 
> I remove

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-11 Thread Qasim Pars
Dear Carsten,

Thanks. The forward state simulations work properly with mdrun -ntmpi 8
-ntomp 2 or mdrun -ntmpi 4 -ntomp 4 as you suggested.
For the backward state GROMACS still gives a "too many LINCS warnings" error
with those mdrun commands in the md step, indicating the system is far from
equilibrium. I used the free energy parameters below with the em, nvt, npt
and md steps for the backward state (fast growth). And to keep the ligand
in the active site of the protein I use some restraints under
intermolecular_interactions in the top file, as I mentioned in my previous email.

free-energy  = yes
init-lambda  = 1
delta-lambda = 0
sc-alpha = 0.3
sc-power = 1
sc-sigma = 0.25
sc-coul  = yes
couple-moltype   = ligand
couple-intramol  = no
couple-lambda0   = vdw-q
couple-lambda1   = none
nstdhdl  = 100

My questions:

1) Do you think that I should use the above free energy parameters in all
MD steps (em, nvt, npt and md)?

2) The structure seems fine before the LINCS warnings arise. The first LINCS
warning involves protein atoms (not ligand atoms). However, the ligand
doesn't move out of the active site. Maybe the following mdp file for the
md step is not correct, or the backward state simulation of a complex
structure is simply not possible with GROMACS?

3) Maybe the intermolecular_interactions section and couple- flags don't
turn off intramolecular interactions of the ligand and turn on state B?

4) How can I get rid of the LINCS warnings in the md step?

#md.mdp:

; RUN CONTROL
integrator   = sd
nsteps   = 5000
dt   = 0.002
comm-mode= Linear
nstcomm  = 100

; OUTPUT CONTROL
nstxout-compressed= 500
compressed-x-precision= 1000
nstlog= 500
nstenergy = 500
nstcalcenergy = 100

; BONDS
constraint_algorithm   = lincs
constraints= all-bonds
lincs_iter = 1
lincs_order= 4
lincs-warnangle= 30
continuation   = no

; NEIGHBOUR SEARCHING
cutoff-scheme= verlet
ns-type  = grid
nstlist  = 20
pbc  = xyz

; ELECTROSTATICS-EWALD
coulombtype  = PME
rcoulomb = 1.0
ewald_geometry   = 3d
pme-order= 4
fourierspacing   = 0.12

; VAN DER WAALS
vdwtype = Cut-off
rvdw= 1.0
rvdw-switch = 0.9
DispCorr= EnerPres

; TEMPERATURE COUPLING (SD - Langevin dynamics)
tc_grps=  Protein ligand
tau_t  =  1.0 1.0
ref_t  =  298.15 298.15

; PRESSURE COUPLING
pcoupl   = Parrinello-Rahman
pcoupltype   = isotropic
tau_p= 2
ref_p= 1.0
compressibility  = 4.5e-05

; VELOCITY GENERATION
gen_vel  = yes
gen_seed = -1
gen_temp = 298.15

;FREE ENERGY
free-energy  = yes
init-lambda  = 1
delta-lambda = 0
sc-alpha = 0.3
sc-power = 1
sc-sigma = 0.25
sc-coul  = yes
couple-moltype   = ligand
couple-intramol  = no
couple-lambda0   = vdw-q
couple-lambda1   = none
nstdhdl  = 100

Thanks in advance.

On 11 January 2017 at 10:49, Kutzner, Carsten  wrote:

> Dear Qasim,
>
> those kinds of domain decomposition 'errors' can happen when you
> try to distibute an MD system among too many MPI ranks. There is
> a minimum cell length for each domain decomposition cell in each
> dimension, which depends on the chosen cutoff radii and possibly
> other inter-atomic constraints. So this is normally just a technical
> limitation and not a problem with the MD system.
>
> You can do the following steps to circumvent that issue:
>
> a) use less ranks (at the domain decomposition limit, the parallel
>efficiency suffers anyway)
>
> b) use separate PME ranks, so that you get less and larger domains
>on the MPI ranks that do domain decomposition
>(use mdrun -nt 20 -npme 4 ... for example, you will have to try
>around a bit with the exact number of PME ranks for optimal
>performance - or use the gmx tune_pme tool for that!)
>
> c) In case you haven't done so already, compile GROMACS with
>MPI *and* OpenMP. Then, by using MPI domain decomposition plus
>OpenMP parallelism within each MPI rank, you will be
>able to use more cores in parallel even for smaller MD systems.
>
>Use mdrun -ntmpi 10 -ntomp 2 for 10 ranks * 2 threads or
>mdrun -ntmpi 4  -ntomp 5 for 4 ranks * 5 threads
>
>With real MPI, you would use something like
>
>mpirun -np 10 gmx mdrun -ntomp 2 ...
>
>Don't forget to check your simulation performance, there will
>be better and worse choices in terms of these decomposition parameters.
>
> Happy simulating!
>   Carsten
>
>
> > On 11 Jan 2017, at 08:33, Qasim Pars  wrote:
> >
> > Dear users,
> >
> > I am trying to simulate a protein-ligand system including ~2 atom

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-12 Thread Kutzner, Carsten
Hi Qasim,

> On 11 Jan 2017, at 20:29, Qasim Pars  wrote:
> 
> Dear Carsten,
> 
> Thanks. The forward state simulations works properly with mdrun -ntmpi 8
> -ntomp 2 or mdrun -ntmpi 4  -ntomp 4 as you suggested.
> For the backward state GROMACS still gives too many lincs warning error
> with those mdrun commands in the md step, indicating the system is far from
> equilibrium. I used the below free energy parameters with the em, nvt, npt
> and md steps for the backward state (fast growth). And to keep the ligand
> in the active site of protein I use some restraints under the
> intermolecular_interactions in the top file as I told in my previous email.
> 
> free-energy  = yes
> init-lambda  = 1
> delta-lambda = 0
> sc-alpha = 0.3
> sc-power = 1
> sc-sigma = 0.25
> sc-coul  = yes
> couple-moltype   = ligand
> couple-intramol  = no
> couple-lambda0   = vdw-q
> couple-lambda1   = none
> nstdhdl  = 100
> 
> My questions:
> 
> 1) Do you think that I should use the above free energy paramaters in all
> MD steps (em, nvt, npt and md)?
I think you want them only in the MD part.

> 
> 2) The structure seems fine before arising the lincs warning. First linc
> warning belongs to protein atoms (not ligand atoms). However the ligand
> doesn't move out of the active site. Maybe the following mdp file for the
> md step is not correct or the backward state simulation of a complex
> structure is something impossible with GROMACS?
It should work in both directions. 
Why is your delta-lambda zero in your snippets?

> 
> 3) Maybe the intermolecular_interactions section and couple- flags don't
> turn off intramolecular interactions of the ligand and turn on state B?
> 
> 4) How can I get rid of the linc warnings in the md step?
How many LINCS warnings do you get?
Does your system blow up?

Carsten

> 
> #md.mdp:
> 
> ; RUN CONTROL
> integrator   = sd
> nsteps   = 5000
> dt   = 0.002
> comm-mode= Linear
> nstcomm  = 100
> 
> ; OUTPUT CONTROL
> nstxout-compressed= 500
> compressed-x-precision= 1000
> nstlog= 500
> nstenergy = 500
> nstcalcenergy = 100
> 
> ; BONDS
> constraint_algorithm   = lincs
> constraints= all-bonds
> lincs_iter = 1
> lincs_order= 4
> lincs-warnangle= 30
> continuation   = no
> 
> ; NEIGHBOUR SEARCHING
> cutoff-scheme= verlet
> ns-type  = grid
> nstlist  = 20
> pbc  = xyz
> 
> ; ELECTROSTATICS-EWALD
> coulombtype  = PME
> rcoulomb = 1.0
> ewald_geometry   = 3d
> pme-order= 4
> fourierspacing   = 0.12
> 
> ; VAN DER WAALS
> vdwtype = Cut-off
> rvdw= 1.0
> rvdw-switch = 0.9
> DispCorr= EnerPres
> 
> ; TEMPERATURE COUPLING (SD - Langevin dynamics)
> tc_grps=  Protein ligand
> tau_t  =  1.0 1.0
> ref_t  =  298.15 298.15
> 
> ; PRESSURE COUPLING
> pcoupl   = Parrinello-Rahman
> pcoupltype   = isotropic
> tau_p= 2
> ref_p= 1.0
> compressibility  = 4.5e-05
> 
> ; VELOCITY GENERATION
> gen_vel  = yes
> gen_seed = -1
> gen_temp = 298.15
> 
> ;FREE ENERGY
> free-energy  = yes
> init-lambda  = 1
> delta-lambda = 0
> sc-alpha = 0.3
> sc-power = 1
> sc-sigma = 0.25
> sc-coul  = yes
> couple-moltype   = ligand
> couple-intramol  = no
> couple-lambda0   = vdw-q
> couple-lambda1   = none
> nstdhdl  = 100
> 
> Thanks in advance.
> 
> On 11 January 2017 at 10:49, Kutzner, Carsten  wrote:
> 
>> Dear Qasim,
>> 
>> those kinds of domain decomposition 'errors' can happen when you
>> try to distibute an MD system among too many MPI ranks. There is
>> a minimum cell length for each domain decomposition cell in each
>> dimension, which depends on the chosen cutoff radii and possibly
>> other inter-atomic constraints. So this is normally just a technical
>> limitation and not a problem with the MD system.
>> 
>> You can do the following steps to circumvent that issue:
>> 
>> a) use less ranks (at the domain decomposition limit, the parallel
>>   efficiency suffers anyway)
>> 
>> b) use separate PME ranks, so that you get less and larger domains
>>   on the MPI ranks that do domain decomposition
>>   (use mdrun -nt 20 -npme 4 ... for example, you will have to try
>>   around a bit with the exact number of PME ranks for optimal
>>   performance - or use the gmx tune_pme tool for that!)
>> 
>> c) In case you haven't done so already, compile GROMACS with
>>   MPI *and* OpenMP. Then, by using MPI domain decomposition plus
>>   OpenMP parallelism within each MPI rank, you will be
>>   able to use more cores in parallel even for smaller MD systems.
>> 
>>   Use mdrun -ntmpi 10 

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-12 Thread qasimpars
Hi Carsten,

I think I couldn't clearly explain the protocol that I follow. Sorry for that.
Firstly, I do the EM, nvt (100 ps), npt (100 ps) and md (100 ns) steps for
equilibration. In all those steps I use the free energy parameters below for
the forward state:

free-energy = yes
init-lambda = 0
delta-lambda = 0
...

For the equilibration simulations of the backward state I use the free
energy parameters below in all MD steps:

free-energy = yes
init-lambda = 1
delta-lambda = 0
...

After that, I check whether the system has converged in both forward and 
backward states. If yes, 50 starting snapshots are extracted from the last 50 
ns (converged part) of each of the two trajectories of 100 ns. Then 50 forward 
and 50 backward/reverse simulations of 50 ps each are carried out for the 
protein-ligand complex.
For the 50 ps forward simulations I use the free energy parameters of the
equilibrium forward simulation above, and for the 50 ps backward simulations
I likewise reuse the backward-state parameters.

The answers to your questions:
- In my snippets delta-lambda = 0 because I use a 1-step switching process in
all simulations, which is also known as the fast growth method.

If I changed delta-lambda to 0.2 or something like that, it would be the slow
growth method.
- My system blows up in the md step of the 100 ns backward state simulation
with a "too many LINCS warnings" error.

Maybe the protocol I follow is wrong, especially for the backward state?
As far as I know, the slow growth method doesn't always provide accurate
results... I therefore prefer to use the fast growth method.
Another interesting thing is that no one seems to have any experience with
this topic on the gmx user mailing list :(

Any suggestions will be appreciated.

Thanks in advance.

> On 12 Jan 2017, at 18:34, "Kutzner, Carsten"  wrote:
> 
> Hi Qasim,
> 
>> On 11 Jan 2017, at 20:29, Qasim Pars  wrote:
>> 
>> Dear Carsten,
>> 
>> Thanks. The forward state simulations works properly with mdrun -ntmpi 8
>> -ntomp 2 or mdrun -ntmpi 4  -ntomp 4 as you suggested.
>> For the backward state GROMACS still gives too many lincs warning error
>> with those mdrun commands in the md step, indicating the system is far from
>> equilibrium. I used the below free energy parameters with the em, nvt, npt
>> and md steps for the backward state (fast growth). And to keep the ligand
>> in the active site of protein I use some restraints under the
>> intermolecular_interactions in the top file as I told in my previous email.
>> 
>> free-energy  = yes
>> init-lambda  = 1
>> delta-lambda = 0
>> sc-alpha = 0.3
>> sc-power = 1
>> sc-sigma = 0.25
>> sc-coul  = yes
>> couple-moltype   = ligand
>> couple-intramol  = no
>> couple-lambda0   = vdw-q
>> couple-lambda1   = none
>> nstdhdl  = 100
>> 
>> My questions:
>> 
>> 1) Do you think that I should use the above free energy paramaters in all
>> MD steps (em, nvt, npt and md)?
> I think you want them only in the MD part.
> 
>> 
>> 2) The structure seems fine before arising the lincs warning. First linc
>> warning belongs to protein atoms (not ligand atoms). However the ligand
>> doesn't move out of the active site. Maybe the following mdp file for the
>> md step is not correct or the backward state simulation of a complex
>> structure is something impossible with GROMACS?
> It should work in both directions. 
> Why is your delta-lambda zero in your snippets?
> 
>> 
>> 3) Maybe the intermolecular_interactions section and couple- flags don't
>> turn off intramolecular interactions of the ligand and turn on state B?
>> 
>> 4) How can I get rid of the linc warnings in the md step?
> How many LINCS warnings do you get?
> Does you system blow up?
> 
> Carsten
> 
>> 
>> #md.mdp:
>> 
>> ; RUN CONTROL
>> integrator   = sd
>> nsteps   = 5000
>> dt   = 0.002
>> comm-mode= Linear
>> nstcomm  = 100
>> 
>> ; OUTPUT CONTROL
>> nstxout-compressed= 500
>> compressed-x-precision= 1000
>> nstlog= 500
>> nstenergy = 500
>> nstcalcenergy = 100
>> 
>> ; BONDS
>> constraint_algorithm   = lincs
>> constraints= all-bonds
>> lincs_iter = 1
>> lincs_order= 4
>> lincs-warnangle= 30
>> continuation   = no
>> 
>> ; NEIGHBOUR SEARCHING
>> cutoff-scheme= verlet
>> ns-type  = grid
>> nstlist  = 20
>> pbc  = xyz
>> 
>> ; ELECTROSTATICS-EWALD
>> coulombtype  = PME
>> rcoulomb = 1.0
>> ewald_geometry   = 3d
>> pme-order= 4
>> fourierspacing   = 0.12
>> 
>> ; VAN DER WAALS
>> vdwtype = Cut-off
>> rvdw= 1.0
>> rvdw-switch = 0.9
>> DispCorr= EnerPres
>> 
>> ; TEMPERATURE COUPLING (SD - Langevin dynamics)
>> tc_grps=  Protein ligand
>> tau_t  =  1.0 1.0
>> ref_t  =  298.15 298.15
>> 
>> ; PRESSURE COUPLING
>> pco

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-13 Thread Kutzner, Carsten
Hi Qasim,

> On 12 Jan 2017, at 20:22, qasimp...@gmail.com wrote:
> 
> Hi Carsten,
> 
> I think I couldn't clearly explain the protocol that I follow. Sorry for 
> that. Firstly, I do the EM, nvt (100 ps), npt (100 ps) and md (100 ns) steps 
> for the equilibrium. In all those steps I use the below free energy 
> parameters for the forward state:
> 
> free-energy = yes
> init-lambda = 0
> delta-lambda = 0
> ...
> 
> For the equilibrium simulations of the backward state I use the below free 
> energy parameters in all MD steps:
> 
> free-energy = yes
> init-lambda = 1
> delta-lambda = 0
> ...
> 
> After that, I check whether the system has converged in both forward and 
> backward states. If yes, 50 starting snapshots are extracted from the last 50 
> ns (converged part) of each of the two trajectories of 100 ns. Then 50 
> forward and 50 backward/reverse simulations of 50 ps each are carried out for 
> the protein-ligand complex.
> For 50 ps forward simulations I use the above free energy parameters of the 
> equilibrium forward simulation. As for 50 ps backward simulations I use its 
> previous free energy parameters again.
> 
> The answers to your questions:
> - In my snippets delta-lambda=0. Because I use 1-step switching process in 
> all simulations. That is known as fast growth method also.
> 
> If I change delta-lambda to 0.2 or something like that, it would be slow 
> growth method.
> -My system blows up in the md step of 100 ns backward state simulation with 
> too many lincs error.
But I assume that in the 50 forward and backward simulations of 50 ps each
that you spawn from your equilibrated 50 ns trajectories you 'switch' lambda
from lambda=0 at 0 ps to lambda=1 at 50 ps, so you will have a delta-lambda
!= 0, right? Why else would you do the switching simulations?
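
As a concrete sketch (the step numbers here are assumed, not taken from your
input): a 50 ps switching run with dt = 0.002 ps is 25000 steps, so switching
lambda from 0 to 1 over the whole run needs delta-lambda = 1/25000:

   nsteps       = 25000
   dt           = 0.002
   free-energy  = yes
   init-lambda  = 0
   delta-lambda = 4e-5    ; = 1/nsteps, lambda reaches 1 at 50 ps

(and init-lambda = 1 with delta-lambda = -4e-5 for the backward direction).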

> 
> Maybe the protocol I follow is wrong especally for the backward state?
Maybe the protein+ligand complex is not stable, at least not in your simulation?
For it to be stable, the ligand should stay in the binding pocket in an 
equilibrium
MD simulation, without any further constraints.

Carsten
 
> As far as I know, the slow growth method doesn't always provide the accurate 
> results... I therefore prefer to use the fast growth method.
> Another interesting thing is that no one has any experience on this topic on 
> the gmx user mailing list :(
> 
> Any suggestions will be appreciated.
> 
> Thanks in advance.
> 
>> On 12 Jan 2017, at 18:34, "Kutzner, Carsten"  wrote:
>> 
>> Hi Qasim,
>> 
>>> On 11 Jan 2017, at 20:29, Qasim Pars  wrote:
>>> 
>>> Dear Carsten,
>>> 
>>> Thanks. The forward state simulations works properly with mdrun -ntmpi 8
>>> -ntomp 2 or mdrun -ntmpi 4  -ntomp 4 as you suggested.
>>> For the backward state GROMACS still gives too many lincs warning error
>>> with those mdrun commands in the md step, indicating the system is far from
>>> equilibrium. I used the below free energy parameters with the em, nvt, npt
>>> and md steps for the backward state (fast growth). And to keep the ligand
>>> in the active site of protein I use some restraints under the
>>> intermolecular_interactions in the top file as I told in my previous email.
>>> 
>>> free-energy  = yes
>>> init-lambda  = 1
>>> delta-lambda = 0
>>> sc-alpha = 0.3
>>> sc-power = 1
>>> sc-sigma = 0.25
>>> sc-coul  = yes
>>> couple-moltype   = ligand
>>> couple-intramol  = no
>>> couple-lambda0   = vdw-q
>>> couple-lambda1   = none
>>> nstdhdl  = 100
>>> 
>>> My questions:
>>> 
>>> 1) Do you think that I should use the above free energy paramaters in all
>>> MD steps (em, nvt, npt and md)?
>> I think you want them only in the MD part.
>> 
>>> 
>>> 2) The structure seems fine before arising the lincs warning. First linc
>>> warning belongs to protein atoms (not ligand atoms). However the ligand
>>> doesn't move out of the active site. Maybe the following mdp file for the
>>> md step is not correct or the backward state simulation of a complex
>>> structure is something impossible with GROMACS?
>> It should work in both directions. 
>> Why is your delta-lambda zero in your snippets?
>> 
>>> 
>>> 3) Maybe the intermolecular_interactions section and couple- flags don't
>>> turn off intramolecular interactions of the ligand and turn on state B?
>>> 
>>> 4) How can I get rid of the linc warnings in the md step?
>> How many LINCS warnings do you get?
>> Does you system blow up?
>> 
>> Carsten
>> 
>>> 
>>> #md.mdp:
>>> 
>>> ; RUN CONTROL
>>> integrator   = sd
>>> nsteps   = 5000
>>> dt   = 0.002
>>> comm-mode= Linear
>>> nstcomm  = 100
>>> 
>>> ; OUTPUT CONTROL
>>> nstxout-compressed= 500
>>> compressed-x-precision= 1000
>>> nstlog= 500
>>> nstenergy = 500
>>> nstcalcenergy = 100
>>> 
>>> ; BONDS
>>> constraint_algorithm   = lincs
>>> 

[gmx-users] Domain decomposition error while running coarse grained simulations on cluster

2019-09-01 Thread Avijeet Kulshrestha
Hi all,
I am running a Martini coarse-grained simulation with a 15 fs time step in
GROMACS 2018.6. I have 25859 atoms and my box size is:
12.0  14.0  18.0
The system contains protein, membrane (DPPC) and ions.
I minimized the energy with 16 processors and the -rdd option set to 2.5. It
worked fine, but later the NVT simulation started giving errors.
*I am getting the following error without the -rdd option when I use 1 node
and 8 processors per node:*
Fatal error:
There is no domain decomposition for 8 ranks that is compatible with the
given box and a minimum cell size of 7.69783 nm
Change the number of ranks or mdrun option -rdd or -dds

*--> *I tried several other numbers of processors but it didn't work.
It works fine with a single processor per node.

With -rdd 2.5 and 16 processors the simulation runs, but at some point it
gives me a different kind of error:
Software inconsistency error:
Some interactions seem to be assigned multiple times
In a few other simulations I also get this kind of error when using the -rdd
option:
Fatal error:
30 of the 1378 bonded interactions could not be calculated because some
atoms involved moved further apart than the multi-body cut-off distance
(2.5 nm) or the two-body cut-off distance (2.5 nm), see option -rdd, for
pairs and tabulated bonds also see option -ddcheck

*Can someone tell me the proper protocol for finding the optimum number of
processors required?*
*How should I decide the box size?*

*If the -rdd option is required, what value should I choose?*


*I keep getting these kinds of errors and I don't want to rely on trial and
error, so can you suggest some reading to get technical insight into this?*
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] domain decomposition error >60 ns into simulation on a specific machine

2019-02-14 Thread Mala L Radhakrishnan
Hi all,

My student is trying to do a fairly straightforward MD simulation -- a
protein complex in water with ions with *no* pull coordinate.  It's on an
NVidia GPU-based machine and we're running gromacs 2018.3.

About 65 ns into the simulation, it dies with:

"an atom moved too far between two domain decomposition steps. This usually
means that your system is not well equilibrated"

If we restart at, say, 2 ns before it died, it then runs fine, PAST where
it died before, for another ~63 ns or so, and then dies with the same
error.  We have had far larger and arguably more complex gromacs jobs run
fine on this same machine.

Even stranger, when we run the same, problematic job on a different NVidia
GPU-based machine with slightly older CPUs that's running Gromacs 2016.4,
it runs fine (it's currently at 200 ns).

Below are the GROMACS hardware and compilation specs of the machine on
which it died, in case that helps anyone; there is a note at the end of
this logfile output that might be useful -- thanks in advance for any
ideas.
-

GROMACS version:2018.3
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  AVX2_256
FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  disabled
Tracing support:disabled
Built on:   2018-10-31 22:05:13
Build OS/arch:  Linux 3.10.0-693.21.1.el7.x86_64 x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Build CPU family:   6   Model: 85   Stepping: 4
Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl
clfsh cmov cx8 cx16 f16c fma hle htt intel lahf m
mx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm
sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler: /usr/bin/cc GNU 4.8.5
C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /usr/bin/c++ GNU 4.8.5
C++ compiler flags:  -march=core-avx2-std=c++11   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
driver;Copyright (c) 2005-2018 NVIDIA Corporat
ion;Built on Sat_Aug_25_21:08:01_CDT_2018;Cuda compilation tools, release
10.0, V10.0.130
CUDA compiler
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=
sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode
;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;;
 
;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:10.0
CUDA runtime:   10.0
Running on 1 node with total 20 cores, 40 logical cores, 4 compatible GPUs
Hardware detected:
  CPU info:
Vendor: Intel
Brand:  Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Family: 6   Model: 85   Stepping: 4
Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh
cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr
 nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2
sse3 sse4.1 sse4.2 ssse3 tdt x2apic
Number of AVX-512 FMA units: Cannot run AVX-512 detection - assuming 2
  Hardware topology: Basic
Sockets, cores, and logical processors:
  Socket  0: [   0  20] [   1  21] [   2  22] [   3  23] [   4  24] [
5  25] [   6  26] [   7  27] [   8  28] [   9
 29]
  Socket  1: [  10  30] [  11  31] [  12  32] [  13  33] [  14  34] [
15  35] [  16  36] [  17  37] [  18  38] [  19
 39]
  GPU info:
Number of GPUs detected: 4
#0: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
compatible
#1: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
compatible
#2: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
compatible
#3: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
compatible

Highest SIMD level requested by all nodes in run: AVX_512
SIMD instructions selected at compile time:   AVX2_256
This program was compiled for different hardware than you are running on,
which could influence performance. This build might have been configured on a
login node with only a single AVX-512 FMA unit (in which case AVX2 is faster),
while the node you are running on has dual AVX-512 FMA units.



-- 
Mala L. Radhakrishnan
Whitehead Associate Professor of Critical Thought
Associate Professor of Chemistry
Wellesley College
106 Central Street
Wellesley, MA 02481
(781)283-2981
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)

Re: [gmx-users] Domain decomposition error while running coarse grained simulations on cluster

2019-09-01 Thread Justin Lemkul




On 9/1/19 5:44 AM, Avijeet Kulshrestha wrote:

Hi all,
I am running martini coarse-grained simulation with 15 fs of time step in
gromacs 2018.6. I have 25859 number of atoms and my box size is:
12.0  14.0  18.0
Where I have Protein, membrane (DPPC) and ions.
I have minimized energy with 16 processor and -rdd option as 2.5. It worked
fine but later in NVT simulation, this started giving errors.
*I am getting the following error without -rdd option when I used 1 node
and 8 processor per node:*
Fatal error:
There is no domain decomposition for 8 ranks that is compatible with the
given
box and a minimum cell size of 7.69783 nm


This minimum cell size is significantly larger than a normal simulation 
would require. Do you have restraints or other non-standard interactions 
defined? The .log file snippet that describes the DD setup would be 
useful for tracking the issue down.
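
Something along these lines will pull that section out of the log (the log
file name here is just a placeholder):

grep -A 20 'Initializing Domain Decomposition' nvt.log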


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Domain decomposition error while running coarse grained simulations on cluster

2019-09-03 Thread Avijeet Kulshrestha
ct 2 columns (including the time column)
> >> Reading file umbrella71.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella98.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella111.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella119.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella139.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella146.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella157.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella180.tpr, VERSION 5.1.4 (single precision)
> >> Reading file umbrella202.tpr, VERSION 5.1.4 (single precision)
>
> What happens next? Nothing here says "error," however it looks like your
> input files are of an unexpected format. Perhaps you've switched
> pullx.xvg and pullf.xvg. As someone else suggested, you can use
> pullf.xvg files to get the PMF profile and avoid the issue entirely.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
>
>
> --
>
> Message: 3
> Date: Sun, 1 Sep 2019 13:22:15 -0400
> From: Justin Lemkul 
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Domain decomposition error while running
> coarse grained simulations on cluster
> Message-ID: <71840e02-8747-38b0-d2a0-39da5adbb...@vt.edu>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
>
> On 9/1/19 5:44 AM, Avijeet Kulshrestha wrote:
> > Hi all,
> > I am running martini coarse-grained simulation with 15 fs of time step in
> > gromacs 2018.6. I have 25859 number of atoms and my box size is:
> > 12.0  14.0  18.0
> > Where I have Protein, membrane (DPPC) and ions.
> > I have minimized energy with 16 processor and -rdd option as 2.5. It
> worked
> > fine but later in NVT simulation, this started giving errors.
> > *I am getting the following error without -rdd option when I used 1 node
> > and 8 processor per node:*
> > Fatal error:
> > There is no domain decomposition for 8 ranks that is compatible with the
> > given
> > box and a minimum cell size of 7.69783 nm
>
> This minimum cell size is significantly larger than a normal simulation
> would require. Do you have restraints or other non-standard interactions
> defined? The .log file snippet that describes the DD setup would be
> useful for tracking the issue down.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
>
>
> --
>
> Message: 4
> Date: Sun, 1 Sep 2019 13:23:48 -0400
> From: Justin Lemkul 
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] mdrun error
> Message-ID: 
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
>
> On 9/1/19 7:02 AM, m g wrote:
> > Dear Justin,
> There are many other people on this list, and I don't always have answers
> :)
>
> > I'm simulating a system with a "wall" in the z direction, but I got this
> > error in the minimization step: "software inconsistency error: lost
> > particles while sorting". Would you please help me? I used the following
> > parameters:
>
> What GROMACS version are you using? CPU or GPU? Does the simulation work
> without the walls? You need to provide a lot more diagnostic
> information. This is a cryptic error that suggests something very
> fundamental is wrong. That also makes it very hard to diagnose.
>
> If you're not using the latest version of GROMACS, start there. I recall
> some bug fixes that may be relevant. If the error is reproducible in
> 2019.3, please post a more complete description of what you're doing.
>
> -Justin
>
> > integrator      = steep
> >
> > emtol           = 100.0
> > emstep          = 0.01
> > nsteps          = 5000
> > nstlist         = 1
> > cutoff-scheme   = Verlet
> > ns_type         = grid
> > rlist           = 1.2
> > coulombtype     = PME
> > rcoulomb        = 1.2
> > vdwtype         = cutoff
> > vdw-modi

Re: [gmx-users] Domain decomposition error while running coarse grained simulations on cluster

2019-09-04 Thread Justin Lemkul
ase, the log file showed

Running on 1 node with total 24 cores, 24 logical cores

Thus, it looks like the simulation was running on a single node although I
asked it to run on two nodes. I have no idea how to fix this issue. Please
help me fix this issue or tell me what I am doing wrong.

Thanks in advance,

Prabir


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
send a mail to gmx-users-requ...@gromacs.org.



--

*Prabir Khatua*
*Postdoctoral Research Associate*
*Department of Chemistry & Biochemistry*
*University of Oklahoma*
*Norman, Oklahoma 73019*
*U. S. A.*


--

Message: 2
Date: Sun, 1 Sep 2019 13:21:23 -0400
From: Justin Lemkul 
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] wham analysis
Message-ID: <65191dc8-43fd-bc2f-6b9a-20903052d...@vt.edu>
Content-Type: text/plain; charset=utf-8; format=flowed



On 9/1/19 5:35 AM, Negar Parvizi wrote:

   Dear all,
I used Justin's tutorial (Tutorial 3: Umbrella Sampling: GROMACS Tutorial)
for my file, which is a protein-ligand complex.

The pulling force was in the Y direction. When umbrella sampling finished,
"Wham" couldn't analyse the data because wham is in the z direction. What
should I do now for the wham analysis? How can I change it to the Y direction?

I sent a message to the gromacs community, and this is what Justin said:
"WHAM does not presuppose the axis or vector; it does what you tell it.
If you're referring to the x-axis label in the PMF profile being "z," that
is just a generic (and perhaps imprecise) label that should be changed to
the Greek character xi, per conventional notation."

I didn't understand it.

I made a guess based on minimal information. You asserted that you
pulled along y but WHAM indicated the bias was along z. I know that the
default x-axis label in profile.xvg says "z" and it causes confusion. So
I provided the comment that I did.

However, it is clear from the gmx wham output below that you did *not*
apply a bias along y, as you stated:


File umbrella0.tpr, 1 coordinates, geometry "distance", dimensions [N N Y],
(1 dimensions)

This means your reaction coordinate was the z-axis.
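
For comparison, biasing along y instead would require the pull dimensions to
be set to [N Y N] in the .mdp before the tpr files are generated, i.e.
something like this sketch of the relevant lines (option names as in the 5.1
pull module):

pull-coord1-geometry = distance
pull-coord1-dim      = N Y N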


So I decided to copy the error:

Here is the error:


Found 25 tpr and 25 pull force files in tpr-files.dat and pullf-files.dat,
respectively
Reading 12 tpr and pullf files
Automatic determination of boundaries...
Reading file umbrella0.tpr, VERSION 5.1.4 (single precision)
File umbrella0.tpr, 1 coordinates, geometry "distance", dimensions [N N Y],
(1 dimensions)

Pull group coordinates not expected in pullx files.
crd 0) k = 1000   position = 0.840198
Use option -v to see this output for all input tpr files


Reading pull force file with pull geometry distance and 1 pull dimensions
Expecting these columns in pull file:
  0 reference columns for each individual pull coordinate
  1 data columns for each pull coordinate
With 1 pull groups, expect 2 columns (including the time column)
Reading file umbrella71.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella98.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella111.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella119.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella139.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella146.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella157.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella180.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella202.tpr, VERSION 5.1.4 (single precision)

What happens next? Nothing here says "error," however it looks like your
input files are of an unexpected format. Perhaps you've switched
pullx.xvg and pullf.xvg. As someone else suggested, you can use
pullf.xvg files to get the PMF profile and avoid the issue entirely.
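
A minimal invocation along those lines, using the file lists you already have
and the default output names, would be something like:

gmx wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg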

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==============



--

Message: 3
Date: Sun, 1 Sep 2019 13:22:15 -0400
From: Justin Lemkul 
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Domain decomposition error while r

Re: [gmx-users] Domain decomposition error while running coarse grained simulations on cluster

2019-09-11 Thread Avijeet Kulshrestha
>
> Here's your problem. You have pairs defined that are in excess of 12 nm,
> but they are assigned to a 1-4 interaction, so atoms that should be
> separated by three bonds. The user-defined potential shouldn't matter
> here unless you've added [pairs] to the topology.
>
> I see your point.
What can I do to define an LJ potential between two atoms that are actually
far apart?
And are the results from the single-processor runs correct?
I tried it in a different way too: I generated tables for the pairs that are
far apart (with the LJ potential) and put them in the bonded section with
function type 8 (to exclude the LJ interaction from the usual force field); a
sketch of such an entry is given below.
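
For reference, a minimal sketch of what one such entry looks like in the
topology (the atom indices, table number and force constant below are made-up
placeholders, not values from my files):

[ bonds ]
;   ai     aj  funct  table    k
     5   1234      8      0  1.0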
The system has 12282 particles and the box size is
9.76580  11.92760  11.62620 nm.
I minimized energy with 10 processors and -rdd 1.4. It worked.
In NVT equilibration I am getting the following error:

Atom distribution over 8 domains: av 1535 stddev 38 min 1508 max 1582
Constraining the starting coordinates (step 0)
Constraining the coordinates at t0-dt (step 0)
Not all bonded interactions have been properly assigned to the domain
decomposition cells
A list of missing interactions:
  Tab. Bonds of      818 missing     16
---
Program: gmx mdrun, version 2018.6
Source file: src/gromacs/domdec/domdec_topology.cpp (line 240)
MPI rank:0 (out of 8)
Software inconsistency error:
Some interactions seem to be assigned multiple times
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] domain decomposition error >60 ns into simulation on a specific machine

2019-02-14 Thread Mark Abraham
Hi,

What does the trajectory look like before it crashes?

We did recently fix a bug relevant to simulations using CHARMM switching
functions on GPUs, if that could be an explanation. We will probably put
out a new 2018 version with that fix next week (or so).

Mark

On Thu., 14 Feb. 2019, 20:26 Mala L Radhakrishnan, 
wrote:

> Hi all,
>
> My student is trying to do a fairly straightforward MD simulation -- a
> protein complex in water with ions with *no* pull coordinate.  It's on an
> NVidia GPU-based machine and we're running gromacs 2018.3.
>
> About 65 ns into the simulation, it dies with:
>
> "an atom moved too far between two domain decomposition steps. This usually
> means that your system is not well equilibrated"
>
> If we restart at, say, 2 ns before it died, it then runs fine, PAST where
> it died before, for another ~63 ns or so, and then dies with the same
> error.  We have had far larger and arguably more complex gromacs jobs run
> fine on this same machine.
>
> Even stranger, when we run the same, problematic job on a different NVidia
> GPU-based machine with slightly older CPUs that's running Gromacs 2016.4,
> it runs fine (it's currently at 200 ns).
>
> Below are the Gromacs hardware and compilation specs of the machine on
> which it died in case that helps anyone:-  there is a note at the end of
> this logfile output  that might be useful -- thanks in advance for any
> ideas.
> -
>
> GROMACS version:2018.3
> Precision:  single
> Memory model:   64 bit
> MPI library:thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support:CUDA
> SIMD instructions:  AVX2_256
> FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128
> RDTSCP usage:   enabled
> TNG support:enabled
> Hwloc support:  disabled
> Tracing support:disabled
> Built on:   2018-10-31 22:05:13
> Build OS/arch:  Linux 3.10.0-693.21.1.el7.x86_64 x86_64
> Build CPU vendor:   Intel
> Build CPU brand:Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
> Build CPU family:   6   Model: 85   Stepping: 4
> Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl
> clfsh cmov cx8 cx16 f16c fma hle htt intel lahf m
> mx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm
> sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> C compiler: /usr/bin/cc GNU 4.8.5
> C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler:   /usr/bin/c++ GNU 4.8.5
> C++ compiler flags:  -march=core-avx2-std=c++11   -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
> CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
> driver;Copyright (c) 2005-2018 NVIDIA Corporat
> ion;Built on Sat_Aug_25_21:08:01_CDT_2018;Cuda compilation tools, release
> 10.0, V10.0.130
> CUDA compiler
>
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=
>
> sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode
>
> ;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;;
>
>  
> ;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> CUDA driver:10.0
> CUDA runtime:   10.0
> Running on 1 node with total 20 cores, 40 logical cores, 4 compatible GPUs
> Hardware detected:
>   CPU info:
> Vendor: Intel
> Brand:  Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
> Family: 6   Model: 85   Stepping: 4
> Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh
> cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr
>  nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2
> sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> Number of AVX-512 FMA units: Cannot run AVX-512 detection - assuming 2
>   Hardware topology: Basic
> Sockets, cores, and logical processors:
>   Socket  0: [   0  20] [   1  21] [   2  22] [   3  23] [   4  24] [
> 5  25] [   6  26] [   7  27] [   8  28] [   9
>  29]
>   Socket  1: [  10  30] [  11  31] [  12  32] [  13  33] [  14  34] [
> 15  35] [  16  36] [  17  37] [  18  38] [  19
>  39]
>   GPU info:
> Number of GPUs detected: 4
> #0: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
> compatible
> #1: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
> compatible
> #2: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
> compatible
> #3: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat:
> compatible
>
> Highest SIMD level requested by all nodes in run: AVX_512
> SIMD instructions selected at compile time:   AVX2_256
> This program was compiled for different hardware than you are running on,
> which could influence performance. This build might have been configured on
> a
> login nod

Re: [gmx-users] domain decomposition error >60 ns into simulation on a specific machine

2019-02-14 Thread Mala L Radhakrishnan
Hi Mark,

To my knowledge, she's not using CHARMM-related FF's at all -- I think she
is using Amber03 (Alyssa, correct me if I'm wrong). Visually and RMSD-wise
the trajectory looks totally normal, but is there something specific I
should be looking for in the trajectory, either visually or quantitatively?


Thanks,

Mala

On Thu, Feb 14, 2019 at 3:35 PM Mark Abraham 
wrote:

> Hi,
>
> What does the trajectory look like before it crashes?
>
> We did recently fix a bug relevant to simulations using CHARMM switching
> functions on GPUs, if that could be an explanation. We will probably put
> out a new 2018 version with that fix next week (or so).
>
> Mark
>
> On Thu., 14 Feb. 2019, 20:26 Mala L Radhakrishnan,  >
> wrote:
>
> > Hi all,
> >
> > My student is trying to do a fairly straightforward MD simulation -- a
> > protein complex in water with ions with *no* pull coordinate.  It's on an
> > NVidia GPU-based machine and we're running gromacs 2018.3.
> >
> > About 65 ns into the simulation, it dies with:
> >
> > "an atom moved too far between two domain decomposition steps. This
> usually
> > means that your system is not well equilibrated"
> >
> > If we restart at, say, 2 ns before it died, it then runs fine, PAST where
> > it died before, for another ~63 ns or so, and then dies with the same
> > error.  We have had far larger and arguably more complex gromacs jobs run
> > fine on this same machine.
> >
> > Even stranger, when we run the same, problematic job on a different
> NVidia
> > GPU-based machine with slightly older CPUs that's running Gromacs 2016.4,
> > it runs fine (it's currently at 200 ns).
> >
> > Below are the Gromacs hardware and compilation specs of the machine on
> > which it died in case that helps anyone:-  there is a note at the end of
> > this logfile output  that might be useful -- thanks in advance for any
> > ideas.
> > -
> >
> > GROMACS version:2018.3
> > Precision:  single
> > Memory model:   64 bit
> > MPI library:thread_mpi
> > OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> > GPU support:CUDA
> > SIMD instructions:  AVX2_256
> > FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128
> > RDTSCP usage:   enabled
> > TNG support:enabled
> > Hwloc support:  disabled
> > Tracing support:disabled
> > Built on:   2018-10-31 22:05:13
> > Build OS/arch:  Linux 3.10.0-693.21.1.el7.x86_64 x86_64
> > Build CPU vendor:   Intel
> > Build CPU brand:Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
> > Build CPU family:   6   Model: 85   Stepping: 4
> > Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl
> > clfsh cmov cx8 cx16 f16c fma hle htt intel lahf m
> > mx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm
> > sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> > C compiler: /usr/bin/cc GNU 4.8.5
> > C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
> > -fexcess-precision=fast
> > C++ compiler:   /usr/bin/c++ GNU 4.8.5
> > C++ compiler flags:  -march=core-avx2-std=c++11   -O3 -DNDEBUG
> > -funroll-all-loops -fexcess-precision=fast
> > CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda
> compiler
> > driver;Copyright (c) 2005-2018 NVIDIA Corporat
> > ion;Built on Sat_Aug_25_21:08:01_CDT_2018;Cuda compilation tools, release
> > 10.0, V10.0.130
> > CUDA compiler
> >
> >
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=
> >
> >
> sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode
> >
> >
> ;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;;
> >
> >
> ;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> > CUDA driver:10.0
> > CUDA runtime:   10.0
> > Running on 1 node with total 20 cores, 40 logical cores, 4 compatible
> GPUs
> > Hardware detected:
> >   CPU info:
> > Vendor: Intel
> > Brand:  Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
> > Family: 6   Model: 85   Stepping: 4
> > Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh
> > cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr
> >  nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2
> > sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> > Number of AVX-512 FMA units: Cannot run AVX-512 detection - assuming
> 2
> >   Hardware topology: Basic
> > Sockets, cores, and logical processors:
> >   Socket  0: [   0  20] [   1  21] [   2  22] [   3  23] [   4  24] [
> > 5  25] [   6  26] [   7  27] [   8  28] [   9
> >  29]
> >   Socket  1: [  10  30] [  11  31] [  12  32] [  13  33] [  14  34] [
> > 15  35] [  16  36] [  17  37] [  18  38] [  19
> >  39]
> >   GPU info:
> > Number of GPUs detected: 4
> > #0: NVIDIA GeForce GTX 1080 Ti, com