[gmx-users] gmx anadock

2018-11-15 Thread mohammad fathabadi
Hello
When I pass the PDB file from gmx cluster to gmx anadock, GROMACS gives an 
error that it cannot find PDB files, even though my cluster PDB file contains 
40 PDB models in one file.
I was wondering if you could give me some advice.

Re: [gmx-users] Gromacs 2018.3 with CUDA - segmentation fault (core dumped)

2018-11-15 Thread Szilárd Páll
That suggests there is an issue related to the CUDA FFT library -- or
something else indirectly related. Can you try a newer CUDA and see whether
you still get a crash with -pmefft gpu?
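
For example (a sketch only; the -deffnm name is taken from the command quoted
below, and keeping PME and its FFTs on the GPU is the point of the test):

$ gmx mdrun -v -deffnm md_0_1 -nb gpu -pme gpu -pmefft gpu

Comparing that against the same run with -pmefft cpu would isolate whether the
CUDA FFT path is what triggers the crash.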

--
Szilárd

On Mon, Nov 12, 2018, 11:58 AM Krzysztof Kolman wrote:
> > Dear Benson and Szilard,
> >
> > Thank you for your interest. Benson, I will try to answer your questions
> > first:
> > 1) No, I have only tested 2018.3 so far. I just changed from Gromacs
> 5.1.4
> > and these are my first tests.
> > 2) Not yet but I plan to do it.
> > 3) The results look reasonable after restart
> > 4) I think so, but I did not check this time. My computer has 16 GB of RAM
> > and I checked the RAM utilization 2 h before the crash. It was only using
> > 2 GB out of 16 GB.
> > 5) I use my private computer, so I think it should be possible to
> > recompile if needed or to run with debug information.
> >
> > Szilard: no, but they are quite close. The first crash happened at
> > 22536600, the second one at 45006200. I ran a different simulation earlier
> > and it also crashed (seg fault) after around 12 h.
> >
>
> Ok. After more research I have managed to solve the problem. mdrun with the
> following flags no longer triggers the seg fault:
> gmx mdrun -v -deffnm md_0_1 -nb gpu -pme cpu -pmefft cpu
>
>
>
> > Best regards,
> > Krzysztof
> >

Re: [gmx-users] Setting rcon according to system

2018-11-15 Thread Mark Abraham
Hi,

Ah I see. So unless your hydrated uranyl is modelled with bonded
interactions between uranyl atoms and water atoms, the only bonds in the
system are silicate hydroxyl, water, and uranyl. If so, then I suspect the
default value of lincs-order (which is 4, to suit highly connected
biomolecular use cases) is too high for the actual connectivity you have.
Reducing that to 3 will relax the minimum diameter that the domain
decomposition requires, which I feel is a more stable approach than
modifying -rcon. How does that work for you?
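
For example, a minimal .mdp sketch (only lincs-order changes; the constraint
settings shown are assumptions, keep whatever you already use):

constraint-algorithm = lincs
lincs-order          = 3    ; default is 4; 3 suits weakly connected systems
lincs-iter           = 1    ; unchanged default

Lowering lincs-order shortens the coupling range P-LINCS has to allow for, and
hence the minimum cell size it demands from the domain decomposition.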

Perhaps we should automate such a check in grompp, to cater for such weakly
connected use cases.

Mark

On Thu, Nov 15, 2018 at 3:25 AM Sergio Perez  wrote:

> Actually the clay uses the ClayFF force field, which has bonds only on the
> OH units; the rest of the atoms are just LJ spheres with a charge. I guess
> the conclusion is still the same?
>
> On Wed, Nov 14, 2018 at 8:47 PM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > On Wed, Nov 14, 2018 at 3:18 AM Sergio Perez 
> > wrote:
> >
> > > Hello,
> > > First of all thanks for the help :)
> > > I don't necessarily need to run it with 100 processors, I just want to
> > know
> > > how much I can reduce rcon taking into account the knowledge of my
> system
> > > without compromising the accuracy. Let me give some more details of my
> > > system. The system is a sodium montmorillonite clay with two solid
> > > alumino-silicate layers with two aqueous interlayers between them. The
> > >
> >
> > I assume the silicate network has many bonds over large space - these
> > adjacent bonds are the issue, not uranyl. (You would have the same
> problem
> > with a clay-only system.)
> >
> >
> > > system has TIP4P waters, some OH bonds within the clay and the bonds of
> > the
> > > uranyl hydrated ion described in my previous email as constraints. The
> > > system is orthorhombic, 4.67070 x 4.49090 x 3.77930 nm, and has 9046 atoms.
> > >
> > > This is the output of GROMACS:
> > >
> > > Initializing Domain Decomposition on 100 ranks
> > > Dynamic load balancing: locked
> > > Initial maximum inter charge-group distances:
> > >two-body bonded interactions: 0.470 nm, Tab. Bonds NC, atoms 10 13
> > > Minimum cell size due to bonded interactions: 0.000 nm
> > > Maximum distance for 5 constraints, at 120 deg. angles, all-trans:
> 0.842
> > nm
> > > Estimated maximum distance required for P-LINCS: 0.842 nm
> > > This distance will limit the DD cell size, you can override this with
> > -rcon
> > > Guess for relative PME load: 0.04
> > > Will use 90 particle-particle and 10 PME only ranks
> > >
> >
> > GROMACS has guessed to use 90 ranks in the real-space domain
> decomposition,
> > e.g. as an array of 6x5x3 ranks.
> >
> >
> > > This is a guess, check the performance at the end of the log file
> > > Using 10 separate PME ranks, as guessed by mdrun
> > > Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> > > Optimizing the DD grid for 90 cells with a minimum initial size of
> 1.052
> > nm
> > > The maximum allowed number of cells is: X 4 Y 4 Z 3
> > >
> >
> > ... but only 4x4x3=48 ranks can work with the connectivity of your input.
> > Thus you are simply using too many ranks for a small system. You'd have
> to
> > relax the tolerances quite a lot to get to use 90 ranks. Just follow the
> > first part of the message advice and use fewer ranks :-)
> >
> > Mark
> >
> > ---
> > > Program: mdrun_mpi, version 2018.1
> > > Source file: src/gromacs/domdec/domdec.cpp (line 6571)
> > > MPI rank:0 (out of 100)
> > >
> > > Fatal error:
> > > There is no domain decomposition for 90 ranks that is compatible with
> the
> > > given box and a minimum cell size of 1.05193 nm
> > > Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> > > settings
> > > Look in the log file for details on the domain decomposition
> > >
> > > For more information and tips for troubleshooting, please check the
> > GROMACS
> > > website at http://www.gromacs.org/Documentation/Errors
> > > ---
> > >
> > >
> > > Thank you for your help!
> > >
> > > On Wed, Nov 14, 2018 at 5:28 AM Mark Abraham  >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Possibly. It would be simpler to use fewer processors, such that the
> > > > domains can be larger.
> > > >
> > > > What does mdrun think it needs for -rcon?
> > > >
> > > > Mark
> > > >
> > > > On Tue, Nov 13, 2018 at 7:06 AM Sergio Perez  >
> > > > wrote:
> > > >
> > > > > Dear gmx community,
> > > > >
> > > > > I have been running my system without any problems with 100
> > > > > processors. But I decided to make some of the bonds of my main
> > > > > molecule constraints. My molecule is not an extended chain, it is a
> > > > > molecular hydrated ion, in particular the uranyl cation with 5 water
> > > > > molecules forming a pentagonal bipyramid. At this point I get a
> > > > > domain decomposition error and I

Re: [gmx-users] Running GPU issue

2018-11-15 Thread Mark Abraham
Hi,

On Thu, Nov 15, 2018 at 1:54 PM Kovalskyy, Dmytro 
wrote:

> The error you saw is clear evidence of a bug. There's a few things you
> might try to help narrow things down.
> * Is the error reproducible on each run of mdrun?
>
> Yes
>
>
>
> * 2018.3 was not designed or tested on CUDA 10, which was released rather
> later. If you have an earlier CUDA, please see if building with it
> alleviates the error
>
> No, I have only CUDA 10.0 installed. Taking into account that I got it
> running (see replies to the next questions), please let me know whether I
> should go with CUDA 9.
>
>
>
> * The error message could be triggered by multiple parts of the code; what
> do you get with mdrun -nb gpu -pme cpu?
>
> This way I get MD running, no crash. All 72 cores (dual Xeon Gold 6140) are
> in use.
>

OK, that's a good clue that there may be a bug in the code for PME on GPU.
It's not yet clear whether it relates to CUDA 10, Quadro GPUs, or your
inputs, or something else. Can you please open a ticket at
https://redmine.gromacs.org and attach your .tpr and log files from the
cases that produce the error? Then we can try to reproduce and see what
aspect is the issue.
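
If you do end up with an earlier toolkit available, a rebuild along these
lines is the usual way to point GROMACS 2018 at it (a sketch; the CUDA 9.2
install path is an assumption):

$ cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.2
$ make -j 8 && make install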

Thanks!

Mark

> with my system I got
>
> $mdrun -deffnm md -v -nb gpu -pme cpu
>
>                (ns/day)    (hour/ns)
> Performance:       56.867        0.422
> $nvidia-smi
> |   0  Quadro P5000        On   | :D5:00.0  On |                  Off |
> | 35%   56C    P0    63W / 180W |   1400MiB / 16256MiB |     37%      Default |
>
>
>
> * Do you get any more diagnostics from running with a build with cmake
> -DCMAKE_BUILD_TYPE=Debug?
>
> The interesting part comes here. GROMACS built with Debug = ON allows MD to
> run stably with no additional parameters.
> I mean
> $ gmx mdrun -deffnm md -v
> runs with no crash. However, only half of the Xeon cores are used
> (monitored with htop)
>
>
>                (ns/day)    (hour/ns)
> Performance:       69.577        0.345
> $nvidia-smi
> |   0  Quadro P5000        On   | :D5:00.0  On |                  Off |
> | 38%   62C    P0    68W / 180W |   1430MiB / 16256MiB |     56%      Default |
>
>
>
> If I run
> $ gmx mdrun -deffnm md -v -nb gpu -pme cpu
> Then all CPU cores are busy but performance is poor:
>                (ns/day)    (hour/ns)
> Performance:       20.249        1.185
> $nvidia-smi
> |   0  Quadro P5000        On   | :D5:00.0  On |                  Off |
> | 37%   57C    P0    58W / 180W |   1408MiB / 16256MiB |     22%      Default |
>
>
> Finally,
>
> If I run same MD with no GPU at all (i.e. -nb cpu) then I got performance
>
>                (ns/day)    (hour/ns)
> Performance:       47.536        0.505
>
>
> Does this help?
>
>
> Dmytro
>
>
>
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Wednesday, November 14, 2018 1:37 PM
> To: gmx-us...@gromacs.org
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Running GPU issue
>
> Hi,
>
> I expect that that warning is fine (for now).
>
> The error you saw is clear evidence of a bug. There's a few things you
> might try to help narrow things down.
> * Is the error reproducible on each run of mdrun?
> * 2018.3 was not designed or tested on CUDA 10, which was released rather
> later. If you have an earlier CUDA, please see if building with it
> alleviates the error
> * The error message could be triggered by multiple parts of the code; what
> do you get with mdrun -nb gpu -pme cpu?
> * Do you get any more diagnostics from running with a build with cmake
> -DCMAKE_BUILD_TYPE=Debug?
>
> Mark
>
> On Wed, Nov 14, 2018 at 1:09 PM Kovalskyy, Dmytro 
> wrote:
>
> > I forgot to add. While compiling GROMACS I got the following error at the
> > very beginning:
> >
> >
> > [  3%] Built target gpu_utilstest_cuda
> > /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu: In
> > function 'int do_sanity_checks(int, cudaDeviceProp*)':
> > /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu:258:28:
> > warning: 'cudaError_t cudaThreadSynchronize()' is deprecated
> > [-Wdeprecated-declarations]
> >  if (cudaThreadSynchronize() != cudaSuccess)
> > ^
> > /usr/local/cuda/include/cuda_runtime_api.h:947:46: note: declared here
> >  extern __CUDA_DEPRECATED __host__ cudaError_t CUDARTAPI
> > cudaThreadSynchronize(void);
> >
> >
> > But make completed its job without failing.
> >
> >
> >
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> > Abraham 
> > Sent: Tuesday, November 13, 2018 10:29 PM
> > To: gmx-us...@gromacs.org
> > Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] Running GPU issue
> >
> > Hi,
> >
> > It can share.
> >
> > Mark
> >
> > On Mon, Nov 12, 2018 at 10:19 PM Kovalskyy, Dmytro <
> kovals...@uthscsa.edu>
> > wrote:
> >
> > > Hi,
> > >
> > >
> > >
> > > To perform GPU 

Re: [gmx-users] Running GPU issue

2018-11-15 Thread Kovalskyy, Dmytro
The error you saw is clear evidence of a bug. There's a few things you
might try to help narrow things down.
* Is the error reproducible on each run of mdrun?

Yes



* 2018.3 was not designed or tested on CUDA 10, which was released rather
later. If you have an earlier CUDA, please see if building with it
alleviates the error

No, I have only CUDA 10.0 installed. Taking into account that I got it running 
(see replies to the next questions), please let me know whether I should go 
with CUDA 9.



* The error message could be triggered by multiple parts of the code; what
do you get with mdrun -nb gpu -pme cpu?

This way I get MD running, no crash. All 72 cores (dual Xeon Gold 6140) are in
use.

with my system I got 

$mdrun -deffnm md -v -nb gpu -pme cpu

               (ns/day)    (hour/ns)
Performance:       56.867        0.422
$nvidia-smi
|   0  Quadro P5000        On   | :D5:00.0  On |                  Off |
| 35%   56C    P0    63W / 180W |   1400MiB / 16256MiB |     37%      Default |



* Do you get any more diagnostics from running with a build with cmake
-DCMAKE_BUILD_TYPE=Debug?

The interesting part comes here. GROMACS built with Debug = ON allows MD to run 
stably with no additional parameters.
I mean
$ gmx mdrun -deffnm md -v 
runs with no crash. However, only half of the Xeon cores are used (monitored 
with htop) 


               (ns/day)    (hour/ns)
Performance:       69.577        0.345
$nvidia-smi
|   0  Quadro P5000        On   | :D5:00.0  On |                  Off |
| 38%   62C    P0    68W / 180W |   1430MiB / 16256MiB |     56%      Default |



If I run 
$ gmx mdrun -deffnm md -v -nb gpu -pme cpu
Then all CPU cores are busy but performance is poor:
               (ns/day)    (hour/ns)
Performance:       20.249        1.185
$nvidia-smi
|   0  Quadro P5000        On   | :D5:00.0  On |                  Off |
| 37%   57C    P0    58W / 180W |   1408MiB / 16256MiB |     22%      Default |


Finally,

If I run same MD with no GPU at all (i.e. -nb cpu) then I got performance

               (ns/day)    (hour/ns)
Performance:       47.536        0.505


Does this help?


Dmytro



From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Wednesday, November 14, 2018 1:37 PM
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Running GPU issue

Hi,

I expect that that warning is fine (for now).

The error you saw is clear evidence of a bug. There's a few things you
might try to help narrow things down.
* Is the error reproducible on each run of mdrun?
* 2018.3 was not designed or tested on CUDA 10, which was released rather
later. If you have an earlier CUDA, please see if building with it
alleviates the error
* The error message could be triggered by multiple parts of the code; what
do you get with mdrun -nb gpu -pme cpu?
* Do you get any more diagnostics from running with a build with cmake
-DCMAKE_BUILD_TYPE=Debug?

Mark

On Wed, Nov 14, 2018 at 1:09 PM Kovalskyy, Dmytro 
wrote:

> I forgot to add. While compiling GROMACS I got the following error at the
> very beginning:
>
>
> [  3%] Built target gpu_utilstest_cuda
> /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu: In
> function 'int do_sanity_checks(int, cudaDeviceProp*)':
> /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu:258:28:
> warning: 'cudaError_t cudaThreadSynchronize()' is deprecated
> [-Wdeprecated-declarations]
>  if (cudaThreadSynchronize() != cudaSuccess)
> ^
> /usr/local/cuda/include/cuda_runtime_api.h:947:46: note: declared here
>  extern __CUDA_DEPRECATED __host__ cudaError_t CUDARTAPI
> cudaThreadSynchronize(void);
>
>
> But make completed its job without failing.
>
>
>
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Tuesday, November 13, 2018 10:29 PM
> To: gmx-us...@gromacs.org
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Running GPU issue
>
> Hi,
>
> It can share.
>
> Mark
>
> On Mon, Nov 12, 2018 at 10:19 PM Kovalskyy, Dmytro 
> wrote:
>
> > Hi,
> >
> >
> >
> > To run GROMACS on a GPU, does it require exclusive access to the GPU card,
> > or can GROMACS share the video card with the X server?
> >
> >
> > Thank you
> >
> >
> > Dmytro

[gmx-users] NVT LINCS Warning

2018-11-15 Thread zaved
Dear Gromacs Users

I am trying to simulate a glucose molecule, and for that I am using the
gromos53a6carbo force field downloaded from
http://www.gromacs.org/Downloads/User_contributions/Force_fields.

After a successful energy minimization step, the NVT equilibration throws
error messages and gets killed.

The following error message is repeated a number of times:
Step 0
Step 83, time 0.166 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000606, max 0.002103 (between atoms 16 and 17)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000
     16     17   46.6    0.1001   0.0998      0.1000

N.B. I am using gromacs 5.1.4 version.

Any kind suggestion/s will be appreciated.

Thank You

Regards
Zaved Hazarika
Research Scholar
Dept. Of Molecular Biology and Biotechnology,
Tezpur University,
India





Re: [gmx-users] pcoupl Berendsen

2018-11-15 Thread Gonzalez Fernandez, Cristina
Thank you very much Justin for your reply

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On behalf of Justin Lemkul
Sent: Wednesday, 14 November 2018 14:51
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] pcoupl Berendsen



On 11/14/18 5:28 AM, Gonzalez Fernandez, Cristina wrote:
> Hi Justin,
>
> I have taken a few days to answer you because I was trying to reduce the 
> discrepancies between the pressure I obtain after the simulation and the one 
> I set in the .mdp file. However, I have not achieved very good results. As I 
> indicated in the previous email, in my simulations ref_p = 1 bar and 
> ref_t = 298 K. According to the output of gmx energy, the pressure average 
> after the simulation is 0.19 bar, Err.est = 0.59 and RMSD = 204.98; and for 
> the temperature, average = 298.003, Err.est = 0.0032 and RMSD = 2.76.
>
>  From these results and your previous email, as the error (Err.est) in 
> the pressure is of the same magnitude as the

That error estimate is not relevant here. Your reported pressure is 0.19 ± 205, 
which is indistinguishable from the target pressure of 1.

-Justin

> pressure value after the simulation, the pressure significantly differs from 
> the reference value (1 bar), so, for example, more simulation time will be 
> required. This is also supported by the high RMSD, which is on the order of 
> hundreds. For the temperature, the error is 5 orders of magnitude lower than 
> the obtained value and the RMSD is very low. This suggests that the system 
> has reached the equilibrium temperature. Are these reasons correct?
>
> Another thing that makes me think that the pressure I obtain is not correct 
> is that the pressure I obtained after simulation and the one I obtain after 
> analysis also differ significantly (0.19 and 3.9 bar respectively).
>
> I have used long NPT equilibration and simulation times (50 ns) and the 
> results are similar to the ones indicated above, which apparently means that 
> the system is stable.
>
>
>  Given these discrepancies, do you think the differences are not as important 
> as I am considering? What could I do to obtain more accurate pressure values?
>
> Regarding Parrinello-Rahman being "not stable for low pressures", I 
> understood that at low pressures it is sometimes difficult to obtain the 
> reference pressure with Parrinello-Rahman. I was trying to use this article 
> to explain why my simulation pressures differ from ref_p, but, as you say, I 
> have also read papers that use Parrinello-Rahman for simulating 1 bar 
> systems.
>
> Thank you very much for all your help,
>
> C.
>
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>  On behalf of 
> Justin Lemkul Sent: Thursday, 8 November 2018 14:01
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] pcoupl Berendsen
>
>
>
> On 11/8/18 7:50 AM, Gonzalez Fernandez, Cristina wrote:
>> Dear Gromacs users,
>>
>> In my simulations, I have specified ref_p= 1bar but after MD 
>> simulation I obtain pressures equal to 0.19 bar (even
> A pressure without an error bar is a meaningless value. The fluctuations of 
> pressure in most systems are on the order of tens or hundreds of bar, meaning 
> your result is indistinguishable from the target value.
>
>> with long simulation times) when using pcoupl=Parrinello-Rahman. I 
>> know that Parrinello-Rahman is recommended for production runs and 
>> Berendsen for NPT equilibration. However, I have read in an article 
>> that Parrinello-Rahman is not stable for low pressures, so in such 
>> situations it's better to use Berendsen. I have tried to use Berendsen 
>> for
> I would be interested to know how this "not stable for low pressures"
> was determined, because it seems completely unlikely to be true. Most 
> MD simulations nowadays use Parrinello-Rahman for pressure coupling at 
> 1
> bar/1 atm without any issue if the system is properly equilibrated (and if 
> not, the problem is with preparation, not the barostat itself).
>
>> MD simulation but I obtain this Warning and I cannot remove it with the 
>> -maxwarn option.
>>
>> "Using Berendsen pressure coupling invalidates the true ensemble for the 
>> thermostat"
>>
>>
>> How can I use Berendsen for MD simulation?
> Simply, you can't, and you shouldn't. The Berendsen method produces an 
> invalid statistical mechanical ensemble. It relaxes systems quickly and is 
> therefore still useful for equilibration, but should never be employed during 
> data collection. Full stop.
>
> -Justin
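
For reference, a minimal pressure-coupling sketch of the kind described above
for a production run; ref_p matches the 1 bar used in this thread, while tau_p
and the compressibility are typical values for water, not taken from here:

pcoupl           = Parrinello-Rahman
pcoupltype       = isotropic
tau_p            = 5.0       ; ps
ref_p            = 1.0       ; bar
compressibility  = 4.5e-5    ; bar^-1, appropriate for water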
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] Setting rcon according to system

2018-11-15 Thread Sergio Perez
Actually the clay uses the ClayFF force field, which has bonds only on the OH
units; the rest of the atoms are just LJ spheres with a charge. I guess the
conclusion is still the same?

On Wed, Nov 14, 2018 at 8:47 PM Mark Abraham 
wrote:

> Hi,
>
> On Wed, Nov 14, 2018 at 3:18 AM Sergio Perez 
> wrote:
>
> > Hello,
> > First of all thanks for the help :)
> > I don't necessarily need to run it with 100 processors, I just want to
> know
> > how much I can reduce rcon taking into account the knowledge of my system
> > without compromising the accuracy. Let me give some more details of my
> > system. The system is a sodium montmorillonite clay with two solid
> > alumino-silicate layers with two aqueous interlayers between them. The
> >
>
> I assume the silicate network has many bonds over large space - these
> adjacent bonds are the issue, not uranyl. (You would have the same problem
> with a clay-only system.)
>
>
> > system has TIP4P waters, some OH bonds within the clay and the bonds of
> the
> > uranyl hydrated ion described in my previous email as constraints. The
> > system is orthorhombic, 4.67070 x 4.49090 x 3.77930 nm, and has 9046 atoms.
> >
> > This is the output of GROMACS:
> >
> > Initializing Domain Decomposition on 100 ranks
> > Dynamic load balancing: locked
> > Initial maximum inter charge-group distances:
> >two-body bonded interactions: 0.470 nm, Tab. Bonds NC, atoms 10 13
> > Minimum cell size due to bonded interactions: 0.000 nm
> > Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.842
> nm
> > Estimated maximum distance required for P-LINCS: 0.842 nm
> > This distance will limit the DD cell size, you can override this with
> -rcon
> > Guess for relative PME load: 0.04
> > Will use 90 particle-particle and 10 PME only ranks
> >
>
> GROMACS has guessed to use 90 ranks in the real-space domain decomposition,
> e.g. as an array of 6x5x3 ranks.
>
>
> > This is a guess, check the performance at the end of the log file
> > Using 10 separate PME ranks, as guessed by mdrun
> > Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> > Optimizing the DD grid for 90 cells with a minimum initial size of 1.052
> nm
> > The maximum allowed number of cells is: X 4 Y 4 Z 3
> >
>
> ... but only 4x4x3=48 ranks can work with the connectivity of your input.
> Thus you are simply using too many ranks for a small system. You'd have to
> relax the tolerances quite a lot to get to use 90 ranks. Just follow the
> first part of the message advice and use fewer ranks :-)
>
> Mark
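
As a rough illustration of that advice (the mdrun_mpi binary name is taken
from the error message below; the .tpr name is a placeholder), a launch on at
most 48 ranks, matching the 4x4x3 limit reported above, would look like:

$ mpirun -np 48 mdrun_mpi -s topol.tpr

If mdrun still dedicates some of those ranks to PME, the remaining
particle-particle grid has to fit the same 4x4x3 limit, so going somewhat
below 48 is also reasonable.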
>
> ---
> > Program: mdrun_mpi, version 2018.1
> > Source file: src/gromacs/domdec/domdec.cpp (line 6571)
> > MPI rank:0 (out of 100)
> >
> > Fatal error:
> > There is no domain decomposition for 90 ranks that is compatible with the
> > given box and a minimum cell size of 1.05193 nm
> > Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> > settings
> > Look in the log file for details on the domain decomposition
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > ---
> >
> >
> > Thank you for your help!
> >
> > On Wed, Nov 14, 2018 at 5:28 AM Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > Possibly. It would be simpler to use fewer processors, such that the
> > > domains can be larger.
> > >
> > > What does mdrun think it needs for -rcon?
> > >
> > > Mark
> > >
> > > On Tue, Nov 13, 2018 at 7:06 AM Sergio Perez 
> > > wrote:
> > >
> > > > Dear gmx community,
> > > >
> > > > I have been running my system without any problems with 100
> > > > processors. But I decided to make some of the bonds of my main
> > > > molecule constraints. My molecule is not an extended chain, it is a
> > > > molecular hydrated ion, in particular the uranyl cation with 5 water
> > > > molecules forming a pentagonal bipyramid. At this point I get a
> > > > domain decomposition error and I would like to reduce rcon in order
> > > > to run with 100 processors. Since I know, from the shape of my
> > > > molecule, that two atoms connected by several constraints will never
> > > > be further apart than 0.6 nm, can I use this safely for -rcon?
> > > >
> > > > Thank you very much!
> > > > Best regards,
> > > > Sergio Pérez-Conesa
> > > > --
> > > > Gromacs Users mailing list
> > > >
> > > > * Please search the archive at
> > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > > posting!
> > > >
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > > * For (un)subscribe requests visit
> > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> > > > send a mail to gmx-users-requ...@gromacs.org.
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > >