Re: [gmx-users] Set up anti parallel membrane system for CompEL simulation

2020-05-04 Thread Jochen Hub




On 04.05.20 at 23:21, Jochen Hub wrote:



On 04.05.20 at 21:33, Zheng Ruan wrote:

Hi,

I'm trying to set up an antiparallel membrane system for a CompEL 
simulation.

It is relatively straightforward to convert an existing single membrane
system to a parallel system by using

# gmx genconf -f system.gro -nbox 1 1 2 -o system.parallel.gro

However, is there an easy way to invert one of the membrane protein
configurations along with the membrane, water and ions?


You could try:

gmx editconf -rotate 90 0 0


Sorry, I meant of course editconf -rotate 180 0 0

You probably have to combine this with an editconf -translate, together 
with a manual extension of the box by 1-2 Angstroem to avoid overlapping 
water molecules at the box edge.
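
For illustration, such a sequence could look roughly like this (a sketch only; 
the shift and the final box size are placeholders that depend on your box 
height Lz, and the two coordinate files still need to be merged into one file 
by hand or with a small script):

gmx editconf -f system.gro -rotate 180 0 0 -o flipped.gro
gmx editconf -f flipped.gro -translate 0 0 <Lz+0.15> -o flipped_shifted.gro
gmx editconf -f merged.gro -box <Lx> <Ly> <2*Lz+0.3> -o antiparallel.gro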


Jochen



Cheers, Jochen



Thanks,
Zheng





--
---
Prof. Dr. Jochen Hub
Theoretical Biophysics Group
Department of Physics, Saarland University
Campus E2 6, Zi. 4.11, 66123 Saarbruecken, Germany
Phone: +49 (0)681 302-2740
https://biophys.uni-saarland.de/
---


Re: [gmx-users] Set up anti parallel membrane system for CompEL simulation

2020-05-04 Thread Jochen Hub




On 04.05.20 at 21:33, Zheng Ruan wrote:

Hi,

I'm trying to set up an antiparallel membrane system for a CompEL simulation.
It is relatively straightforward to convert an existing single membrane
system to a parallel system by using

# gmx genconf -f system.gro -nbox 1 1 2 -o system.parallel.gro

However, is there an easy way to invert one of the membrane protein
configurations along with the membrane, water and ions?


You could try:

gmx editconf -rotate 90 0 0

Cheers, Jochen



Thanks,
Zheng



--
---
Prof. Dr. Jochen Hub
Theoretical Biophysics Group
Department of Physics, Saarland University
Campus E2 6, Zi. 4.11, 66123 Saarbruecken, Germany
Phone: +49 (0)681 302-2740
https://biophys.uni-saarland.de/
---


Re: [gmx-users] 2020.1: comm broken with -update gpu -bonded gpu ?

2020-03-23 Thread Jochen Hub

Hi Magnus and Justin,

On 23.03.20 at 13:52, Magnus Lundborg wrote:
> Hi Jochen,
>
> Have you tested if this happens with -update cpu? Perhaps it's the
> bonded on GPU that's the problem, unrelated to updating.


The problem only appears with -update gpu, not with -update cpu.

> I didn't see a
> Redmine issue yet. Could you make one?

I tried, but I seem to lack permissions: On

https://redmine.gromacs.org/projects/gromacs/issues/new

I get "You are not authorized to access this page." Did I miss something 
here?


Thanks,
Jochen


On 23.03.20 at 13:52, Magnus Lundborg wrote:

Hi Jochen,

Have you tested if this happens with -update cpu? Perhaps it's the 
bonded on GPU that's the problem, unrelated to updating. I didn't see a 
Redmine issue yet. Could you make one?


Cheers,
Magnus

On 2020-03-19 18:01, Jochen Hub wrote:

Hi developers,

I am running a simple DPPC membrane (Berger force field, PME, 1nm 
cutoff, 4fs time step, all standard) with 2020.1 and


comm-mode    = Linear
nstcomm  = 100
comm-grps    = DPPC water  ; OR System

but the membrane rapidly drifts along the z direction: approx. once 
across the box per 100ps, and accelerating over time.


This happens only with

mdrun -update gpu -bonded gpu

but not with

mdrun -update gpu -bonded cpu   (no spelling mistake)

(with a GTX 1070Ti).

Also no problems with 2018.6 or 2019.6.

Seems like the center of mass motion removal is broken when doing both 
*bonded and updating* on the GPU. Is this issue known?


Cheers,
Jochen





--
-------
Prof. Dr. Jochen Hub
Theoretical Biophysics Group
Department of Physics, Saarland University
Campus E2 6, Zi. 4.11, 66123 Saarbruecken, Germany
Phone: +49 (0)681 302-2740
https://biophys.uni-saarland.de/
---

[gmx-users] 2020.1: comm broken with -update gpu -bonded gpu ?

2020-03-19 Thread Jochen Hub

Hi developers,

I am running a simple DPPC membrane (Berger force field, PME, 1nm 
cutoff, 4fs time step, all standard) with 2020.1 and


comm-mode    = Linear
nstcomm      = 100
comm-grps    = DPPC water  ; OR System

but the membrane rapidly drifts along the z direction: approx. once 
across the box per 100ps, and accelerating over time.


This happens only with

mdrun -update gpu -bonded gpu

but not with

mdrun -update gpu -bonded cpu   (no spelling mistake)

(with a GTX 1070Ti).

Also no problems with 2018.6 or 2019.6.

Seems like the center of mass motion removal is broken when doing both 
*bonded and updating* on the GPU. Is this issue known?
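
(For reference, the drift can be monitored by following the membrane's center 
of mass over time, e.g. with gmx traj; a sketch, with the group name DPPC 
taken from the mdp above and file names as placeholders:)

gmx traj -f traj.xtc -s topol.tpr -n index.ndx -com -ox dppc_com.xvg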


Cheers,
Jochen

--
---
Prof. Dr. Jochen Hub
Theoretical Biophysics Group
Department of Physics, Saarland University
Campus E2 6, Zi. 4.11, 66123 Saarbruecken, Germany
Phone: +49 (0)681 302-2740
https://biophys.uni-saarland.de/
---


[gmx-users] Performance with Epyc Rome

2019-08-28 Thread Jochen Hub

Dear Gromacs users,

Does someone already have experience with the new AMD Epyc Rome? Can we 
expect that 4 Epyc cores per Nvidia RTX 2080 on a CPU/GPU node are 
sufficient for common simulations (as one would expect with a common 
Intel Xeon)?


Many thanks,
Jochen


--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] wham analysis

2019-08-26 Thread Jochen Hub

Hi,

you can also use the pullf output for WHAM (option -if), this may be easier.
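
A minimal invocation could look like this (a sketch; tpr-files.dat and 
pullf-files.dat simply list one .tpr and one pull-force .xvg file per line, 
in the same order):

gmx wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg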

Cheers, Jochen

On 26.08.19 at 12:59, Negar Parvizi wrote:


  Dear all, I used Justin's tutorial (Tutorial 3: Umbrella Sampling, GROMACS 
Tutorial) for my file, which is a protein-ligand complex.
The pulling force was in the Y direction. When the umbrella sampling finished, "wham" 
couldn't analyze the data because wham is in the z direction. What should I do now for the wham 
analysis? How can I change it to the Y direction? What Justin said, I didn't understand:
"WHAM does not presuppose the axis or vector; it does what you tell it. If you're referring 
to the x-axis label in the PMF profile being "z," that is just a generic (and perhaps 
imprecise) label that should be changed to the Greek character xi, per conventional notation."

So I decided to copy the error; here it is:

Found 25 tpr and 25 pull force files in tpr-files.dat and pullf-files.dat, 
respectively
Reading 12 tpr and pullf files
Automatic determination of boundaries...
Reading file umbrella0.tpr, VERSION 5.1.4 (single precision)
File umbrella0.tpr, 1 coordinates, geometry "distance", dimensions [N N Y], (1 
dimensions)

     Pull group coordinates not expected in pullx files.
     crd 0) k = 1000   position = 0.840198
     Use option -v to see this output for all input tpr files


Reading pull force file with pull geometry distance and 1 pull dimensions
Expecting these columns in pull file:

     0 reference columns for each individual pull coordinate
     1 data columns for each pull coordinate

With 1 pull groups, expect 2 columns (including the time column)
Reading file umbrella71.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella98.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella111.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella119.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella139.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella146.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella157.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella180.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella202.tpr, VERSION 5.1.4 (single precision)



I would appreciate any help.
Thanks in advance,
Negar



--
-------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] OPLS parameters for O2

2019-05-10 Thread Jochen Hub
The O2 model pointed out by David is certainly OK for many applications, 
but Luca Monticelli did some careful work on testing O2 models. That is 
worth taking a look at as well.


Cheers,
Jochen

On 10.05.19 at 00:42, Shadi Fuladi wrote:

Hi,

I'm trying to test molecular oxygen diffusion in electrolytes using the OPLS
force field. Are there any O2 parameters tested with the OPLS-AA force field?

Thanks,
SF



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

[gmx-users] Detection for best SIMD instructions failed, using SIMD None

2019-04-04 Thread Jochen Hub

Hi developers,

cmake is not able to detect that our CPUs support AVX2_256:

-- Detecting best SIMD instructions for this CPU
-- Detection for best SIMD instructions failed, using SIMD - None

This happens on

- Arch Linux
- Gromacs 2018.6 and 2019.1
- cmake version 3.14.1
- with Intel Xeon E5-2643 v4 and Xeon E-2136
- gcc 8.2.1

When specifying manually

cmake -DGMX_SIMD=AVX2_256

all is fine and mdrun runs smoothly with expected performance.
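
(For reference, a complete configure line with the manual SIMD setting could 
look like this; the install prefix and further options are site-specific:)

cmake .. -DGMX_SIMD=AVX2_256 -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs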

Is this the intended behavior? Is there something we should report to 
find the reason for this behavior?


Thank you,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Installation with CUDA on Debian / gcc 6+

2019-04-01 Thread Jochen Hub

Hi Mark, Szilárd, and Åke,

many thanks for your help, this fully answers our questions.

Cheers,
Jochen

On 01.04.19 at 18:28, Szilárd Páll wrote:

On Mon, Apr 1, 2019 at 5:08 PM Jochen Hub  wrote:


Hi Åke,

ah, thanks, we had indeed a CUDA 8.0 on our Debian. So we'll try to
install CUDA 10.1.

But as a side question: Doesn't the supported gcc version strongly
depend on the Linux distribution, see here:

https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html



On paper, yes; in practice, not so much.

The officially listed "qualified" combinations are not strict (and hard)
requirement-combinations; as long as the CUDA dkms compiles for your kernel
and the nvcc works with the gcc compiler you provide it, things will
generally work. Kernels or compilers shipped by a distro can deviate enough
from others that issues may arise, but those cases are not overly common
(as far as I know, though admittedly I don't maintain diverse
infrastructure).
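
(As an aside, if nvcc rejects the default distro gcc, one can point it at 
another installed gcc when configuring GROMACS; a sketch, with the compiler 
path being just an example:)

cmake .. -DGMX_GPU=ON -DCUDA_HOST_COMPILER=/opt/gcc-5/bin/gcc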

By the way, your distro is not "qualified" at all ;)

--
Szilárd

Thanks,

Jochen


On 01.04.19 at 16:52, Åke Sandgren wrote:

Use a newer version of CUDA?

CUDA 10.1 supports GCC 8.

On 4/1/19 4:33 PM, Jochen Hub wrote:

Hi all,

we try to install Gromacs with CUDA support on a Debian system. Cuda
complains about the gcc 6.30 naively installed on Debian, since Cuda
supports gcc only until gcc 5.

The problem is that Debian removed packages for gcc-5, so installing an
older gcc is more tedious.

We understand that CUDA support for gcc strongly depends on the Linux
Distribution, see

https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

Therefore: Is there any workaround to compile Gromacs with CUDA under
Debian with a gcc 6+ ?

Thanks a lot,
Jochen








--
-------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Installation with CUDA on Debian / gcc 6+

2019-04-01 Thread Jochen Hub

Hi Åke,

ah, thanks, we had indeed a CUDA 8.0 on our Debian. So we'll try to 
install CUDA 10.1.


But as a side question: Doesn't the supported gcc version strongly 
depend on the Linux distribution, see here:


https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

Thanks,
Jochen


On 01.04.19 at 16:52, Åke Sandgren wrote:

Use a newer version of CUDA?

CUDA 10.1 supports GCC 8.

On 4/1/19 4:33 PM, Jochen Hub wrote:

Hi all,

we try to install Gromacs with CUDA support on a Debian system. CUDA
complains about the gcc 6.3.0 natively installed on Debian, since CUDA
supports gcc only up to gcc 5.

The problem is that Debian removed packages for gcc-5, so installing an
older gcc is more tedious.

We understand that CUDA support for gcc strongly depends on the Linux
Distribution, see

https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

Therefore: Is there any workaround to compile Gromacs with CUDA under
Debian with a gcc 6+ ?

Thanks a lot,
Jochen






--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

[gmx-users] Installation with CUDA on Debian / gcc 6+

2019-04-01 Thread Jochen Hub

Hi all,

we try to install Gromacs with CUDA support on a Debian system. CUDA 
complains about the gcc 6.3.0 natively installed on Debian, since CUDA 
supports gcc only up to gcc 5.


The problem is that Debian removed packages for gcc-5, so installing an 
older gcc is more tedious.


We understand that CUDA support for gcc strongly depends on the Linux 
Distribution, see


https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

Therefore: Is there any workaround to compile Gromacs with CUDA under 
Debian with a gcc 6+ ?


Thanks a lot,
Jochen


--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Fixing the molecule in the centre of the micelle

2018-12-13 Thread Jochen Hub

Hi,

I would use

pull-geometry = distance

with pull-dim = Y Y Y

between the COMs of the micelle and the drug. You can use this already 
during the energy minimization.


I would not use comm-mode = angular, this is meant for other applications.
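
As an illustration (a sketch in the newer per-coordinate pull syntax; the 
group names API and DLiPC are taken from your mdp snippet, and the force 
constant is a placeholder):

pull                 = yes
pull-ngroups         = 2
pull-ncoords         = 1
pull-group1-name     = DLiPC
pull-group2-name     = API
pull-coord1-type     = umbrella
pull-coord1-geometry = distance
pull-coord1-groups   = 1 2
pull-coord1-dim      = Y Y Y
pull-coord1-start    = yes
pull-coord1-rate     = 0
pull-coord1-k        = 1000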

Cheers,
Jochen

On 21.11.18 at 19:29, Alexey Kaa wrote:

Dear Gromacs users,

I am wondering if you could help with advice. In my simulation I have a
drug that is initially put into the centre of a micelle. It tends to drift
away towards the micelle-water interface. I would like to run an umbrella
sampling simulation in order to get a potential of mean force function from
the centre of the micelle (let's assume it is spherical) towards bulk. If I
run energy minimisation and the NPT-equillibration the drug molecule (or
the micelle) already drifts away to the energetically more favourable
position, but obviously these steps must take place as otherwise we have a
non-balanced system. I tried to let the molecule equilibrate first and then
pull it through the center towards the opposite side of the micelle, but
then it rather rotates the whole micelle (even if I apply comm-mode Angular
to the micelle-building type of molecule), than goes through the centre. I
am wondering if it is possible to fix the centers of mass of both - the
drug molecule and also the center of mass of the micelle through the
minimisation/equilibration steps before applying the pull-code, but so that
the micelle-constructing molecules would equilibrate inside it and also the
pressure of water would become uniform outside? Or am I restraining the
rotation of the micelle wrongly?

API = drug, DLiPC = phospholipids making a micelle.
; mode for center of mass motion removal
comm-mode= Angular
; number of steps for center of mass motion removal
nstcomm  = 1
comm-grps= API DLiPC

Thanks,
Aleksei



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Non-symmetric PMF across lipid bilayer

2018-11-21 Thread Jochen Hub





--
-------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

[gmx-users] AMBER ff14 RNA force field

2018-06-07 Thread Jochen Hub

Hi all,

does anyone know if the recent AMBER ff14 RNA force field by Tana, 
Piana, Dirks, and Shaw is available for download 
(www.pnas.org/cgi/doi/10.1073/pnas.1713027115)?


Google does not find anything.

Many thanks,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] gmx wham (again)

2018-04-23 Thread Jochen Hub

Hi Alex, hi Justin,

On 18.04.18 at 09:17, Alex wrote:

I suppose this question is mostly for Justin...

Let me remind what I am dealing with and ask if my idea is correct.

I have a rectangular membrane in XY with a pore at (X/2, Y/2) in water 
and want to get the Gibbs free energy curve for an ion. For this, I have 
a bunch of starting configurations at (X/2, Y/2) and Z varying between 
some -z0 and z0. The bias in the "fake" pull mdp is applied as N N Y. 
Near the membrane, this means the entire plane is sampled, which adds 
contributions I am not interested in. I want the pore and a small region 
of the membrane around it, not the membrane, given its propertie


So, I applied a weak (k=50) in-plane restraint to the ion for each of the 
sampled configurations -- to keep the sampling region a bit closer to 
the pore. The results look completely different, but they finally make 
very good qualitative sense. The ion still walks around within a small 
disk, but not much -- this tentatively makes me happy.


What you want is a /cylindrical flat-bottomed (FB) position restraint/ 
around the pore, see the manual for "flat-bottomed position restraints". 
This makes sure that the ion stays close to the pore in the xy-plane, 
so you avoid sampling problems at the pore entry and exit.
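
As an illustration only (atom index, radius, and force constant are 
placeholders; g = 2 selects a cylinder whose axis is along z, see the manual 
table for the other shapes), such a restraint in the ion's topology could 
look like:

[ position_restraints ]
;  ai  funct  g   r (nm)   k (kJ mol^-1 nm^-2)
    1    2    2   0.5      1000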


Note that the FB position restraint will change the entropy of the ion 
in the x-y plane. The entropic correction needed for your PMFs (to 
remove the contribution from the flat-bottomed restraint) is worked out 
in the Appendix of this paper:


http://dx.doi.org/10.1021/acs.jpcb.6b11279

This correction allows you to refer your PMF to a well-defined /area per 
pore/ or, equivalently, /density of pores/ in the membrane (which 
sometimes causes a lot of confusion).


I would definitely not use a simple (not flat-bottomed) position 
restraint to restrain the ion near the pore - this may cause artifacts 
in your PMF.


The cylinder-based reaction coordinate (mentioned by Justin) affects how 
the center of mass of the membrane is computed in Z-direction. Hence, it 
changes the *reaction coordinate*. But it does *not* affect the sampled 
xy-plane of the ion.


The idea behind the cylinder-based reaction coordinate is to avoid that 
permeation barriers are smeared out due to membrane undulations. Hence, 
for a small membrane that hardly undulates, a simple reaction coordinate 
is probably fine (using pull_coord1_geometry = direction or 
direction-periodic).


I hope this helped.

Cheers,
Jochen



Would you believe the results obtained this way, assuming otherwise 
proper setup?


Thanks,

Alex



--
-------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-05 Thread Jochen Hub



On 03.04.18 at 19:03, Szilárd Páll wrote:

On Tue, Apr 3, 2018 at 5:10 PM, Jochen Hub <j...@gwdg.de> wrote:



On 03.04.18 at 16:26, Szilárd Páll wrote:


On Tue, Apr 3, 2018 at 3:41 PM, Jochen Hub <j...@gwdg.de> wrote:



On 29.03.18 at 20:57, Szilárd Páll wrote:


Hi Jochen,

For that particular benchmark I only measured performance with
1,2,4,8,16 cores with a few different kinds of GPUs. It would be easy
to do the runs on all possible core counts with increments of 1, but
that won't tell a whole lot more than what the performance is of a run
using a E5-2620 v4 CPU (with some GPUs) on a certain core count. Even
extrapolating from that 2620 to a E5-2630 v4 and expecting to get a
good estimate is tricky (given that the latter has 25% more cores for
the same TDP!), let alone to any 26xxv4 CPU or the current-gen Skylake
chips which have different performance characteristics.

As Mark notes, there are some mdp options as well as some system
characteristics that will have a strong influence on performance -- if
tens of % is something you consider "strong" (some users are fine to
be within a 2x ballpark :).

What's worth considering is to try to avoid ending up strongly CPU or
GPU bound from the start. That may admittedly be a difficult task if
you run e.g. both biased MD with large pull groups and all-bonds
constraints with Amber FF on large-ish (>100k) systems as well as
vanilla MD with CHARMM FF on small-ish (<25k) systems. On the same
hardware the former will be more prone to be CPU-bound, while the
latter will have a relatively more GPU-heavy workload.

There are many factors that influence the performance of a run, and
therefore giving the one right answer to your question is not really
possible. What I can say is that 7-10 "core-GHz" per fast Pascal GPU is
generally sufficient for "typical" protein simulations to run at >=85%
of peak.




Hi Szilárd,

many thanks, this already helps me a lot. Just to get it 100% clear what you
mean with core-GHz: a 10-core E5-2630v4 with 2.2 GHz would have 22 core-GHz,
right?



Yes, that's what I was referring to; note that a 2630v4 won't be
running at a 2.2 GHz base clock if you run AVX code ;)



Okay, I didn't know this. What would be the base clock instead with AVX
code?



Short version: it's not easy to find out the details, as Intel conveniently
omits them from the specs, but it's AFAIK 300-400 MHz lower; also note
that "turbo bins" change as a function of the number of cores used (so you
can't just benchmark on a few cores leaving the rest idle). Also, the actual
clock speed (and overall performance) depends on other factors too, so
benchmarking and extrapolation might require considering those as well.

Let me know if you are interested in more details.


Hi Szilárd,

many thanks, that helps a lot!

Best,
Jochen



--
Szilárd






Thanks,
Jochen




Cheers,
--
Szilárd


On Wed, Mar 28, 2018 at 4:31 AM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:



Hi,

On Tue, Mar 27, 2018 at 6:43 PM Jochen Hub <j...@gwdg.de> wrote:


Dear Gromacs community, dear Mark,

Mark showed in the webinar today that having more than 8 Xeon
E5-26XXv4
cores does not help when using a GTX 1080Ti and PME on the GPU.



... for that particular simulation system.



Unfortunately, there were no data points between 4 and 8 CPU cores,
hence it was not clear at which #cores the performance actually levels
off. With a GTX 1080 (not Ti) I once found that having more than 5
Xeon
cores does not help, if not having UB potentials, but I don't have a
1080Ti at hand to test for that.



Those data points may not have been run. Szilard might have the data -
this
was GLIC 2fs comparing 1080 with 1080Ti from the recent plots he
shared.



So my questions are:

- At which number of E5-26XXv4 cores does the performance for common
systems level off with a 1080Ti for your test system (GLIC)?

- Does the answer depend strongly on the mdp settings (in particular
on
the LJ cutoff)?



Longer LJ cutoff (e.g. from different forcefields) will certainly
require
more non-bonded work, so then fewer CPU cores would be needed to do the
remaining non-offloaded work. However any sweet spot for a particular
.tpr
would be highly dependent on other effects, such as the ratio of
solvent
(which typically has less LJ and simpler update) to solute, or the
density
of dihedral or U-B interactions. And doing pulling or FEP is very
different
again. The sweet spot for the next project will be elsewhere, sadly.

This would help us a lot when choosing the appropriate CPU for a
1080Ti.




Many thanks for any suggestions,
Jochen


Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-03 Thread Jochen Hub



On 03.04.18 at 16:26, Szilárd Páll wrote:

On Tue, Apr 3, 2018 at 3:41 PM, Jochen Hub <j...@gwdg.de> wrote:



On 29.03.18 at 20:57, Szilárd Páll wrote:


Hi Jochen,

For that particular benchmark I only measured performance with
1,2,4,8,16 cores with a few different kinds of GPUs. It would be easy
to do the runs on all possible core counts with increments of 1, but
that won't tell a whole lot more than what the performance is of a run
using a E5-2620 v4 CPU (with some GPUs) on a certain core count. Even
extrapolating from that 2620 to a E5-2630 v4 and expecting to get a
good estimate is tricky (given that the latter has 25% more cores for
the same TDP!), let alone to any 26xxv4 CPU or the current-gen Skylake
chips which have different performance characteristics.

As Mark notes, there are some mdp options as well as some system
characteristics that will have a strong influence on performance -- if
tens of % is something you consider "strong" (some users are fine to
be within a 2x ballpark :).

What's worth considering is to try to avoid ending up strongly CPU or
GPU bound from the start. That may admittedly be a difficult task if
you run e.g. both biased MD with large pull groups and all-bonds
constraints with Amber FF on large-ish (>100k) systems as well as
vanilla MD with CHARMM FF on small-ish (<25k) systems. On the same
hardware the former will be more prone to be CPU-bound, while the
latter will have a relatively more GPU-heavy workload.

There are many factors that influence the performance of a run, and
therefore giving the one right answer to your question is not really
possible. What I can say is that 7-10 "core-GHz" per fast Pascal GPU is
generally sufficient for "typical" protein simulations to run at >=85%
of peak.



Hi Szilárd,

many thanks, this already helps me a lot. Just to get it 100% clear what you
mean with core-GHz: A 10-core E5-2630v4 with 2.2 GHz would have 22 core-GHz,
right?


Yes, that's what I was referring to; note that a 2630v4 won't be
running at a 2.2 GHz base clock if you run AVX code ;)


Okay, I didn't know this. What would be the base clock instead with AVX 
code?





Thanks,
Jochen




Cheers,
--
Szilárd


On Wed, Mar 28, 2018 at 4:31 AM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:


Hi,

On Tue, Mar 27, 2018 at 6:43 PM Jochen Hub <j...@gwdg.de> wrote:


Dear Gromacs community, dear Mark,

Mark showed in the webinar today that having more than 8 Xeon E5-26XXv4
cores does not help when using a GTX 1080Ti and PME on the GPU.



... for that particular simulation system.



Unfortunately, there were no data points between 4 and 8 CPU cores,
hence it was not clear at which #cores the performance actually levels
off. With a GTX 1080 (not Ti) I once found that having more than 5 Xeon
cores does not help, if not having UB potentials, but I don't have a
1080Ti at hand to test for that.



Those data points may not have been run. Szilard might have the data -
this
was GLIC 2fs comparing 1080 with 1080Ti from the recent plots he shared.



So my questions are:

- At which number of E5-26XXv4 cores does the performance for common
systems level off with a 1080Ti for your test system (GLIC)?

- Does the answer depend strongly on the mdp settings (in particular on
the LJ cutoff)?



Longer LJ cutoff (e.g. from different forcefields) will certainly require
more non-bonded work, so then fewer CPU cores would be needed to do the
remaining non-offloaded work. However any sweet spot for a particular
.tpr
would be highly dependent on other effects, such as the ratio of solvent
(which typically has less LJ and simpler update) to solute, or the
density
of dihedral or U-B interactions. And doing pulling or FEP is very
different
again. The sweet spot for the next project will be elsewhere, sadly.

This would help us a lot when choosing the appropriate CPU for a 1080Ti.



Many thanks for any suggestions,
Jochen


Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-03 Thread Jochen Hub



On 29.03.18 at 20:57, Szilárd Páll wrote:

Hi Jochen,

For that particular benchmark I only measured performance with
1,2,4,8,16 cores with a few different kinds of GPUs. It would be easy
to do the runs on all possible core counts with increments of 1, but
that won't tell a whole lot more than what the performance is of a run
using a E5-2620 v4 CPU (with some GPUs) on a certain core count. Even
extrapolating from that 2620 to a E5-2630 v4 and expecting to get a
good estimate is tricky (given that the latter has 25% more cores for
the same TDP!), let alone to any 26xxv4 CPU or the current-gen Skylake
chips which have different performance characteristics.

As Mark notes, there are some mdp options as well as some system
characteristics that will have a strong influence on performance -- if
tens of % is something you consider "strong" (some users are fine to
be within a 2x ballpark :).

What's worth considering is to try to avoid ending up strongly CPU or
GPU bound from the start. That may admittedly be a difficult task if
you run e.g. both biased MD with large pull groups and all-bonds
constraints with Amber FF on large-ish (>100k) systems as well as
vanilla MD with CHARMM FF on small-ish (<25k) systems. On the same
hardware the former will be more prone to be CPU-bound, while the
latter will have a relatively more GPU-heavy workload.

There are many factors that influence the performance of a run, and
therefore giving the one right answer to your question is not really
possible. What I can say is that 7-10 "core-GHz" per fast Pascal GPU is
generally sufficient for "typical" protein simulations to run at >=85%
of peak.


Hi Szilárd,

many thanks, this already helps me a lot. Just to get it 100% clear what 
you mean with core-GHz: A 10-core E5-2630v4 with 2.2 GHz would have 22 
core-GHz, right?


Thanks,
Jochen



Cheers,
--
Szilárd


On Wed, Mar 28, 2018 at 4:31 AM, Mark Abraham <mark.j.abra...@gmail.com> wrote:

Hi,

On Tue, Mar 27, 2018 at 6:43 PM Jochen Hub <j...@gwdg.de> wrote:


Dear Gromacs community, dear Mark,

Mark showed in the webinar today that having more than 8 Xeon E5-26XXv4
cores does not help when using a GTX 1080Ti and PME on the GPU.



... for that particular simulation system.



Unfortunately, there were no data points between 4 and 8 CPU cores,
hence it was not clear at which #cores the performance actually levels
off. With a GTX 1080 (not Ti) I once found that having more than 5 Xeon
cores does not help, if not having UB potentials, but I don't have a
1080Ti at hand to test for that.



Those data points may not have been run. Szilard might have the data - this
was GLIC 2fs comparing 1080 with 1080Ti from the recent plots he shared.



So my questions are:

- At which number of E5-26XXv4 cores does the performance for common
systems level off with a 1080Ti for your test system (GLIC)?

- Does the answer depend strongly on the mdp settings (in particular on
the LJ cutoff)?



Longer LJ cutoff (e.g. from different forcefields) will certainly require
more non-bonded work, so then fewer CPU cores would be needed to do the
remaining non-offloaded work. However any sweet spot for a particular .tpr
would be highly dependent on other effects, such as the ratio of solvent
(which typically has less LJ and simpler update) to solute, or the density
of dihedral or U-B interactions. And doing pulling or FEP is very different
again. The sweet spot for the next project will be elsewhere, sadly.

This would help us a lot when choosing the appropriate CPU for a 1080Ti.


Many thanks for any suggestions,
Jochen



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/

[gmx-users] Number of Xeon cores per GTX 1080Ti

2018-03-27 Thread Jochen Hub

Dear Gromacs community, dear Mark,

Mark showed in the webinar today that having more than 8 Xeon E5-26XXv4 
cores does not help when using a GTX 1080Ti and PME on the GPU.


Unfortunately, there were no data points between 4 and 8 CPU cores, 
hence it was not clear at which #cores the performance actually levels 
off. With a GTX 1080 (not Ti) I once found that having more than 5 Xeon 
cores does not help, if not having UB potentials, but I don't have a 
1080Ti at hand to test for that.


So my questions are:

- At which number of E5-26XXv4 cores does the performance for common 
systems level off with a 1080Ti for your test system (GLIC)?


- Does the answer depend strongly on the mdp settings (in particular on 
the LJ cutoff)?


This would help us a lot when choosing the appropriate CPU for a 1080Ti.

Many thanks for any suggestions,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Umbrella Sampling - good histogram but no result in profile

2018-03-14 Thread Jochen Hub
ur help.

Best regards,

Ben





--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] walls with slab of water

2018-02-26 Thread Jochen Hub

Hi,

I would use a flat-bottomed position restraint in Z-direction for this 
purpose, see the Gromacs manual.
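
As a rough sketch (atom index and numbers are placeholders, and the geometry 
flag should be double-checked against the flat-bottomed position restraint 
table in the manual; g = 3 should select the layer/slab geometry along z):

[ position_restraints ]
;  ai  funct  g   r (nm)   k (kJ mol^-1 nm^-2)
    1    2    3   2.0      1000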


Cheers,
Jochen

On 25.02.18 at 10:17, Adriano Santana Sanchez wrote:

Hi,

I am trying to run a SLAB of water with a solute and I want to put a wall
on the z axis edge.

My problem is how to define wall_atomtype in the topology file or in the
.itp

I am using oplsaa.ff force field with SPC/E water.

This is a section of the .mpd:

Neighborsearching and short-range nonbonded interactions
cutoff-scheme= verlet
nstlist  = 1
ns_type  = grid
pbc  = xy
nwall= 2
wall_atomtype= W1 W2
wall_type= 10-4
wall_r_linpot= -1
wall_density = 5 5
wall_ewald_zfac  = 3
ewald_geometry   = 3dc
rlist= 1.2
---
ERROR 1 [file topol.top, line 45]:
   Specified wall atom type W1 is not defined
ERROR 2 [file topol.top, line 45]:
   Specified wall atom type W2 is not defined

  Thanks,
Adriano



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] 2018-beta1: PME/GPU performance question

2017-12-01 Thread Jochen Hub

Hi Szilárd,

thank you for the quick reply.

Yes, but Urey-Bradley makes up only 0.2% of the M-Flops; 99.2% comes from 
"NxN Ewald Elec. + LJ [F]" or "NxN Ewald Elec. + LJ [V]".


Update: I tested TIP3P vs. CHARMM-modified TIP3P - not the problem.

But the cutoff has a big influence on this effect: it goes so far 
that, with 4 CPU cores, one gets better performance with a 1.4 nm cutoff 
than with a 1.0 nm cutoff (!), see:


(New runs, now with Slipids, they also use UB.)

# 128 Slipids, 1nm cutoff (poor at small nt)
 4    58.70 <- !
 6    99.79
 8   123.81
10   142.46
12   148.26

# 128 Slipids, 1.4nm cutoff (seems ok)
 4    78.12 <- !
 6   106.48
 8   127.24
10   130.26
12   134.25


Something similar happens with a 4x larger system, yet not as extreme.

# 512 Slipids, 1nm cutoff (poor at small nt)
 4    21.10
 6    30.67
 8    40.06
10    48.01
12    51.66

# 512 Slipids, 1.4nm cutoff (seems ok)
 4    20.98
 6    29.98
 8    32.99
10    34.68
12    36.03

Do you still think this is due to bonded work?

Thank you,
Jochen


On 01.12.17 at 02:26, Szilárd Páll wrote:

Hi Jochen,

Short answer: (most likely) it is due to the large difference in the
amount of bonded work (relative to the total step time). Does CHARMM36
use UB?

Cheers,
--
Szilárd


On Thu, Nov 30, 2017 at 5:33 PM, Jochen Hub <j...@gwdg.de> wrote:

Dear all,

I have a question on the performance of the new PME-on-GPU code (2018-beta1)
on a Xeon 12-core / GTX 1080 node (CUDA 8, gcc 4.8.5).

With an 84 kAtom system, I find that the simulations do not benefit from a
strong CPU any more; using 6 Xeon cores with a GTX 1080 is sufficient.

#CPU  ns/day
  2    92.88
  4   113.18
  6   123.36
  8   122.62
10   125.76
12   128.84

(This is nice, as we can buy cheap CPUs).

(with pinning, pinstride 1, one GPU, -ntmpi 1)

On a small system (Charmm36 lipid patch, 30 kAtoms), in contrast, the
simulations strongly benefit from more CPU cores.

#CPU  ns/day
  4    84.11
  6   119.24
  8   150.84
10   159.63
12   171.30

Is this the expected behaviour? Do you know why?

Thank you for any hints,
Jochen



--
-------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

[gmx-users] 2018-beta1: PME/GPU performance question

2017-11-30 Thread Jochen Hub

Dear all,

I have a question on the performance of the new PME-on-GPU code 
(2018-beta1) on a Xeon 12-core / GTX 1080 node (CUDA 8, gcc 4.8.5).


With an 84 kAtom system, I find that the simulations do not benefit from 
a strong CPU any more; using 6 Xeon cores with a GTX 1080 is 
sufficient.


#CPU  ns/day
 2    92.88
 4   113.18
 6   123.36
 8   122.62
10   125.76
12   128.84

(This is nice, as we can buy cheap CPUs).

(with pinning, pinstride 1, one GPU, -ntmpi 1)
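
For concreteness, a command line matching these settings would look roughly 
like this (a sketch; the number of OpenMP threads was varied):

gmx mdrun -deffnm md -ntmpi 1 -ntomp 6 -pin on -pinstride 1 -nb gpu -pme gpu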

On a small system (Charmm36 lipid patch, 30 kAtoms), in contrast, the 
simulations strongly benefit from more CPU cores.


#CPU  ns/day
 4    84.11
 6   119.24
 8   150.84
10   159.63
12   171.30

Is this the expected behaviour? Do you know why?

Thank you for any hints,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] Heavy atom - hydrogens bond lengths constraints.

2017-05-06 Thread Jochen Hub



On 06.05.17 at 15:35, Dawid das wrote:

Dear Gromacs Users,

I am a bit anxious about the results I get for my simulation of a protein in
a box of water using the CHARMM27 force field.
Namely, even though I use following options to constrain bond lengths
between
hydrogen atoms and heavy atoms:

constraint_algorithm= lincs
constraints = h-bonds
lincs_iter  = 1
lincs_order = 4

and I do not use

define = -DFLEXIBLE

so if I understand correctly I do simulate rigid water molecules, right?

Anyway, I observe fluctuations of bond lengths between hydrogen atoms and
heavy atoms with an amplitude of up to 0.020 Å.


Is this possibly due to the limited precision of xtc files? Do you get 
the same when reading the trr file?
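
(A quick way to check, as a sketch; the atom numbers are placeholders for one 
heavy-atom/hydrogen pair:)

gmx distance -f traj.trr -s topol.tpr -select 'atomnr 1 2' -oall bond.xvg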




Is this what I can expect? Also, I can clearly see that my water molecules
(TIP3P model) are not stiff.

Best wishes,
Dawid Grabarek



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---


[gmx-users] Strange energy minimization problem with large Martini box

2017-05-06 Thread Jochen Hub

Hi all,

we are having a strange problem with an energy minimization of a large 
box of MARTINI octane. This seems like a Gromacs problem (and not a 
Martini problem), that's why I ask here.


When creating a larger octane box from a small equilibrated octane box 
with gmx genconf, such as:


gmx genconf -f small.gro -nbox 5 5 2 -o large.pdb

Then, starting an MD directly from large.pdb works fine, as it should, 
since small.gro was equilibrated.


However, when first doing an energy minimization on large.pdb, the 
follow-up MD stops with an error due to huge box scaling. This also happens 
with even larger boxes, such as when using -nbox 6 6 2, -nbox 7 7 2, etc.


When creating a smaller box, such as:

gmx genconf -f small.gro -nbox 3 3 2 -o not-so-large.pdb

(or with -nbox 3 3 2), then no problem occurs after the minimization. 
Also, when creating only one layer in the Z-direction, such as -nbox 5 5 1 
or -nbox 10 10 1, all works fine.


Finally: the Fmax values in the unstable energy minimization are reasonably 
low (around 10^3 or 10^2), but also seem a bit unstable, with occasional 
increases to ~10^4. This typically means that there is an issue with the 
energy minimization.


This occurred in 2016.3 and 5.1.3, in single and double precision, with the 
steep and cg integrators, so it seems to be a general problem.


Did anyone see something similar?

Many thanks,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

[gmx-users] Setting up simulation system of a protein-detergent complex

2017-04-20 Thread Jochen Hub

Hi Gromacs users,

if you want to set up an MD simulation system of a protein-detergent 
complex, you can find some helpful scripts and suggestions here:


http://cmb.bio.uni-goettingen.de/build_pdc.html

Cheers,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

[gmx-users] Pulling/restraints along the radius of gyration

2017-04-04 Thread Jochen Hub

Hi Gromacs users,

we have written a little Gromacs extension that allows you to apply 
harmonic restraints along the radius of gyration of a group of atoms 
(such as a protein). If you want to use it, here are some more details:


http://cmb.bio.uni-goettingen.de/rg_pulling.html

Cheers,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] bootstrapping of PMF

2016-11-18 Thread Jochen Hub
Hi Alex,

there is no simple answer to your questions. MD simulations often suffer
from long and unknown autocorrelations. Computing reliable errors from
simulations is difficult since it is not clear which simulation frames
are truly statistically independent. With the bootstrapping of
histograms, you get a reasonable error estimate if

1) Your individual histograms are really independent. This may be
violated, for instance, if the starting position for each window was
similar. For example, if the orientation of the peptide with respect to the
surface was always the same at t=0, or if the internal structure of
the peptide was always the same.

2) Your histograms are sufficiently tight, such that at each position
along the reaction coordinate you have several overlapping histograms (such
as 5 or 10). If neighboring histograms overlap only at +- sigma (or even
only at +-2 sigma), this is clearly violated (see the quick check below).
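
A quick way to check this is to plot all histograms on top of each other,
for instance with

xmgrace -nxy Histo.xvg

(using the histogram file written by gmx wham -hist), and to count how many
histograms cover each position along the reaction coordinate.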

However, getting individual histograms independent from each other is in
practice easier than getting frames from a single simulation independent
(due to the very long autocorrelations within one simulation). Therefore,
bootstrapping complete histograms is in many cases the best one can do
(provided points 1 and 2 are more or less fulfilled).
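
In gmx wham, this complete-histogram bootstrap is selected with
-bs-method b-hist, essentially the command you already used:

gmx wham -it TPR.dat -if Fpull.dat -bins 1200 -nBootstrap 600 -bs-method b-hist -ac -bsres bsResult.xvg -bsprof bsProfs.xvg -o Profile.xvg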

Btw: the integrated autocorrelation times in iact.xvg are mainly
important when you enforce a cyclic (or periodic) PMF, in order to
distribute an offset between the right and left ends over the PMF (to get
it cyclic). But in most cases they are by no means suitable for getting
the "true" autocorrelation time, which you would need in order to compute
the error by binning single long simulations (to make sure your bins are
independent).

I hope this helps a bit.

Cheers,
Jochen



On 09/11/2016 at 17:11, Alex wrote:
> Dear gromacs user,
> 
> I have performed a US simulation to compute the PMF of a peptide adsorbed to a
> solid surface.
> 
> I have already evaluated the result by bootstrapping in gmx wham, using the
> b-hist method with 600 bootstraps and 1200 bins, and also taking the
> integrated autocorrelation times into account.
> 
> Here is the command:
> 
> gmx wham -hist Histo.xvg -nBootstrap 600 -bins 1200 -bs-method b-hist
> -bsres bsResult.xvg -bsprof bsProfs.xvg -if Fpull.dat -it TPR.dat -min 1.95
> -max 4.7 -ac -o Profile.xvg -zprof0 4.69
> 
> And here are the result:
> 
> bsProfs.pdf
> https://drive.google.com/open?id=0B_CbyhnbKqQDLWpnQzJ1WmlINXc
> 
> bsResult.pdf
> https://drive.google.com/open?id=0B_CbyhnbKqQDRU5kRVdfaU5ObFE
> 
> iact.xvg   integrated autocorrelation time
> https://drive.google.com/open?id=0B_CbyhnbKqQDOERHTXlrb095VUU
> 
> My first question is whether I have a well-converged PMF result, based on the
> above files?
> 
> I was also wondering what exactly should be reported later, for example in a
> publication and ... ? The normal profile.xvg without bootstrapping, or this
> bsProfs.xvg? What is the difference between bsProfs.xvg and the normal
> profile.xvg that we get from a normal gmx wham run without
> bootstrapping?
> 
> And why have the first 130 lines of the iact.xvg file been automatically
> commented out from the rest?
> 
> And finally, do we always (and in this case) need to correct the PMF profile by
> the $k_B T \log[4\pi\epsilon^2]$ factor, in which $\epsilon$ is the reaction
> coordinate, as mentioned in:
> Hub, J. S.; de Groot, B. L.; van der Spoel, D. J. Chem. Theory
> Comput. 2010, 6, 3713-3720?
> 
> Many thanks for your comments in advance.
> 
> Regards,
> Alex
> 

-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] PMF using umbrella sampling and Gromacs 5.0

2015-08-06 Thread Jochen Hub

-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] New web server for setup of membrane simulation systems

2015-02-08 Thread Jochen Hub
Dear MD community,

we have set up a web server that automatically builds membrane
simulation systems containing an arbitrary mixture
of different lipids. The server, called MemGen
(memgen.uni-goettingen.de), is not restricted to a specific set of lipid
types, force fields, or MD software. Instead, MemGen works with any
all-atom or united-atom lipid.

The user uploads lipids in one of various file formats (pdb, crd, xyz,
ml2, gro), and the webserver returns a PDB file
of the lipid patch with the requested relative concentrations of
the lipids, the requested number of water molecules
per lipid, and the requested salt content. Counterions are always added.

Please give it a try at:

http://memgen.uni-goettingen.de/

Happy simulating!

The MemGen team at the University of Göttingen

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub
)
-- Performing Test HAS_NO_UNUSED_VARIABLE
-- Performing Test HAS_NO_UNUSED_VARIABLE - Success
-- Check if the system is big endian
-- Searching 16 bit integer
-- Using unsigned short
-- Check if the system is big endian - little endian
-- Looking for inttypes.h
-- Looking for inttypes.h - found
-- Performing Test HAS_NO_UNUSED_PARAMETER
-- Performing Test HAS_NO_UNUSED_PARAMETER - Success
-- Performing Test HAS_NO_DEPRECATED_REGISTER
-- Performing Test HAS_NO_DEPRECATED_REGISTER - Success
-- Configuring done
-- Generating done
-- Build files have been written to: /home/waxs/opt/gmx/5.03-rotmax


-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub


On 12/12/14 at 19:07, Mark Abraham wrote:
 Hi,
 
 I have seen similar behaviour on interesting setups, e.g. where the same
 physical file system has different logical locations, but I don't know
 where the issue is. $(pwd) should be expanded by the shell before cmake
 sees it, so how a wrong path could get into cmake_install.cmake is a
 mystery to me.
 
 Mark
 
 On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu johnny.lu...@gmail.com wrote:

 I'm not sure what happened, but so far when i install, i use full path
 instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.

Thanks, but this is not the issue. As Mark says, the shell expands the
$(pwd). But I have also expanded it myself before - same error.
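
To be explicit, the sequence that fails at the install step is of this kind,
with the install prefix typed out as an absolute path (the paths here are
just examples, not the exact ones used):

cmake /path/to/gromacs-5.0.3 -DCMAKE_INSTALL_PREFIX=/home/waxs/opt/gmx/5.0.3
make -j 8
make install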

Jochen

 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.


-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub


On 12/12/14 at 19:07, Mark Abraham wrote:
 Hi,
 
 I have seen similar behaviour on interesting setups, e.g. where the same
 physical file system has different logical locations, but I don't know
 where the issue is. $(pwd) should be expanded by the shell before cmake
 sees it, so how a wrong path could get into cmake_install.cmake is a
 mystery to me.

This may in fact be the issue. Our computing center has some extra-fancy
distributed file system. And the webserver we are running is on a
virtual machine, so also some kind of distributed thingie...

Hmpf - so is there no solution for that?

Jochen

 
 Mark
 
 On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu johnny.lu...@gmail.com wrote:

 I'm not sure what happened, but so far when i install, i use full path
 instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.


-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub


On 12/12/14 at 19:40, Johnny Lu wrote:
 Oh, and what version of cmake are you using?
 
 If that is too old, you can compile a newer version of cmake and then use
 that.

Hey, that was a good hint!! I used 2.8.12.2 before. I just gave it a try
with the latest 3.0.2 - and it worked! So many thanks.

Mark, maybe it would be good to suggest a more recent cmake version in
general for Gromacs?
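
In case others run into the same problem: building a local, newer cmake
only takes a few minutes, roughly along these lines (a sketch, not the
exact commands used here; version numbers and paths are examples):

tar xzf cmake-3.0.2.tar.gz
cd cmake-3.0.2
./bootstrap --prefix=$HOME/opt/cmake-3.0.2
make && make install
$HOME/opt/cmake-3.0.2/bin/cmake /path/to/gromacs-5.0.3 -DCMAKE_INSTALL_PREFIX=/path/to/install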

Best,
Jochen



 
 On Fri, Dec 12, 2014 at 1:35 PM, Johnny Lu johnny.lu...@gmail.com wrote:

 maybe ... compile it on the head node of the cluster, and hope it has
 local storage?

 fftpack is slow, and letting gromacs build its own fftw3 library is better. I
 don't know if the fftpack code of gromacs is old.

 Maybe

 <the directory where you run cmake>/CMakeFiles/CMakeError.log

 will tell a bit more.


 On Fri, Dec 12, 2014 at 1:24 PM, Jochen Hub j...@gwdg.de wrote:



 On 12/12/14 at 19:07, Mark Abraham wrote:
 Hi,

 I have seen similar behaviour on interesting setups, e.g. where the
 same
 physical file system has different logical locations, but I don't know
 where the issue is. $(pwd) should be expanded by the shell before cmake
 sees it, so how a wrong path could get into cmake_install.cmake is a
 mystery to me.

 This may in fact be the issue. Our computing center has some extra-fancy
 distributed file system. And the webserver we are running is on a
 virtual machine, so also some kind of distributed thingie...

 Hmpf - so is there no solution for that?

 Jochen


 Mark

 On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu johnny.lu...@gmail.com
 wrote:

 I'm not sure what happened, but so far when i install, i use full path
 instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.


 --
 ---
 Dr. Jochen Hub
 Computational Molecular Biophysics Group
 Institute for Microbiology and Genetics
 Georg-August-University of Göttingen
 Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
 Phone: +49-551-39-14189
 http://cmb.bio.uni-goettingen.de/
 ---
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.