Re: [gmx-users] viscosity calculation using g_energy

2018-04-05 Thread Ali Ahmed
Hello David,
I have the same problem. I ran the simulation for 10 ns and saved energies
every 1 ps, but the results were far off and the curves go to negative
values. Please, do you have an idea how to get accurate results?
Thank you
Ali



On Thu, Apr 5, 2018 at 2:12 PM, David van der Spoel 
wrote:

> On 2018-04-05 at 09:19, Jo wrote:
>
>> Hello,
>>
>> I would like to calculate bulk and shear viscosity with GROMACS using
>> g_energy. I have read a number of other emails about this topic, but there
>> still seems to be no conclusive answer on how to use this tool. I use the
>> following command:
>>
>> gmx energy -f  prod_500.edr -vis viscosity.xvg
>>
>
> The Einstein viscosity is usually more accurate. How long is your
> simulation and how often do you save the energies?
>
>
>> From this, a file called 'viscosity.xvg' is produced with data on time
>> (ps), shear viscosity, and bulk viscosity. However, these values
>> fluctuate wildly and are off from the correct viscosity for this model
>> (SPC/E) by at least an order of magnitude. I know the simulation
>> trajectory is correct, as I have matched potential energy, density, and
>> the diffusion coefficient for these runs with the literature. However,
>> the viscosity numbers are far off from the expected values.
>>
>> Any suggestions on what I can try? I assume there must be a way to do it
>> by hand via the pressure tensors. Can someone direct me to how I can
>> practically use the pressure tensors to calculate the viscosity via the
>> Green-Kubo method?
>>
>> Thanks,
>>
>> Shuwen
>>
>>
>
> --
> David van der Spoel, Ph.D., Professor of Biology
> Head of Department, Cell & Molecular Biology, Uppsala University.
> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> http://www.icm.uu.se
>


Re: [gmx-users] anyone have a development version or fork of gromacs that puts bonded interactions on the GPUs?

2018-04-05 Thread Christopher Neale
Dear Kevin:


Perhaps I misstated my goals. I am not saying that GPUs never help. I am
saying that Martini simulations with GPUs should be heavily CPU-bound:
bonded interactions, once an afterthought, are now a dominant factor in
determining throughput. Therefore, Martini simulations should benefit
massively from putting the bonded interactions on the GPUs too.


But I'm really just looking for code if it is available.


Thank you,

Chris.


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Kevin Boyd 

Sent: 05 April 2018 16:46:53
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] anyone have a development version or fork of gromacs 
that puts bonded interactions on the GPUs?

Hi Chris,

My experience has been that GPUs do significantly increase performance
in Martini simulations, perhaps not quite as much as in all-atom
simulations, but typically to at least ~2x the speed of the same system on
CPUs alone. What combination of GROMACS version, mdp options, and hardware
are you running with?

Kevin

On Thu, Apr 5, 2018 at 3:10 PM, Christopher Neale
 wrote:
> Hello,
>
>
> Running Martini simulations with GROMACS does not benefit dramatically from
> the presence of GPUs, presumably because the bonded interactions on the CPUs
> are the bottleneck. Does anyone have even an untested, hacky version of
> GROMACS with the bonded interactions pushed to the GPUs? PME is not used, so
> that should not be the issue.
>
>
> Thank you,
>
> Chris.


Re: [gmx-users] anyone have a development version or fork of gromacs that puts bonded interactions on the GPUs?

2018-04-05 Thread Kevin Boyd
Hi Chris,

My experience has been that GPUs do significantly increase performance
in Martini simulations, perhaps not quite as much as in all-atom
simulations, but typically to at least ~2x the speed of the same system on
CPUs alone. What combination of GROMACS version, mdp options, and hardware
are you running with?

Kevin

On Thu, Apr 5, 2018 at 3:10 PM, Christopher Neale
 wrote:
> Hello,
>
>
> Running Martini simulations with GROMACS does not benefit dramatically from
> the presence of GPUs, presumably because the bonded interactions on the CPUs
> are the bottleneck. Does anyone have even an untested, hacky version of
> GROMACS with the bonded interactions pushed to the GPUs? PME is not used, so
> that should not be the issue.
>
>
> Thank you,
>
> Chris.


Re: [gmx-users] Regarding calculation of Spatial Distribution Function

2018-04-05 Thread Christopher Neale
Mark addressed your second post. Regarding your first post, it looks like that
program's help output was mangled somewhere between 4.6.5 and 5.1.2. Below is
the output from g_spatial in GROMACS 4.6.5, which should give you an idea of
what the help output should say (e.g. use an .xtc input file and not a .tng
input file). I just checked GROMACS 2016.4 and it also has this broken text
about a .tng file. I did not check GROMACS 2018. Since it looks like a failed
search and replace, it is probably worthwhile for somebody to grep the entire
source code to see where else this .tng was introduced in error.



[cneale@seawolf1 ~]$ g_spatial -h
 :-)  G  R  O  M  A  C  S  (-:

   GRoups of Organic Molecules in ACtion for Science

:-)  VERSION 4.6.5  (-:

Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
   Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
 Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2012,2013, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
 of the License, or (at your option) any later version.

  :-)  g_spatial  (-:

DESCRIPTION
---
g_spatial calculates the spatial distribution function and outputs it in a
form that can be read by VMD as Gaussian98 cube format. This was developed
from template.c (GROMACS-3.3). For a system of 32,000 atoms and a 50 ns
trajectory, the SDF can be generated in about 30 minutes, with most of the
time dedicated to the two runs through trjconv that are required to center
everything properly. This also takes a whole bunch of space (3 copies of the
.xtc file). Still, the pictures are pretty and very informative when the
fitted selection is properly made. 3-4 atoms in a widely mobile group (like a
free amino acid in solution) works well, or select the protein backbone in a
stable folded structure to get the SDF of solvent and look at the
time-averaged solvation shell. It is also possible using this program to
generate the SDF based on some arbitrary Cartesian coordinate. To do that,
simply omit the preliminary trjconv steps.
USAGE:
1. Use make_ndx to create a group containing the atoms around which you want
the SDF
2. trjconv -s a.tpr -f a.xtc -o b.xtc -boxcenter tric -ur compact -pbc none
3. trjconv -s a.tpr -f b.xtc -o c.xtc -fit rot+trans
4. run g_spatial on the .xtc output of step #3.
5. Load grid.cube into VMD and view as an isosurface.
Note that systems such as micelles will require trjconv -pbc cluster between
steps 1 and 2
WARNINGS:
The SDF will be generated for a cube that contains all bins that have some
non-zero occupancy. However, the preparatory -fit rot+trans option to trjconv
implies that your system will be rotating and translating in space (in order
that the selected group does not). Therefore the values that are returned
will only be valid for some region around your central group/coordinate that
has full overlap with system volume throughout the entire translated/rotated
system over the course of the trajectory. It is up to the user to ensure that
this is the case.
BUGS:
When the allocated memory is not large enough, a segmentation fault may
occur. This is usually detected and the program is halted prior to the fault
while displaying a warning message suggesting the use of the -nab (Number of
Additional Bins) option. However, the program does not detect all such
events. If you encounter a segmentation fault, run it again with an increased
-nab value.
RISKY OPTIONS:
To reduce the amount of space and time required, you can output only the
coords that are going to be used in the first and subsequent run through
trjconv. However, be sure to set the -nab option to a sufficiently high value
since memory is allocated for cube bins based on the initial coordinates and
the -nab option value.


Option Filename  Type Description

  -s  topol.tpr  InputStructure+mass(db): tpr tpb tpa gro g96 pdb
  -f   traj.xtc  InputTrajectory: xtc trr trj gro g96 pdb cpt
  -n  index.ndx  Input, Opt.  Index file

Option   Type   Value   Description
--

Re: [gmx-users] viscosity calculation using g_energy

2018-04-05 Thread David van der Spoel

On 2018-04-05 at 09:19, Jo wrote:

Hello,

I would like to calculate bulk and shear viscosity with GROMACS using
g_energy. I have read a number of other emails about this topic, but there
still seems to be no conclusive answer on how to use this tool. I use the
following command:

gmx energy -f  prod_500.edr -vis viscosity.xvg


The Einstein viscosity is usually more accurate. How long is your
simulation and how often do you save the energies?
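
For reference, these are the two estimators in textbook form (a sketch of the
standard relations, not GROMACS's exact output conventions; P_xy denotes an
off-diagonal pressure-tensor component, V the box volume, T the temperature):

% Green-Kubo: time integral of the pressure-tensor autocorrelation
\eta_{\mathrm{GK}} = \frac{V}{k_B T} \int_0^{\infty}
  \langle P_{xy}(t_0)\, P_{xy}(t_0 + t) \rangle_{t_0}\, dt

% Einstein (Helfand) form: long-time slope of the squared integral of P_xy
\eta_{\mathrm{E}} = \frac{V}{2 k_B T} \lim_{t \to \infty} \frac{d}{dt}
  \left\langle \left( \int_{t_0}^{t_0+t} P_{xy}(t')\, dt' \right)^{2} \right\rangle_{t_0}

In practice one averages over the independent off-diagonal components, and
both estimators converge slowly, which is why run length and how often the
energies are saved matter so much.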





From this, a file called 'viscosity.xvg' is produced with data on time
(ps), shear viscosity, and bulk viscosity. However, these values fluctuate
wildly and are off from the correct viscosity for this model (SPC/E) by at
least an order of magnitude. I know the simulation trajectory is correct,
as I have matched potential energy, density, and the diffusion coefficient
for these runs with the literature. However, the viscosity numbers are far
off from the expected values.

Any suggestions on what I can try? I assume there must be a way to do it by
hand via the pressure tensors. Can someone direct me to how I can
practically use the pressure tensors to calculate the viscosity via the
Green-Kubo method?

Thanks,

Shuwen




--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se


[gmx-users] anyone have a development version or fork of gromacs that puts bonded interactions on the GPUs?

2018-04-05 Thread Christopher Neale
Hello,


Running Martini simulations with GROMACS does not benefit dramatically from the
presence of GPUs, presumably because the bonded interactions on the CPUs are
the bottleneck. Does anyone have even an untested, hacky version of GROMACS
with the bonded interactions pushed to the GPUs? PME is not used, so that
should not be the issue.


Thank you,

Chris.


[gmx-users] Problem with CUDA

2018-04-05 Thread Borchert, Christopher B ERDC-RDE-ITL-MS Contractor
Hello. I'm taking a working build from a co-worker and trying to add GPU
support on a Cray XC. CMake works but make fails. Both 2016 and 2018 die at the
same point: the linker can't find GROMACS's own CUDA routines.

2016.5:
/opt/cray/pe/craype/2.5.13/bin/CC -march=core-avx2   -O2 -fPIC -dynamic 
-std=c++0x   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast  
-dynamic CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
-Wl,-rpath,/p/work/cots/gromacs-2016.5/build/lib:/opt/nvidia/cudatoolkit8.0/8.0.54_2.3.12_g180d272-2.2/lib64/stubs
 -dynamic ../../lib/libgromacs_mpi.so.2.5.0 -fopenmp -lcudart 
/opt/nvidia/cudatoolkit8.0/8.0.54_2.3.12_g180d272-2.2/lib64/stubs/libnvidia-ml.so
 -lhwloc -lz -ldl -lrt -lm -lfftw3f 
../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
`__cudaRegisterLinkedBinary_59_tmpxft_0001bc78__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_0001bac2__21_gpu_utils_compute_61_cpp1_ii_d70ebee0'
../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_0001b90b__21_cudautils_compute_61_cpp1_ii_24d20763'
../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
`__cudaRegisterLinkedBinary_71_tmpxft_0001c016__21_cuda_version_information_compute_61_cpp1_ii_e35285be'
../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_0001b592__21_nbnxn_cuda_compute_61_cpp1_ii_6e47f057'
../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
`__cudaRegisterLinkedBinary_67_tmpxft_0001b754__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'
collect2: error: ld returned 1 exit status

2018.1:
/opt/cray/pe/craype/2.5.13/bin/CC -march=core-avx2   -O2 -fPIC -dynamic 
-std=c++11   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast  
-dynamic CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
-Wl,-rpath,/p/work/cots/gromacs-2018.1/build/lib 
../../lib/libgromacs_mpi.so.3.1.0 -fopenmp -lm 
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_68a5__21_pme_3dfft_compute_61_cpp1_ii_79dff388'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_67_tmpxft_6621__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_6f47__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_6d70__21_pme_solve_compute_61_cpp1_ii_06051a94'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_59_tmpxft_8da7__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_50_tmpxft_7930__21_pme_compute_61_cpp1_ii_6dbf966c'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_58_tmpxft_7382__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_6b11__21_pme_gather_compute_61_cpp1_ii_a7a2f9c7'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_7f9f__21_cudautils_compute_61_cpp1_ii_25933dd5'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_54_tmpxft_88f9__21_pinning_compute_61_cpp1_ii_5d0f4aae'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_39b7__21_nbnxn_cuda_compute_61_cpp1_ii_f147f02c'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_71_tmpxft_91d4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_8407__21_gpu_utils_compute_61_cpp1_ii_70828085'
collect2: error: ld returned 1 exit status

BUILD INSTRUCTIONS:
module swap PrgEnv-cray PrgEnv-gnu
module swap gcc gcc/5.3.0
export CRAYPE_LINK_TYPE=dynamic

module load cudatoolkit/8.0.54_2.3.12_g180d272-2.2
module load cmake/gcc-6.3.0/3.7.2
module load fftw/3.3.4.11
export BOOST_DIR=/app/unsupported/boost/1.64.0-gcc-6.3.0

export PREFIX=/app/unsupported/gromacs/201x.x
mkdir $PREFIX

cmake ..  \
 -DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE \
 -DCMAKE_C_COMPILER:FILEPATH=`which cc` \
 -DCMAKE_C_FLAGS:STRING="-O2 -fPIC -dynamic" \
 -DCMAKE_CXX_COMPILER:FILEPATH=`which CC` \
 -DCMAKE_CXX_FLAGS:STRING="-O2 -fPIC -dynamic" \
 -DCMAKE_INSTALL_PREFIX:PATH=$PREFIX \
 -DGMX_FFT_LIBRARY=fftw3 \
 -DCMAKE_EXE_LINKER_FLAGS:STRING=-dynamic \
 -DGMX_SIMD=AVX2_256 \
 -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so \
 -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC \
 

[gmx-users] charmm-gui

2018-04-05 Thread m g
Dear Justin,

Can I use the CHARMM-GUI output file for the peptide (.itp file) as an input
file for GROMACS, and, instead of one peptide, place 4 peptides in the bulk
water and near the head groups? In fact, I want to manually extract the
peptide from the final .gro file containing peptide + lipids + ions + water,
and then place four peptides in the bulk water and near the head groups. Is
this possible? What exactly is the workflow? I want to use the same force
field for the peptide and lipid that I obtained from CHARMM-GUI.

Would you please help me?

Thanks,
Ganj


Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-05 Thread Jochen Hub



On 03.04.18 at 19:03, Szilárd Páll wrote:

On Tue, Apr 3, 2018 at 5:10 PM, Jochen Hub  wrote:



On 03.04.18 at 16:26, Szilárd Páll wrote:


On Tue, Apr 3, 2018 at 3:41 PM, Jochen Hub  wrote:



On 29.03.18 at 20:57, Szilárd Páll wrote:


Hi Jochen,

For that particular benchmark I only measured performance with
1, 2, 4, 8, and 16 cores with a few different kinds of GPUs. It would be easy
to do the runs on all possible core counts in increments of 1, but
that won't tell a whole lot more than what the performance of a run
using an E5-2620 v4 CPU (with some GPUs) is at a certain core count. Even
extrapolating from that 2620 to an E5-2630 v4 and expecting to get a
good estimate is tricky (given that the latter has 25% more cores for
the same TDP!), let alone to any 26xx v4 CPU or the current-gen Skylake
chips, which have different performance characteristics.

As Mark notes, there are some mdp options as well as some system
characteristics that will have a strong influence on performance -- if
tens of % is something you consider "strong" (some users are fine to
be within a 2x ballpark :).

What's worth considering is trying to avoid ending up strongly CPU- or
GPU-bound from the start. That may admittedly be a difficult task if
you run e.g. both biased MD with large pull groups and all-bonds
constraints with an AMBER FF on large-ish (>100k atom) systems as well as
vanilla MD with a CHARMM FF on small-ish (<25k atom) systems. On the same
hardware the former will be more prone to be CPU-bound, while the
latter will have a relatively more GPU-heavy workload.

There are many factors that influence the performance of a run, and
therefore giving the one right answer to your question is not really
possible. What I can say is that 7-10 "core-GHz" per fast Pascal GPU is
generally sufficient for "typical" protein simulations to run at >=85%
of peak.




Hi Szilárd,

many thanks, this already helps me a lot. Just to get it 100% clear what you
mean by core-GHz: a 10-core E5-2630 v4 at 2.2 GHz would have 22 core-GHz,
right?



Yes, that's what I was referring to; note that a 2630v4 won't be
running at a 2.2 GHz base clock if you run AVX code ;)



Okay, I didn't know this. What would be the base clock instead with AVX
code?



Short version: it's not easy to find out the details, as Intel conveniently
omits them from the specs, but it's AFAIK 300-400 MHz lower; also note
that "turbo bins" change as a function of the number of cores used (so you
can't just benchmark on a few cores and leave the rest idle). Also, the actual
clock speed (and overall performance) depends on other factors too, so
benchmarking and extrapolation might require considering those as well.

Let me know if you are interested in more details.
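
To make the rule of thumb concrete, here is a minimal back-of-the-envelope
sketch in Python; the ~0.3 GHz AVX clock reduction is an assumed value for
illustration, not a measured number, while the core counts and base clocks are
the published specs for these two CPUs:

# Rough "core-GHz" estimate against the 7-10 core-GHz per fast Pascal GPU
# rule of thumb mentioned above. The AVX clock offset is an assumption.

def core_ghz(n_cores, base_clock_ghz, avx_offset_ghz=0.3):
    """Aggregate core-GHz available when running AVX-heavy code."""
    return n_cores * (base_clock_ghz - avx_offset_ghz)

for name, cores, clock in [("E5-2620 v4", 8, 2.1), ("E5-2630 v4", 10, 2.2)]:
    cg = core_ghz(cores, clock)
    verdict = "should feed one fast GPU" if cg >= 7 else "likely CPU-bound"
    print(f"{name}: ~{cg:.1f} core-GHz -> {verdict}")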


Hi Szilárd,

many thanks, that helps a lot!

Best,
Jochen



--
Szilárd






Thanks,
Jochen




Cheers,
--
Szilárd


On Wed, Mar 28, 2018 at 4:31 AM, Mark Abraham 
wrote:



Hi,

On Tue, Mar 27, 2018 at 6:43 PM Jochen Hub  wrote:


Dear Gromacs community, dear Mark,

Mark showed in the webinar today that having more than 8 Xeon
E5-26XXv4
cores does not help when using a GTX 1080Ti and PME on the GPU.



... for that particular simulation system.



Unfortunately, there were no data points between 4 and 8 CPU cores,
hence it was not clear at which core count the performance actually levels
off. With a GTX 1080 (not Ti) I once found that having more than 5 Xeon
cores does not help when not using UB potentials, but I don't have a
1080Ti at hand to test for that.



Those data points may not have been run. Szilard might have the data -
this
was GLIC 2fs comparing 1080 with 1080Ti from the recent plots he
shared.



So my questions are:

- At which number of E5-26XXv4 cores does the performance for common
systems level off with a 1080Ti for your test system (GLIC)?

- Does the answer depend strongly on the mdp settings (in particular
on
the LJ cutoff)?



A longer LJ cutoff (e.g. from a different force field) will certainly require
more non-bonded work, so then fewer CPU cores would be needed to do the
remaining non-offloaded work. However, any sweet spot for a particular .tpr
would be highly dependent on other effects, such as the ratio of solvent
(which typically has less LJ and a simpler update) to solute, or the density
of dihedral or U-B interactions. And doing pulling or FEP is very different
again. The sweet spot for the next project will be elsewhere, sadly.

This would help us a lot when choosing the appropriate CPU for a
1080Ti.




Many thanks for any suggestions,
Jochen

--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany



Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/

[gmx-users] viscosity calculation using g_energy

2018-04-05 Thread Jo
Hello,

I would like to calculate bulk and shear viscosity with GROMACS using
g_energy. I have read a number of other emails about this topic, but there
still seems to be no conclusive answer on how to use this tool. I use the
following command:

gmx energy -f  prod_500.edr -vis viscosity.xvg

From this, a file called 'viscosity.xvg' is produced with data on time
(ps), shear viscosity, and bulk viscosity. However, these values fluctuate
wildly and are off from the correct viscosity for this model (SPC/E) by at
least an order of magnitude. I know the simulation trajectory is correct,
as I have matched potential energy, density, and the diffusion coefficient
for these runs with the literature. However, the viscosity numbers are far
off from the expected values.

Any suggestions on what I can try? I assume there must be a way to do it by
hand via the pressure tensors. Can someone direct me to how I can
practically use the pressure tensors to calculate the viscosity via the
Green-Kubo method?

Thanks,

Shuwen
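
For the Green-Kubo route, a minimal sketch is below. It assumes the
off-diagonal pressure components were first written to pressure.xvg with
something like "gmx energy -f prod_500.edr -o pressure.xvg", selecting the
Pres-XY, Pres-XZ and Pres-YZ terms (in bar, saved every dt_ps), and that the
box volume and temperature are known. The file name, column layout, cutoff
time, and the lack of block averaging or tail fitting are simplifications for
illustration, not a recommended production protocol:

# Green-Kubo shear viscosity from off-diagonal pressure-tensor components.
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K
BAR = 1.0e5                # Pa per bar
PS = 1.0e-12               # s per ps

def acf(x):
    """Autocorrelation <x(0) x(t)> via FFT, with unbiased normalization."""
    n = len(x)
    f = np.fft.rfft(x - x.mean(), n=2 * n)
    c = np.fft.irfft(f * np.conj(f))[:n]
    return c / np.arange(n, 0, -1)

def shear_viscosity(xvg, volume_nm3, temperature_k, dt_ps, t_cut_ps):
    # Assumed column layout: time, Pres-XY, Pres-XZ, Pres-YZ (bar)
    data = np.loadtxt(xvg, comments=("#", "@"))
    p_offdiag = data[:, 1:4] * BAR                    # bar -> Pa
    n_cut = int(t_cut_ps / dt_ps)
    # Average the ACFs of the three independent off-diagonal components
    c = np.mean([acf(p)[:n_cut] for p in p_offdiag.T], axis=0)
    integral = np.trapz(c, dx=dt_ps * PS)             # Pa^2 * s
    volume_m3 = volume_nm3 * 1e-27
    return volume_m3 / (KB * temperature_k) * integral    # eta in Pa*s

# Hypothetical call: eta = shear_viscosity("pressure.xvg", volume_nm3=27.0,
#                         temperature_k=298.15, dt_ps=1.0, t_cut_ps=20.0)

The running integral should be inspected for a plateau before the noise takes
over, and averaging over several independent runs helps the statistics.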