Re: [gmx-users] GROMACS mailing-list will move to a forum

2020-05-07 Thread Paul bauer

Hello gmx users!

As announced in my previous message at the beginning of the week, we 
will be activating the new discussion forum today.


At the same time, the mailing list will move to read-only mode, so 
please don't be surprised when it is no longer possible to submit new 
messages to it. :)


As a reminder, you can sign up to the forum here: 
https://gromacs.bioexcel.eu/


Again, we are looking forward to interacting with the community in this new 
way, and we hope it will make it easier for you to solve issues that you 
encounter while using GROMACS.


Cheers

Paul

On 04/05/2020 15:51, Paul bauer wrote:

Hello again @ all gmx-users!

A month has passed and we have moved forward with our work to migrate the 
user mailing list to the discussion forum at 
https://gromacs.bioexcel.eu/. The discussion forum will open on Friday, 8 
May 2020.


We are now ready to accept sign-ups to the forum at 
https://gromacs.bioexcel.eu/. We will start allowing new posts on Friday, 
to give people some time to get used to the forum settings. At the same 
time (Friday), the gmx-users mailing list will be set to read-only mode.


We are looking forward to engaging with you in this new format and 
hope it will help the community to solve user issues with GROMACS.


Cheers

Paul

On 01/04/2020 14:06, Paul bauer wrote:

Hello gmx-users!

We have been working behind the scenes for the last few months on some 
changes to the organization of the GROMACS project; one of them is 
switching our gmx-users mailing list to a forum.


We are continually rethinking how we can best engage with you 
and how people can get the best help when it comes to questions about 
using GROMACS for different scientific problems. While the user list 
has worked well for this in the past, it has also shown some 
deficiencies, mostly in finding already existing questions and the 
correct answers to them. To hopefully change this in the future, we 
are now in the process of moving the existing list to a Discourse 
forum instead, which will be made public at the beginning of May. At 
the switching point, the current list will become read-only (and the 
archive will be kept alive), so people will still be able to search it 
in the future for existing answers, but won't be able to post any new 
questions. The forum will still support purely email-based 
conversation if you prefer that kind of interaction, but it will also 
allow ranking of responses and make it easier to see which answers 
come from whom.


You will have the opportunity to sign up for the forum when it goes 
live and I'll send a reminder for it when the time comes.


Cheers

Paul





--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS mailing-list will move to a forum

2020-05-04 Thread Paul bauer

Hello again @ all gmx-users!

A month has passed and we have moved forward with our work to migrate the 
user mailing list to the discussion forum at 
https://gromacs.bioexcel.eu/. The discussion forum will open on Friday, 8 
May 2020.


We are now ready to accept sign-ups to the forum at 
https://gromacs.bioexcel.eu/. We will start allowing new posts on Friday, 
to give people some time to get used to the forum settings. At the same 
time (Friday), the gmx-users mailing list will be set to read-only mode.


We are looking forward to engaging with you in this new format and hope 
it will help the community to solve user issues with GROMACS.


Cheers

Paul

On 01/04/2020 14:06, Paul bauer wrote:

Hello gmx-users!

We have been working behind the scenes for the last few months on some 
changes to the organization of the GROMACS project; one of them is 
switching our gmx-users mailing list to a forum.


We are continually rethinking how we can best engage with you 
and how people can get the best help when it comes to questions about 
using GROMACS for different scientific problems. While the user list 
has worked well for this in the past, it has also shown some 
deficiencies, mostly in finding already existing questions and the 
correct answers to them. To hopefully change this in the future, we 
are now in the process of moving the existing list to a Discourse 
forum instead, which will be made public at the beginning of May. At 
the switching point, the current list will become read-only (and the 
archive will be kept alive), so people will still be able to search it 
in the future for existing answers, but won't be able to post any new 
questions. The forum will still support purely email-based 
conversation if you prefer that kind of interaction, but it will also 
allow ranking of responses and make it easier to see which answers 
come from whom.


You will have the opportunity to sign up for the forum when it goes 
live and I'll send a reminder for it when the time comes.


Cheers

Paul



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594



Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-27 Thread Jonathan D. Halverson
Hi Szilárd,

Our OS is RHEL 7.6.

Thank you for your test results. It's nice to see consistent results on a 
POWER9 system.

Your suggestion of allocating the whole node was a good one. I did this in two 
ways. The first was to bypass the Slurm scheduler by ssh-ing to an empty node 
and running the benchmark. The second was through Slurm using the --exclusive 
directive (which allocates the entire node independent of job size). In both 
cases, which used 32 hardware threads and one V100 GPU for ADH (PME, cubic, 
40k steps), the performance was about 132 ns/day, which is significantly 
better than the 90 ns/day from before (without --exclusive). Links to the 
md.log files are below. Here is the Slurm script with --exclusive:

--
#!/bin/bash
#SBATCH --job-name=gmx   # create a short name for your job
#SBATCH --nodes=1# node count
#SBATCH --ntasks=1   # total number of tasks across all nodes
#SBATCH --cpus-per-task=32   # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem=8G # memory per node (4G per cpu-core is default)
#SBATCH --time=00:10:00  # total run time limit (HH:MM:SS)
#SBATCH --gres=gpu:1 # number of gpus per node
#SBATCH --exclusive  # TASK AFFINITIES SET CORRECTLY BUT ENTIRE NODE ALLOCATED TO JOB

module purge
module load cudatoolkit/10.2

BCH=../adh_cubic
gmx grompp -f $BCH/pme_verlet.mdp -c $BCH/conf.gro -p $BCH/topol.top -o bench.tpr
srun gmx mdrun -nsteps 4 -pin on -ntmpi $SLURM_NTASKS -ntomp $SLURM_CPUS_PER_TASK -s bench.tpr
--

Here are the log files:

md.log with --exclusive:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.with-exclusive

md.log without --exclusive:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.without-exclusive

Szilárd, what is your reading of these two files?

This is a shared cluster so I can't use --exclusive for all jobs. Our nodes 
have four GPUs and 128 hardware threads (SMT4 so 32 cores over 2 sockets). Any 
thoughts on how to make a job behave like it is being run with --exclusive? The 
task affinities are apparently not being set properly in that case.

To solve this I tried experimenting with the --cpu-bind settings. When 
--exclusive is not used, I find a slight performance gain by using 
--cpu-bind=cores:
srun --cpu-bind=cores gmx mdrun -nsteps 4 -pin on -ntmpi $SLURM_NTASKS -ntomp $SLURM_CPUS_PER_TASK -s bench.tpr

In this case it still gives "NOTE: Thread affinity was not set" and performance 
is still poor.
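
One further thing I plan to try (a sketch only; it assumes the allocation starts at 
logical CPU 0 and uses the consecutive SMT4 numbering, which I have not verified for 
a shared-node job) is giving mdrun an explicit stride and offset so that it pins one 
thread per physical core of the allocation:

# 8 physical cores = 32 hardware threads on SMT4; one OpenMP thread per physical core
srun --cpu-bind=cores gmx mdrun -ntmpi 1 -ntomp 8 \
 -pin on -pinstride 4 -pinoffset 0 -s bench.tpr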

The --exclusive result suggests that the failed hardware unit test can be 
ignored, I believe.

Here's a bit about our Slurm configuration:
$ grep -i affinity /etc/slurm/slurm.conf
TaskPlugin=affinity,cgroup

ldd shows that gmx is linked against libhwloc.so.5.

I have not heard from my contact at ORNL. All I can find online is that they 
offer GROMACS 5.1 (https://www.olcf.ornl.gov/software_package/gromacs/) and 
apparently nothing special is done about thread affinities.

Jon



From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Szilárd Páll 

Sent: Friday, April 24, 2020 6:06 PM
To: Discussion list for GROMACS users 
Cc: gromacs.org_gmx-users@maillist.sys.kth.se 

Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Hi,

Affinity settings on the Talos II with Ubuntu 18.04 kernel 5.0 work fine.
I get threads pinned where they should be (hwloc confirmed) and consistent
results. I also get reasonable thread placement even without pinning (i.e.
the kernel scatters first until #threads <= #hwthreads). I see only a minor
penalty to not pinning -- not too surprising given that I have a single
NUMA node and the kernel is doing its job.

Here are my quick test results, run on an 8-core Talos II Power9 + a
GPU, using the adh_cubic input:

$ grep Perf *.log
test_1x1_rep1.log:Performance:   16.617
test_1x1_rep2.log:Performance:   16.479
test_1x1_rep3.log:Performance:   16.520
test_1x2_rep1.log:Performance:   32.034
test_1x2_rep2.log:Performance:   32.389
test_1x2_rep3.log:Performance:   32.340
test_1x4_rep1.log:Performance:   62.341
test_1x4_rep2.log:Performance:   62.569
test_1x4_rep3.log:Performance:   62.476
test_1x8_rep1.log:Performance:   97.049
test_1x8_rep2.log:Performance:   96.653
test_1x8_rep3.log:Performance:   96.889


This seems to point towards some issue with the OS or setup on the IBM
machines you have -- and the unit test error may be one of the symptoms of
it (as it suggests something is off with the hardware topology and a NUMA
node is missing from it). I'd still suggest checking whether a full node
allocation, with all threads, memory, etc. passed to the job, results in
successful affinity settings i) in mdrun ii) in some other tool.

Re: [gmx-users] gromacs installation (2020&2019)

2020-04-27 Thread Netaly Khazanov
Thank you for trying to help me. I will take into account all your comments.

On Sun, Apr 26, 2020 at 6:18 PM lazaro monteserin <
lamonteserincastan...@gmail.com> wrote:

> Hey Netaly, are you trying to install Gromacs 2019 and 2020 at the same
> time? If not a couple of things to keep in mind.  Be sure before installing
> Gromacs you have all the utilities that Gromacs use, compilers, etc and
> that their versions are supported. I saw you are trying to install it for
> GPU, for that you need to install first the cuda toolkit versión for your
> linux. Now if you are installing Gromacs in an Ubuntu virtual machine on
> windows this is going to be a big problem. So for this case my
> recommendation is install it without GPU. Follow the instructions for
> installation in the website. Should work perfectly. Kindly, Lazaro
>
> On Sun, Apr 26, 2020 at 12:07 PM Yu Du  wrote:
>
> > Hi Netaly,
> >
> > Although I do not know the exact reason of the failure, after skimming
> > through your command, I think that you probably need to assign absolute
> > path to CMAKE_INSTALL_PREFIX and have access to the internet for
> > downloading REGRESSIONTEST and FFTW package.
> >
> > If you are new to GROMACS, I recommend installation from simple case,
> such
> > as only CPU no GPU. Only after successfully installing CPU only version
> > GROMACS, run to the next level CPU+GPU. This step-by-step installation
> > practice can give you a feeling of choosing CMake options.
> >
> > Cheers,
> >
> > Du, Yu
> > PhD Student,
> > Shanghai Institute of Organic Chemistry
> > 345 Ling Ling Rd., Shanghai, China.
> > Zip: 200032, Tel: (86) 021 5492 5275
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Netaly
> > Khazanov 
> > Sent: Sunday, April 26, 2020 15:38
> > To: Discussion list for GROMACS users 
> > Subject: [gmx-users] gromacs installation (2020&2019)
> >
> > Dear All,
> > I am trying to install gromacs 2020 and 2019 versions on CentOS release
> > 6.10 (Final) linux system.
> > I passed throuht cmake compilation. Using command
> > cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
> -DGMX_GPU=on
> > -DCMAKE_INSTALL_PREFIX=gromacs2020
> > -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.0 -DGMX_FFT_LIBRARY=fftw3
> > -DCMAKE_C_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/cc
> > -DCMAKE_CXX_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++
> > -DCUDA_HOST_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++
> > I've used gcc 5 version (tried also 6 version)
> >
> > However, I am struggling through make execution :
> > in gromacs 2019 -
> >
> > [ 37%] Built target libgromacs_generated
> > [ 37%] Built target libgromacs_external
> > Scanning dependencies of target gpu_utilstest_cuda
> > [ 37%] Linking CXX shared library
> ../../../../lib/libgpu_utilstest_cuda.so
> > [ 37%] Built target gpu_utilstest_cuda
> >
> > in gromacs 2020-
> > [ 27%] Built target linearalgebra
> > [ 27%] Built target scanner
> > [ 27%] Built target tng_io_obj
> > [ 27%] Built target modularsimulator
> >
> > It just stuck on the line and doesn't continue to run.
> >
> > Any suggestions will be appreciated.
> > Thanks in advance.
> >
> >
> > --
> > Netaly



-- 
Netaly

Re: [gmx-users] gromacs installation (2020&2019)

2020-04-26 Thread lazaro monteserin
Hey Netaly, are you trying to install Gromacs 2019 and 2020 at the same
time? If not, a couple of things to keep in mind. Before installing
Gromacs, be sure you have all the utilities that Gromacs uses (compilers,
etc.) and that their versions are supported. I saw you are trying to
install it with GPU support; for that you first need to install the CUDA
toolkit version for your Linux distribution. If you are installing Gromacs
in an Ubuntu virtual machine on Windows, that is going to be a big problem,
so in that case my recommendation is to install it without GPU support.
Follow the installation instructions on the website and it should work
perfectly. Kindly, Lazaro

On Sun, Apr 26, 2020 at 12:07 PM Yu Du  wrote:

> Hi Netaly,
>
> Although I do not know the exact reason of the failure, after skimming
> through your command, I think that you probably need to assign absolute
> path to CMAKE_INSTALL_PREFIX and have access to the internet for
> downloading REGRESSIONTEST and FFTW package.
>
> If you are new to GROMACS, I recommend installation from simple case, such
> as only CPU no GPU. Only after successfully installing CPU only version
> GROMACS, run to the next level CPU+GPU. This step-by-step installation
> practice can give you a feeling of choosing CMake options.
>
> Cheers,
>
> Du, Yu
> PhD Student,
> Shanghai Institute of Organic Chemistry
> 345 Ling Ling Rd., Shanghai, China.
> Zip: 200032, Tel: (86) 021 5492 5275
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Netaly
> Khazanov 
> Sent: Sunday, April 26, 2020 15:38
> To: Discussion list for GROMACS users 
> Subject: [gmx-users] gromacs installation (2020&2019)
>
> Dear All,
> I am trying to install gromacs 2020 and 2019 versions on CentOS release
> 6.10 (Final) linux system.
> I passed throuht cmake compilation. Using command
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
> -DCMAKE_INSTALL_PREFIX=gromacs2020
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.0 -DGMX_FFT_LIBRARY=fftw3
> -DCMAKE_C_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/cc
> -DCMAKE_CXX_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++
> -DCUDA_HOST_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++
> I've used gcc 5 version (tried also 6 version)
>
> However, I am struggling through make execution :
> in gromacs 2019 -
>
> [ 37%] Built target libgromacs_generated
> [ 37%] Built target libgromacs_external
> Scanning dependencies of target gpu_utilstest_cuda
> [ 37%] Linking CXX shared library ../../../../lib/libgpu_utilstest_cuda.so
> [ 37%] Built target gpu_utilstest_cuda
>
> in gromacs 2020-
> [ 27%] Built target linearalgebra
> [ 27%] Built target scanner
> [ 27%] Built target tng_io_obj
> [ 27%] Built target modularsimulator
>
> It just stuck on the line and doesn't continue to run.
>
> Any suggestions will be appreciated.
> Thanks in advance.
>
>
> --
> Netaly

Re: [gmx-users] gromacs installation (2020&2019)

2020-04-26 Thread Yu Du
Hi Netaly,

Although I do not know the exact reason for the failure, after skimming through 
your command I think that you probably need to give an absolute path for 
CMAKE_INSTALL_PREFIX and have internet access for downloading the 
REGRESSIONTEST and FFTW packages.

If you are new to GROMACS, I recommend starting the installation from the simplest 
case, such as CPU only with no GPU. Only after successfully installing the CPU-only 
version of GROMACS, move on to the next level, CPU+GPU. This step-by-step 
installation practice can give you a feel for choosing the CMake options.
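
As a rough sketch of that simplest first step (the install path below is only a 
placeholder; adjust it for your machine):

# CPU-only configuration with bundled FFTW; give an absolute install path
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=OFF \
 -DCMAKE_INSTALL_PREFIX=/home/yourname/gromacs-2020-cpu
make -j 4 && make check && make install
# once this works, reconfigure with -DGMX_GPU=ON and the CUDA-related options to add GPU support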

Cheers,

Du, Yu
PhD Student,
Shanghai Institute of Organic Chemistry
345 Ling Ling Rd., Shanghai, China.
Zip: 200032, Tel: (86) 021 5492 5275

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Netaly 
Khazanov 
Sent: Sunday, April 26, 2020 15:38
To: Discussion list for GROMACS users 
Subject: [gmx-users] gromacs installation (2020&2019)

Dear All,
I am trying to install the gromacs 2020 and 2019 versions on a CentOS release
6.10 (Final) Linux system.
The cmake configuration step completes; I used the command:
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
-DCMAKE_INSTALL_PREFIX=gromacs2020
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.0 -DGMX_FFT_LIBRARY=fftw3
-DCMAKE_C_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/cc
-DCMAKE_CXX_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++
-DCUDA_HOST_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++
I used gcc 5 (and also tried gcc 6).

However, the build gets stuck during make:
in gromacs 2019 -

[ 37%] Built target libgromacs_generated
[ 37%] Built target libgromacs_external
Scanning dependencies of target gpu_utilstest_cuda
[ 37%] Linking CXX shared library ../../../../lib/libgpu_utilstest_cuda.so
[ 37%] Built target gpu_utilstest_cuda

in gromacs 2020-
[ 27%] Built target linearalgebra
[ 27%] Built target scanner
[ 27%] Built target tng_io_obj
[ 27%] Built target modularsimulator

It just gets stuck on that line and doesn't continue to run.

Any suggestions would be appreciated.
Thanks in advance.


--
Netaly


Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Alex
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin Boyd 
Sent: Thursday, April 23, 2020 9:08 PM
To: gmx-us...@gromacs.org 
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Hi,

Can you post the full log for the Intel system? I typically find the real 
cycle and time accounting section a better place to start debugging 
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful 
analysis, and these points may apply to both systems so may not be your 
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint 
calculation, which is unusual. This can be reduced 2 ways - first, by 
adding more CPU cores. It doesn't make a ton of sense to benchmark on one 
core if your applications will use more. Second, if you upgrade to Gromacs 
2020 you can probably put the constraint calculation on the GPU with 
-update GPU.
* The Power system log has this line:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating that threads perhaps were not actually pinned. Try adding -pinoffset 0 
(or some other core) to specify where you want the process pinned.

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:


*Message sent from a system outside of UConn.*


We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an 
IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL 
7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel 
nodes. Everything below is about the POWER9/V100 node.

We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1 
CPU-core and 1 GPU (
ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and 
found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102 
ns/day. The difference in performance is roughly the same for the larger 
ADH benchmark and when different numbers of CPU-cores are used. GROMACS is 
always underperforming on our POWER9/V100 nodes. We have pinning turned on 
(see Slurm script at bottom).

Below is our build procedure on the POWER9/V100 node:

version_gmx=2019.6
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
tar zxvf gromacs-${version_gmx}.tar.gz
cd gromacs-${version_gmx}
mkdir build && cd build

module purge
module load rh/devtoolset/7
module load cudatoolkit/10.2

OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"

cmake3 .. -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
-DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
-DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
-DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \
-DGMX_BUILD_OWN_FFTW=ON \
-DGMX_GPU=ON -DGMX_CUDA_TARGET_SM=70 \
-DGMX_OPENMP_MAX_THREADS=128 \
-DCMAKE_INSTALL_PREFIX=$HOME/.local \
-DGMX_COOL_QUOTES=OFF -DREGRESSIONTEST_DOWNLOAD=ON

make -j 10
make check
make install

45 of the 46 tests pass with the exception being HardwareUnitTests. There 
are several posts about this and apparently it is not a concern. The full 
build log is here:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log


Here is more info about our POWER9/V100 node:

$ lscpu
Architecture:  ppc64le
Byte Order:Little Endian
CPU(s):128
On-line CPU(s) list:   0-127
Thread(s) per core:4
Core(s) per socket:16
Socket(s): 2
NUMA node(s):  6
Model: 2.3 (pvr 004e 1203)
Model name:POWER9, altivec supported
CPU max MHz:   3800.
CPU min MHz:   2300.

You see that we have 4 hardware threads per physical core. If we use 4 
hardware threads on the RNASE benchmark instead of 1 the performance goes 
to 119 ns/day which is still about 20% less than the Broadwell/P100 value.
When using multiple CPU-cores on the POWER9/V100 there is significant 
variation in the execution time of the code.

There are four GPUs per POWER9/V100 node:

$ nvidia-smi -q
Driver Version  : 440.33.01
CUDA Version: 10.2
GPU 0004:04:00.0
  Product Name: Tesla V100-SXM2-32GB

The GPUs have been shown to perform as expected on other applications.




The following lines are found in md.log for the POWER9/V100 run:

Overriding thread affinity set outside gmx mdrun
Pinning threads with an auto-selected logical core stride of 128
NOTE: Thread affinity was not set.

The full md.log is available here:


https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log




Below are the MegaFlops Accounting for the POWER9/V100 versus
Broadwell/P100:

 IBM POWER9 WITH NVIDIA V100 
Computing:   M-Number M-Flops  % Flops
-----------------------------------------------------------------------------
   Pair Se

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
Hi,

Affinity settings on the Talos II with Ubuntu 18.04 kernel 5.0 work fine.
I get threads pinned where they should be (hwloc confirmed) and consistent
results. I also get reasonable thread placement even without pinning (i.e.
the kernel scatters first until #threads <= #hwthreads). I see only a minor
penalty to not pinning -- not too surprising given that I have a single
NUMA node and the kernel is doing its job.

Here are my quick test results, run on an 8-core Talos II Power9 + a
GPU, using the adh_cubic input:

$ grep Perf *.log
test_1x1_rep1.log:Performance:   16.617
test_1x1_rep2.log:Performance:   16.479
test_1x1_rep3.log:Performance:   16.520
test_1x2_rep1.log:Performance:   32.034
test_1x2_rep2.log:Performance:   32.389
test_1x2_rep3.log:Performance:   32.340
test_1x4_rep1.log:Performance:   62.341
test_1x4_rep2.log:Performance:   62.569
test_1x4_rep3.log:Performance:   62.476
test_1x8_rep1.log:Performance:   97.049
test_1x8_rep2.log:Performance:   96.653
test_1x8_rep3.log:Performance:   96.889


This seems to point towards some issue with the OS or setup on the IBM
machines you have -- and the unit test error may be one of the symptoms of
it (as it suggests something is off with the hardware topology and a NUMA
node is missing from it). I'd still suggest checking whether a full node
allocation, with all threads, memory, etc. passed to the job, results in
successful affinity settings i) in mdrun ii) in some other tool.
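
(Just to illustrate what I mean by checking in some other tool -- a generic sketch,
assuming the standard util-linux and hwloc command-line utilities are available on
the node:)

# while mdrun is running, print the CPU list the kernel actually allows for the process
taskset -cp $(pgrep -f 'gmx mdrun' | head -n 1)
# or, with hwloc installed, show the per-thread bindings
hwloc-ps -t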

Please update this thread if you have further findings.

Cheers,
--
Szilárd


On Fri, Apr 24, 2020 at 10:52 PM Szilárd Páll 
wrote:

>
> The following lines are found in md.log for the POWER9/V100 run:
>>
>> Overriding thread affinity set outside gmx mdrun
>> Pinning threads with an auto-selected logical core stride of 128
>> NOTE: Thread affinity was not set.
>>
>> The full md.log is available here:
>> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log
>
>
> I glanced over that at first, will see if I can reproduce it, though I
> only have access to a Raptor Talos, not an IBM machine with Ubuntu.
>
> What OS are you using?
>
>

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
> The following lines are found in md.log for the POWER9/V100 run:
>
> Overriding thread affinity set outside gmx mdrun
> Pinning threads with an auto-selected logical core stride of 128
> NOTE: Thread affinity was not set.
>
> The full md.log is available here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log


I glanced over that at first, will see if I can reproduce it, though I only
have access to a Raptor Talos, not an IBM machine with Ubuntu.

What OS are you using?




Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
On Fri, Apr 24, 2020 at 5:55 AM Alex  wrote:

> Hi Kevin,
>
> We've been having issues with Power9/V100 very similar to what Jon
> described and basically settled on what I believe is sub-par
> performance. We tested it on systems with ~30-50K particles and threads
> simply cannot be pinned.


What does that mean, and how did you verify it?
The Linux kernel can in general set affinities on ppc64el, whether that's
requested by mdrun or some other tool, so if you have observed that the
affinity mask is not respected (or does not change), that is more likely an
OS / setup issue, I'd think.

What is different compared to x86 is that the hardware thread layout is
different on Power9 (with default Linux kernel configs) and hardware
threads are exposed as consecutive "CPUs" by the OS rather than strided by
#cores.

I could try to sum up some details on how to set affinities (with mdrun or
external tools), if that is of interest. However, it really should be
something that's possible to do even through the job scheduler (plus a
reasonable system configuration).
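
(To give a flavour of both routes -- a sketch only, with core numbers that assume the
consecutive SMT4 numbering described above rather than anything verified on your nodes:)

# mdrun-internal pinning: one thread per physical core on a 32-core SMT4 node
gmx mdrun -ntmpi 1 -ntomp 32 -pin on -pinstride 4 -pinoffset 0 -s topol.tpr
# external pinning: bind to every 4th logical CPU and tell mdrun to leave affinities alone
taskset -c 0-127:4 gmx mdrun -ntmpi 1 -ntomp 32 -pin off -s topol.tpr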


> As far as Gromacs is concerned, our brand-new
> Power9 nodes operate as if they were based on Intel CPUs (two threads
> per core)


Unless the hardware thread layout has been changed, that's perhaps not the
case, see above.


> and zero advantage of IBM parallelization is being taken.
>

You mean the SMT4?


> Other users of the same nodes reported similar issues with other
> software, which to me suggests that our sysadmins don't really know how
> to set these nodes up.
>
> At this point, if someone could figure out a clear set of build
> instructions in combination with slurm/mdrun inputs, it would be very
> much appreciated.
>

Have you checked the public documentation on ORNL's sites? GROMACS has been
used successfully on Summit. What about IBM support?

--
Szilárd


>
> Alex
>
> On 4/23/2020 9:37 PM, Kevin Boyd wrote:
> > I'm not entirely sure how thread-pinning plays with slurm allocations on
> > partial nodes. I always reserve the entire node when I use thread
> pinning,
> > and run a bunch of simulations by pinning to different cores manually,
> > rather than relying on slurm to divvy up resources for multiple jobs.
> >
> > Looking at both logs now, a few more points
> >
> > * Your benchmarks are short enough that little things like cores spinning
> > up frequencies can matter. I suggest running longer (increase nsteps in
> the
> > mdp or at the command line), and throwing away your initial benchmark
> data
> > (see -resetstep and -resethway) to avoid artifacts
> > * Your benchmark system is quite small for such a powerful GPU. I might
> > expect better performance running multiple simulations per-GPU if the
> > workflows being run can rely on replicates, and a larger system would
> > probably scale better to the V100.
> > * The P100/intel system appears to have pinned cores properly, it's
> > unclear whether it had a real impact on these benchmarks
> > * It looks like the CPU-based computations were the primary contributors
> to
> > the observed difference in performance. That should decrease or go away
> > with increased core counts and shifting the update phase to the GPU. It
> may
> > be (I have no prior experience to indicate either way) that the intel
> cores
> > are simply better on a 1-1 basis than the Power cores. If you have 4-8
> > cores per simulation (try -ntomp 4 and increasing the allocation of your
> > slurm job), the individual core performance shouldn't matter too
> > much, you're just certainly bottlenecked on one CPU core per GPU, which
> can
> > emphasize performance differences..
> >
> > Kevin
> >
> > On Thu, Apr 23, 2020 at 6:43 PM Jonathan D. Halverson <
> > halver...@princeton.edu> wrote:
> >
> >> *Message sent from a system outside of UConn.*
> >>
> >>
> >> Hi Kevin,
> >>
> >> md.log for the Intel run is here:
> >>
> >>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100
> >>
> >> Thanks for the info on constraints with 2020. I'll try some runs with
> >> different values of -pinoffset for 2019.6.
> >>
> >> I know a group at NIST is having the same or similar problems with
> >> POWER9/V100.
> >>
> >> Jon
> >> 
> >> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> >> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
> >> Boyd 
> >> Sent: Thursday, April 23, 2020 9:08 PM
> >> To: gmx-us...@gromacs.org 
> >> Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Jonathan D. Halverson
I cannot force the pinning via GROMACS so I will look at what can be done with 
hwloc.
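
(As a sketch of the kind of thing I mean -- the invocation below is illustrative, assumes 
the hwloc command-line tools are installed, and the core range is arbitrary:)

# bind the whole mdrun process to 32 physical cores via hwloc and disable mdrun's own pinning
hwloc-bind core:0-31 -- gmx mdrun -ntmpi 1 -ntomp 32 -pin off -s bench.tpr
# afterwards, verify what binding was actually applied
hwloc-ps -t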

On the POWER9 the hardware appears to be detected correctly (only the Intel 
node prints a note):
Running on 1 node with total 128 cores, 128 logical cores, 1 compatible GPU

But during the build it fails the HardwareUnitTests:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log#L3338


Here are more benchmarks based on Kevin and Szilárd's suggestions:

ADH (134177 atoms, 
ftp://ftp.gromacs.org/pub/benchmarks/ADH_bench_systems.tar.gz)
2019.6, PME and cubic box
nsteps = 4

Intel Broadwell-NVIDIA P100
ntomp (rate, wall time)
1 (21 ns/day, 323 s)
4 (56 ns/day, 123 s)
8 (69 ns/day, 100 s)

IBM POWER9-NVIDIA V100
ntomp (rate, wall time)
 1 (14 ns/day, 500 s)
 1 (14 ns/day, 502 s)
 1 (14 ns/day, 510 s)
 4 (19 ns/day, 357 s)
 4 (17 ns/day, 397 s)
 4 (20 ns/day, 346 s)
 8 (30 ns/day, 232 s)
 8 (24 ns/day, 288 s)
 8 (31 ns/day, 222 s)
16 (59 ns/day, 117 s)
16 (65 ns/day, 107 s)
16 (63 ns/day, 110 s) [md.log on GitHub is https://bit.ly/3aCm1gw]
32 (89 ns/day,  76 s)
32 (93 ns/day,  75 s)
32 (89 ns/day,  78 s)
64 (57 ns/day, 122 s)
64 (43 ns/day, 159 s)
64 (46 ns/day, 152 s)

Yes, there is variability between identical runs for POWER9/V100.

For the Intel case, ntomp equals the number of physical cores. For the IBM 
case, ntomp is equal to the number of hardware threads (4 hardware threads per 
physical core). On a per-physical-core basis these numbers are looking better, 
but clearly there are still problems.

I tried different values for -pinoffset but didn't see performance gains that 
couldn't be explained by the variation from run to run.

I've written to contacts at ORNL and IBM.

Jon


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Szilárd Páll 

Sent: Friday, April 24, 2020 10:23 AM
To: Discussion list for GROMACS users 
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Using a single thread per GPU as the linked log files show is not
sufficient for GROMACS (and any modern machine should have more than that
anyway), but I imply from your mail that this only meant to debug
performance instability?

Your performance variations with Power9 may be related to either not setting
affinities or the affinity settings not being correct. However, you also have
a job scheduler in the way (which I suspect is either not configured well or
is not passed the required options to correctly assign resources to jobs),
and it obfuscates the machine layout and makes things look weird to mdrun [1].

I suggest simplifying the problem and debugging it step by step. Start with
allocating full nodes and test that you can pin (either with mdrun -pin on or
hwloc) and avoid [1], and get an understanding of what you should expect from
the node sharing that seems to not work correctly. Building GROMACS with hwloc
may help, as you get better reporting in the log.

[1]
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100#L58

--
Szilárd


On Fri, Apr 24, 2020 at 3:43 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

> Hi Kevin,
>
> md.log for the Intel run is here:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100
>
> Thanks for the info on constraints with 2020. I'll try some runs with
> different values of -pinoffset for 2019.6.
>
> I know a group at NIST is having the same or similar problems with
> POWER9/V100.
>
> Jon
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
> Boyd 
> Sent: Thursday, April 23, 2020 9:08 PM
> To: gmx-us...@gromacs.org 
> Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
>
> Hi,
>
> Can you post the full log for the Intel system? I typically find the real
> cycle and time accounting section a better place to start debugging
> performance issues.
>
> A couple quick notes, but need a side-by-side comparison for more useful
> analysis, and these points may apply to both systems so may not be your
> root cause:
> * At first glance, your Power system spends 1/3 of its time in constraint
> calculation, which is unusual. This can be reduced 2 ways - first, by
> adding more CPU cores. It doesn't make a ton of sense to benchmark on one
> core if your applications will use more. Second, if you upgrade to Gromacs
> 2020 you can probably put the constraint calculation on the GPU with
> -update GPU.
> * The Power system log has this line:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
> indicating
> that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
> some other core) to specify where you want the process pinned.
>

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
Using a single thread per GPU as the linked log files show is not
sufficient for GROMACS (and any modern machine should have more than that
anyway), but I imply from your mail that this only meant to debug
performance instability?

Your performance variations with Power9 may be related to either not setting
affinities or the affinity settings not being correct. However, you also have
a job scheduler in the way (which I suspect is either not configured well or
is not passed the required options to correctly assign resources to jobs),
and it obfuscates the machine layout and makes things look weird to mdrun [1].

I suggest simplifying the problem and debugging it step by step. Start with
allocating full nodes and test that you can pin (either with mdrun -pin on or
hwloc) and avoid [1], and get an understanding of what you should expect from
the node sharing that seems to not work correctly. Building GROMACS with hwloc
may help, as you get better reporting in the log.
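
(A minimal sketch of what I mean by building with hwloc support; the prefix path is
only needed if hwloc is installed in a non-standard location:)

# reconfigure with hwloc so mdrun gets better topology detection and log reporting
cmake .. -DGMX_HWLOC=ON -DCMAKE_PREFIX_PATH=/path/to/hwloc
# (keep the other cmake options from the original configure command)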

[1]
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100#L58

--
Szilárd


On Fri, Apr 24, 2020 at 3:43 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

> Hi Kevin,
>
> md.log for the Intel run is here:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100
>
> Thanks for the info on constraints with 2020. I'll try some runs with
> different values of -pinoffset for 2019.6.
>
> I know a group at NIST is having the same or similar problems with
> POWER9/V100.
>
> Jon
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
> Boyd 
> Sent: Thursday, April 23, 2020 9:08 PM
> To: gmx-us...@gromacs.org 
> Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
>
> Hi,
>
> Can you post the full log for the Intel system? I typically find the real
> cycle and time accounting section a better place to start debugging
> performance issues.
>
> A couple quick notes, but need a side-by-side comparison for more useful
> analysis, and these points may apply to both systems so may not be your
> root cause:
> * At first glance, your Power system spends 1/3 of its time in constraint
> calculation, which is unusual. This can be reduced 2 ways - first, by
> adding more CPU cores. It doesn't make a ton of sense to benchmark on one
> core if your applications will use more. Second, if you upgrade to Gromacs
> 2020 you can probably put the constraint calculation on the GPU with
> -update GPU.
> * The Power system log has this line:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
> indicating
> that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
> some other core) to specify where you want the process pinned.
>
> Kevin
>
> On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
> halver...@princeton.edu> wrote:
>
> > *Message sent from a system outside of UConn.*
> >
> >
> > We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> > IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running
> RHEL
> > 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> > nodes. Everything below is about of the POWER9/V100 node.
> >
> > We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> > CPU-core and 1 GPU (
> > ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> > found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives
> 102
> > ns/day. The difference in performance is roughly the same for the larger
> > ADH benchmark and when different numbers of CPU-cores are used. GROMACS
> is
> > always underperforming on our POWER9/V100 nodes. We have pinning turned
> on
> > (see Slurm script at bottom).
> >
> > Below is our build procedure on the POWER9/V100 node:
> >
> > version_gmx=2019.6
> > wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> > tar zxvf gromacs-${version_gmx}.tar.gz
> > cd gromacs-${version_gmx}
> > mkdir build && cd build
> >
> > module purge
> > module load rh/devtoolset/7
> > module load cudatoolkit/10.2
> >
> > OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"
> >
> > cmake3 .. -DCMAKE_BUILD_TYPE=Release \
> > -DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
> > -DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
> > -DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
> > -DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Alex

Hi Kevin,

We've been having issues with Power9/V100 very similar to what Jon 
described and basically settled on what I believe is sub-par 
performance. We tested it on systems with ~30-50K particles and threads 
simply cannot be pinned. As far as Gromacs is concerned, our brand-new 
Power9 nodes operate as if they were based on Intel CPUs (two threads 
per core) and zero advantage of IBM parallelization is being taken. 
Other users of the same nodes reported similar issues with other 
software, which to me suggests that our sysadmins don't really know how 
to set these nodes up.


At this point, if someone could figure out a clear set of build 
instructions in combination with slurm/mdrun inputs, it would be very 
much appreciated.


Alex

On 4/23/2020 9:37 PM, Kevin Boyd wrote:

I'm not entirely sure how thread-pinning plays with slurm allocations on
partial nodes. I always reserve the entire node when I use thread pinning,
and run a bunch of simulations by pinning to different cores manually,
rather than relying on slurm to divvy up resources for multiple jobs.

Looking at both logs now, a few more points

* Your benchmarks are short enough that little things like cores spinning
up frequencies can matter. I suggest running longer (increase nsteps in the
mdp or at the command line), and throwing away your initial benchmark data
(see -resetstep and -resethway) to avoid artifacts
* Your benchmark system is quite small for such a powerful GPU. I might
expect better performance running multiple simulations per-GPU if the
workflows being run can rely on replicates, and a larger system would
probably scale better to the V100.
* The P100/intel system appears to have pinned cores properly, it's
unclear whether it had a real impact on these benchmarks
* It looks like the CPU-based computations were the primary contributors to
the observed difference in performance. That should decrease or go away
with increased core counts and shifting the update phase to the GPU. It may
be (I have no prior experience to indicate either way) that the intel cores
are simply better on a 1-1 basis than the Power cores. If you have 4-8
cores per simulation (try -ntomp 4 and increasing the allocation of your
slurm job), the individual core performance shouldn't matter too
much, you're just certainly bottlenecked on one CPU core per GPU, which can
emphasize performance differences..

Kevin

On Thu, Apr 23, 2020 at 6:43 PM Jonathan D. Halverson <
halver...@princeton.edu> wrote:


*Message sent from a system outside of UConn.*


Hi Kevin,

md.log for the Intel run is here:

https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100

Thanks for the info on constraints with 2020. I'll try some runs with
different values of -pinoffset for 2019.6.

I know a group at NIST is having the same or similar problems with
POWER9/V100.

Jon

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
Boyd 
Sent: Thursday, April 23, 2020 9:08 PM
To: gmx-us...@gromacs.org 
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Hi,

Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so may not be your
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint
calculation, which is unusual. This can be reduced 2 ways - first, by
adding more CPU cores. It doesn't make a ton of sense to benchmark on one
core if your applications will use more. Second, if you upgrade to Gromacs
2020 you can probably put the constraint calculation on the GPU with
-update GPU.
* The Power system log has this line:

https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating
that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
some other core) to specify where you want the process pinned.

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:


*Message sent from a system outside of UConn.*


We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
nodes. Everything below is about the POWER9/V100 node.

We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
CPU-core and 1 GPU (
ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
ns/day. The difference in performance is roughly the same for the larger
ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
always underperforming on our POWER9/V100 nodes.

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Kevin Boyd
I'm not entirely sure how thread-pinning plays with slurm allocations on
partial nodes. I always reserve the entire node when I use thread pinning,
and run a bunch of simulations by pinning to different cores manually,
rather than relying on slurm to divvy up resources for multiple jobs.

Looking at both logs now, a few more points

* Your benchmarks are short enough that little things like cores spinning
up their frequencies can matter. I suggest running longer (increase nsteps in the
mdp or at the command line), and throwing away your initial benchmark data
(see -resetstep and -resethway) to avoid artifacts; a sketch of such a run is below this list.
* Your benchmark system is quite small for such a powerful GPU. I might
expect better performance running multiple simulations per-GPU if the
workflows being run can rely on replicates, and a larger system would
probably scale better to the V100.
* The P100/intel system appears to have pinned cores properly, it's
unclear whether it had a real impact on these benchmarks
* It looks like the CPU-based computations were the primary contributors to
the observed difference in performance. That should decrease or go away
with increased core counts and shifting the update phase to the GPU. It may
be (I have no prior experience to indicate either way) that the intel cores
are simply better on a 1-1 basis than the Power cores. If you have 4-8
cores per simulation (try -ntomp 4 and increasing the allocation of your
slurm job), the individual core performance shouldn't matter too
much; you're almost certainly bottlenecked on one CPU core per GPU, which can
emphasize performance differences.
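
To make the first point concrete, here is a sketch of the kind of longer benchmark
run I mean (the step count and thread count are illustrative assumptions, not tuned
values):

# run longer, reset the timers halfway so start-up noise is discarded, 4 OpenMP threads
gmx mdrun -s bench.tpr -ntmpi 1 -ntomp 4 -pin on -nsteps 100000 -resethway -noconfout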

Kevin

On Thu, Apr 23, 2020 at 6:43 PM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

> *Message sent from a system outside of UConn.*
>
>
> Hi Kevin,
>
> md.log for the Intel run is here:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100
>
> Thanks for the info on constraints with 2020. I'll try some runs with
> different values of -pinoffset for 2019.6.
>
> I know a group at NIST is having the same or similar problems with
> POWER9/V100.
>
> Jon
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
> Boyd 
> Sent: Thursday, April 23, 2020 9:08 PM
> To: gmx-us...@gromacs.org 
> Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
>
> Hi,
>
> Can you post the full log for the Intel system? I typically find the real
> cycle and time accounting section a better place to start debugging
> performance issues.
>
> A couple quick notes, but need a side-by-side comparison for more useful
> analysis, and these points may apply to both systems so may not be your
> root cause:
> * At first glance, your Power system spends 1/3 of its time in constraint
> calculation, which is unusual. This can be reduced 2 ways - first, by
> adding more CPU cores. It doesn't make a ton of sense to benchmark on one
> core if your applications will use more. Second, if you upgrade to Gromacs
> 2020 you can probably put the constraint calculation on the GPU with
> -update GPU.
> * The Power system log has this line:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
> indicating
> that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
> some other core) to specify where you want the process pinned.
>
> Kevin
>
> On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
> halver...@princeton.edu> wrote:
>
> > *Message sent from a system outside of UConn.*
> >
> >
> > We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> > IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running
> RHEL
> > 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> > nodes. Everything below is about of the POWER9/V100 node.
> >
> > We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> > CPU-core and 1 GPU (
> > ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> > found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives
> 102
> > ns/day. The difference in performance is roughly the same for the larger
> > ADH benchmark and when different numbers of CPU-cores are used. GROMACS
> is
> > always underperforming on our POWER9/V100 nodes. We have pinning turned
> on
> > (see Slurm script at bottom).
> >
> > Below is our build procedure on the POWER9/V100 node:
> >
> > version_gmx=2019.6
> > wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> > tar zxvf gromacs-${version_gmx}.tar.gz
> > cd gromacs-${version_gmx}

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Jonathan D. Halverson
Hi Kevin,

md.log for the Intel run is here:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100

Thanks for the info on constraints with 2020. I'll try some runs with different 
values of -pinoffset for 2019.6.

I know a group at NIST is having the same or similar problems with POWER9/V100.

Jon

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Kevin Boyd 

Sent: Thursday, April 23, 2020 9:08 PM
To: gmx-us...@gromacs.org 
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Hi,

Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so may not be your
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint
calculation, which is unusual. This can be reduced 2 ways - first, by
adding more CPU cores. It doesn't make a ton of sense to benchmark on one
core if your applications will use more. Second, if you upgrade to Gromacs
2020 you can probably put the constraint calculation on the GPU with
-update GPU.
* The Power system log has this line:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating
that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
some other core) to specify where you want the process pinned.

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

> *Message sent from a system outside of UConn.*
>
>
> We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
> 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> nodes. Everything below is about the POWER9/V100 node.
>
> We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> CPU-core and 1 GPU (
> ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
> ns/day. The difference in performance is roughly the same for the larger
> ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
> always underperforming on our POWER9/V100 nodes. We have pinning turned on
> (see Slurm script at bottom).
>
> Below is our build procedure on the POWER9/V100 node:
>
> version_gmx=2019.6
> wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> tar zxvf gromacs-${version_gmx}.tar.gz
> cd gromacs-${version_gmx}
> mkdir build && cd build
>
> module purge
> module load rh/devtoolset/7
> module load cudatoolkit/10.2
>
> OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"
>
> cmake3 .. -DCMAKE_BUILD_TYPE=Release \
> -DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
> -DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
> -DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
> -DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \
> -DGMX_BUILD_OWN_FFTW=ON \
> -DGMX_GPU=ON -DGMX_CUDA_TARGET_SM=70 \
> -DGMX_OPENMP_MAX_THREADS=128 \
> -DCMAKE_INSTALL_PREFIX=$HOME/.local \
> -DGMX_COOL_QUOTES=OFF -DREGRESSIONTEST_DOWNLOAD=ON
>
> make -j 10
> make check
> make install
>
> 45 of the 46 tests pass with the exception being HardwareUnitTests. There
> are several posts about this and apparently it is not a concern. The full
> build log is here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log
>
>
>
> Here is more info about our POWER9/V100 node:
>
> $ lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                128
> On-line CPU(s) list:   0-127
> Thread(s) per core:    4
> Core(s) per socket:    16
> Socket(s):             2
> NUMA node(s):          6
> Model:                 2.3 (pvr 004e 1203)
> Model name:            POWER9, altivec supported
> CPU max MHz:           3800.
> CPU min MHz:           2300.
>
> You see that we have 4 hardware threads per physical core. If we use 4
> hardware threads on the RNASE benchmark instead of 1 the performance goes
> to 119 ns/day which is still about 20% less than the Broadwell/P100 value.
> When using multiple CPU-cores on the POWER9/V100 there is significant
> variation in the execution time of the code.
>
> There are four GPUs per POWER9/V100 node:
>
> $ nvidia-smi -q
> Driver Version  : 440.33.01
> CUDA Version: 10.2
> GPU 0004:04:00.0
> Product Name: Tesla V100-SXM2-32GB

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Kevin Boyd
Hi,

Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so may not be your
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint
calculation, which is unusual. This can be reduced 2 ways - first, by
adding more CPU cores. It doesn't make a ton of sense to benchmark on one
core if your applications will use more. Second, if you upgrade to Gromacs
2020 you can probably put the constraint calculation on the GPU with
-update gpu.
* The Power system log has this line:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
some other core) to specify where you want the process pinned; a sketch follows below.
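
For illustration, a minimal sketch of such a run command (the tpr name and thread
count are hypothetical; GROMACS 2020 is assumed, and whether -update gpu is accepted
depends on the constraint and coupling settings of the run):

gmx mdrun -s bench.tpr -nb gpu -pme gpu -bonded gpu -update gpu \
          -ntmpi 1 -ntomp 16 -pin on -pinoffset 0 -pinstride 1

On an SMT4 POWER9 node, -pinstride 4 would place one thread per physical core
instead of packing the hardware threads.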

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

> *Message sent from a system outside of UConn.*
>
>
> We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
> 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> nodes. Everything below is about the POWER9/V100 node.
>
> We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> CPU-core and 1 GPU (
> ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
> ns/day. The difference in performance is roughly the same for the larger
> ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
> always underperforming on our POWER9/V100 nodes. We have pinning turned on
> (see Slurm script at bottom).
>
> Below is our build procedure on the POWER9/V100 node:
>
> version_gmx=2019.6
> wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> tar zxvf gromacs-${version_gmx}.tar.gz
> cd gromacs-${version_gmx}
> mkdir build && cd build
>
> module purge
> module load rh/devtoolset/7
> module load cudatoolkit/10.2
>
> OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"
>
> cmake3 .. -DCMAKE_BUILD_TYPE=Release \
> -DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
> -DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
> -DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
> -DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \
> -DGMX_BUILD_OWN_FFTW=ON \
> -DGMX_GPU=ON -DGMX_CUDA_TARGET_SM=70 \
> -DGMX_OPENMP_MAX_THREADS=128 \
> -DCMAKE_INSTALL_PREFIX=$HOME/.local \
> -DGMX_COOL_QUOTES=OFF -DREGRESSIONTEST_DOWNLOAD=ON
>
> make -j 10
> make check
> make install
>
> 45 of the 46 tests pass with the exception being HardwareUnitTests. There
> are several posts about this and apparently it is not a concern. The full
> build log is here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log
>
>
>
> Here is more info about our POWER9/V100 node:
>
> $ lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                128
> On-line CPU(s) list:   0-127
> Thread(s) per core:    4
> Core(s) per socket:    16
> Socket(s):             2
> NUMA node(s):          6
> Model:                 2.3 (pvr 004e 1203)
> Model name:            POWER9, altivec supported
> CPU max MHz:           3800.
> CPU min MHz:           2300.
>
> You see that we have 4 hardware threads per physical core. If we use 4
> hardware threads on the RNASE benchmark instead of 1 the performance goes
> to 119 ns/day which is still about 20% less than the Broadwell/P100 value.
> When using multiple CPU-cores on the POWER9/V100 there is significant
> variation in the execution time of the code.
>
> There are four GPUs per POWER9/V100 node:
>
> $ nvidia-smi -q
> Driver Version  : 440.33.01
> CUDA Version: 10.2
> GPU 0004:04:00.0
> Product Name: Tesla V100-SXM2-32GB
>
> The GPUs have been shown to perform as expected on other applications.
>
>
>
>
> The following lines are found in md.log for the POWER9/V100 run:
>
> Overriding thread affinity set outside gmx mdrun
> Pinning threads with an auto-selected logical core stride of 128
> NOTE: Thread affinity was not set.
>
> The full md.log is available here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log
>
>
>
>
> Below are the MegaFlops Accounting for the POWER9/V100 versus
> Broadwell/P100:
>
>  IBM POWER9 WITH NVIDIA V100 
> Computing:                              M-Number      M-Flops  % Flops
> ----------------------------------------------------------------------
>  Pair Search distance check           297.763872     2679.875      0.0
>  NxN Ewald Elec. + LJ [F]          244214.215808

Re: [gmx-users] GROMACS mdp file for doing a single point energy after acpype conversion

2020-04-23 Thread Justin Lemkul




On 4/23/20 5:42 AM, ABEL Stephane wrote:

Dear all,

I am using acpype to convert a set of glycolipids modeled with the GLYCAM06
force field into the GROMACS format. acpype works well for this task. But I
would like to check that the conversion is done correctly by performing single
point energy (SPE) calculations with the Amber and GROMACS codes and computing
the energy differences for the bonded and non-bonded terms.

For the former test I am using the prmtop and inpcrd files generated with tleap,
and running sander with the minimal input below:

| mdin Single point

imin=0,
maxcyc=0,
ntmin=2,
ntb=0,
igb=0,
cut=999
/

But for GROMACS versions > 5.0 and 2018.x, I did not find the equivalent mdp
parameters for doing the same task. With the minimal file below, the bonded energy
terms are very similar between the two codes, but the non-bonded terms are not.

integrator  = steep ; Algorithm (steep = steepest descent minimization)
emtol   = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep  = 0.01   ; Minimization step size
nsteps  = 0      ; Maximum number of (minimization) steps to perform (should be 5)

; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and long range forces
cutoff-scheme   = Group   ; Buffered neighbor searching
ns_type = grid  ; Method to determine neighbor list (simple, grid)
coulombtype = Cut-off   ; Treatment of long range electrostatic interactions
rcoulomb= 0   ; Short-range electrostatic cut-off
rvdw= 0   ; Short-range Van der Waals cut-off
rlist   = 0
pbc = no   ; P
continuation = yes

I also notice that a tpr generated with this mdp cannot be used with the
-rerun argument, so how can I compute an SPE equivalent to sander?


Why is it incompatible with mdrun -rerun? Do you get an error?

You also shouldn't use a minimizer when doing a single-point energy. Use
the md integrator.
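
As a minimal sketch of that kind of single-point evaluation (placeholder file names;
a vacuum, non-periodic system and a GROMACS version that still has the group scheme,
e.g. 2018.x or 2019.x, are assumed):

; spe.mdp
integrator      = md       ; dynamics integrator, not a minimizer
nsteps          = 0        ; a single energy/force evaluation
cutoff-scheme   = group
nstlist         = 0
ns_type         = simple
pbc             = no
coulombtype     = Cut-off
vdwtype         = Cut-off
rlist           = 0        ; 0 = no cut-off with the group scheme
rcoulomb        = 0
rvdw            = 0
nstcalcenergy   = 1
nstenergy       = 1

gmx grompp -f spe.mdp -c conf.gro -p topol.top -o spe.tpr
gmx mdrun -deffnm spe -rerun conf.gro
gmx energy -f spe.edr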


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS PBS GPU JOB submission

2020-04-16 Thread Yu Du
Hi Tuanan,

I think your problem can be separated into several parts:

First, using one PBS script that contains 4 GMX commands, as shown below, may solve
your problem:

#PBS -l select=1:ncpus=40:mpiprocs=40:ompthreads=1:ngpus=1
mpirun -machinefile $PBS_NODEFILE -np 40 gmx_mpi mdrun -s nvt-prod1.tpr -deffnm 
TEST1 -ntomp 1 -gpu_id 0
mpirun -machinefile $PBS_NODEFILE -np 40 gmx_mpi mdrun -s nvt-prod2.tpr -deffnm 
TEST2 -ntomp 1 -gpu_id 1
mpirun -machinefile $PBS_NODEFILE -np 40 gmx_mpi mdrun -s nvt-prod3.tpr -deffnm 
TEST3 -ntomp 1 -gpu_id 2
mpirun -machinefile $PBS_NODEFILE -np 40 gmx_mpi mdrun -s nvt-prod4.tpr -deffnm 
TEST4 -ntomp 1 -gpu_id 3

Then, you should optimize the number of CPU cores used per GPU to get the best
performance.

Last, check mdrun in the GMX manual and use -pin and related options to set the CPU
range that each subjob uses, to avoid interference between them; a sketch is given below.
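
For illustration, a sketch of that last point (rank counts, offsets and file names
are placeholders that assume the 80 cores are split evenly over four concurrent jobs;
they need tuning for the actual node):

mpirun -machinefile $PBS_NODEFILE -np 20 gmx_mpi mdrun -s nvt-prod1.tpr -deffnm TEST1 \
       -ntomp 1 -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
mpirun -machinefile $PBS_NODEFILE -np 20 gmx_mpi mdrun -s nvt-prod2.tpr -deffnm TEST2 \
       -ntomp 1 -gpu_id 1 -pin on -pinoffset 20 -pinstride 1 &
mpirun -machinefile $PBS_NODEFILE -np 20 gmx_mpi mdrun -s nvt-prod3.tpr -deffnm TEST3 \
       -ntomp 1 -gpu_id 2 -pin on -pinoffset 40 -pinstride 1 &
mpirun -machinefile $PBS_NODEFILE -np 20 gmx_mpi mdrun -s nvt-prod4.tpr -deffnm TEST4 \
       -ntomp 1 -gpu_id 3 -pin on -pinoffset 60 -pinstride 1 &
wait

The trailing '&' and the final 'wait' let the four runs share the node at the same
time instead of executing one after another.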

Cheers,

Du, Yu
PhD Student,
Shanghai Institute of Organic Chemistry
345 Ling Ling Rd., Shanghai, China.
Zip: 200032, Tel: (86) 021 5492 5275

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Tuanan 
Lourenço 
Sent: Thursday, April 16, 2020 21:34
To: gmx-us...@gromacs.org 
Subject: [gmx-users] GROMACS PBS GPU JOB submission

Hi everyone

I am using GROMACS 2018 on a node with 80 cores and 4 Tesla V100s; the queue
system is PBS. I am having some issues with the GPU selection: what I want
is to use 1 GPU per job, but GROMACS is always using all four GPUs.

My submission script is the following:

#PBS -l select=1:ncpus=40:mpiprocs=40:ompthreads=1:ngpus=1

mpirun -machinefile $PBS_NODEFILE -np 40 gmx_mpi mdrun -s nvt-prod.tpr
-deffnm TEST -ntomp 1


However, looking at the GROMACS log file I see "On host gn01 4 GPUs
auto-selected for this run".

I know that if I use the -gpu_id flag I can tell GROMACS which GPU I want to
use, and in that case everything is OK, GROMACS does what I say.
But this can be a problem if I submit more than one job to the node,
because the jobs will use the same GPU card.

My question is: is there any way to tell GROMACS or PBS to use the GPU
that is available at that moment? I have 4 GPUs in the server, so I want
to submit 4 jobs, each one using one GPU.


Thank you very much.



--
__
Dr. Tuanan C. Lourenço
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs 2020, bug? high total energy

2020-04-14 Thread Tamas Hegedus

Hi,

It was not GROMACS 2020 but my fault: I was mixing up tools
(grompp, mdrun) from the 2019 and 2020 versions.

I am sorry.

Bests,
Tamas

On 4/14/20 4:44 PM, Tamas Hegedus wrote:

Hi,

There might be some bug(?) in GROMACS 2020. I cannot decide.

I just installed 2020.1 today using the same script (and libraries) 
what I have used for gromacs 2019.

After energy minimization, in the nvt equilibration run, it stops:
Fatal error:
Step 0: The total potential energy is 1.99295e+23, which is extremely 
high.
The LJ and electrostatic contributions to the energy are 73028.7 and 
-712615,

respectively. A very high potential energy can be caused by overlapping
interactions in bonded interactions or very large coordinate values. 
Usually

this is caused by a badly- or non-equilibrated initial configuration,
incorrect interactions or parameters in the topology.

I tried two different systems (two different soluble proteins, ff 
charmm-36m).


However, if I start the simulations with 2019.4, there is no problem 
at all.


Have a nice day,
Tamas



--
Tamas Hegedus, PhD
Senior Research Fellow
Department of Biophysics and Radiation Biology
Semmelweis University | phone: (36) 1-459 1500/60233
Tuzolto utca 37-47| mailto:ta...@hegelab.org
Budapest, 1094, Hungary   | http://www.hegelab.org

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS version issue

2020-04-10 Thread Justin Lemkul




On 4/10/20 11:04 AM, Yasaman KARAMI wrote:

Dear GROMACS developers,


I am performing classical MD simulations of a membrane protein system, using
GROMACS version 2018.6. I have just noticed that after a few nanoseconds the
box dimensions change, meaning that the system shrinks along the z-axis; for
example, the dimensions change from 98.4 x 98.4 x 299.1 (A^3) to 116.8 x
116.8 x 212.9 (A^3).

After trying many possibilities, I've realised it is a version-specific
problem. With GROMACS 2019.4 the problem is completely solved.

I was wondering if you could explain the reason.



If you're using the CHARMM force field, this was an issue related to 
incorrect treatment of CMAP terms when the protein crossed a periodic 
boundary. The issue was recently solved so you should use the newer 
GROMACS version.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2020.1 failed to pass make check

2020-04-06 Thread Wei-Tse Hsu
Hi Paul,
Thank you so much for your reply! That is very helpful.

Best,
Wei-Tse

On Mon, Apr 6, 2020 at 1:23 AM Paul bauer  wrote:

> Hello,
>
> you are using a plumed modified version that adds extra options to mdrun.
> Our unit tests check that the output from mdrun -h stays invariant, and
> this is no longer the case once you modify the code for plumed.
>
> You can ignore this failure in this case, but please mention in the
> future if you are using modified versions of the code.
>
> Cheers
>
> Paul
>
> On 05/04/2020 21:23, Wei-Tse Hsu wrote:
> > Dear gmx users,
> > Recently I've been trying to install GROMACS 20201. After successfully
> > compilng GROMACS 2020.1, when executing make check command, I
> > encountered the following error. Specifically, one out of 56 tests
> failed,
> > which was related to Mdrun Test.WritesHelp. Looking at the error message,
> > I'm still not sure what it means and how I should solve this problem. I
> > wonder if anyone had this before that could give me some insights about
> how
> > I could solve the problem. Any help would be appreciated!
> >
> > Here is the error message in the middle.
> >
> >
> >
> >
> >
> > *[--] 1 test from MdrunTest[ RUN  ]
> >
> MdrunTest.WritesHelp/home/wei-tse/Documents/Software/GROMACS/gromacs-2020.1/src/testutils/refdata.cpp:867:
> > Failure  In item: /Help string   Actual: 'SYNOPSIS*
> > And here is the error message in the end.
> >
> >
> >
> >
> >
> > *The following tests FAILED: 43 - MdrunTests
> > (Failed)CMakeFiles/run-ctest-nophys.dir/build.make:57: recipe for target
> > 'CMakeFiles/run-ctest-nophys' failedCMakeFiles/Makefile2:1467: recipe for
> > target 'CMakeFiles/run-ctest-nophys.dir/all'
> > failedCMakeFiles/Makefile2:445: recipe for target
> > 'CMakeFiles/check.dir/rule' failedMakefile:327: recipe for target 'check'
> > failed*
> >
> > And here is the whole STDOUT message of the command printed by make
> *check
> >> make_check.log*.
> >
> https://drive.google.com/file/d/1Nb2BLzA2Vl_cjS1b_M_HNk0wkrfR2WKt/view?usp=sharing
> >
> > Best,
> > Wei-Tse
>
>
> --
> Paul Bauer, PhD
> GROMACS Development Manager
> KTH Stockholm, SciLifeLab
> 0046737308594
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2020.1 failed to pass make check

2020-04-06 Thread Paul bauer

Hello,

you are using a plumed modified version that adds extra options to mdrun.
Our unit tests check that the output from mdrun -h stays invariant, and 
this is no longer the case once you modify the code for plumed.


You can ignore this failure in this case, but please mention in the 
future if you are using modified versions of the code.
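
For reference, one way to re-run the remaining tests while skipping the
PLUMED-affected one is to call ctest from the build directory and exclude the test
name reported as failing (a sketch):

ctest --output-on-failure -E MdrunTests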


Cheers

Paul

On 05/04/2020 21:23, Wei-Tse Hsu wrote:

Dear gmx users,
Recently I've been trying to install GROMACS 2020.1. After successfully
compiling GROMACS 2020.1, when executing the make check command, I
encountered the following error. Specifically, one out of 56 tests failed,
which was related to MdrunTest.WritesHelp. Looking at the error message,
I'm still not sure what it means and how I should solve this problem. I
wonder if anyone had this before that could give me some insights about how
I could solve the problem. Any help would be appreciated!

Here is the error message in the middle.





[----------] 1 test from MdrunTest
[ RUN      ] MdrunTest.WritesHelp
/home/wei-tse/Documents/Software/GROMACS/gromacs-2020.1/src/testutils/refdata.cpp:867: Failure
In item: /Help string
  Actual: 'SYNOPSIS
And here is the error message in the end.





The following tests FAILED:
    43 - MdrunTests (Failed)
CMakeFiles/run-ctest-nophys.dir/build.make:57: recipe for target 'CMakeFiles/run-ctest-nophys' failed
CMakeFiles/Makefile2:1467: recipe for target 'CMakeFiles/run-ctest-nophys.dir/all' failed
CMakeFiles/Makefile2:445: recipe for target 'CMakeFiles/check.dir/rule' failed
Makefile:327: recipe for target 'check' failed

And here is the whole STDOUT message of the command, from 'make check >> make_check.log':

https://drive.google.com/file/d/1Nb2BLzA2Vl_cjS1b_M_HNk0wkrfR2WKt/view?usp=sharing

Best,
Wei-Tse



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2019-rc1 install issues

2020-03-30 Thread Paul bauer

Also,

please don't use Release candidate versions for anything serious. 
Instead use the latest official release that suits you.


Cheers

Paul

On 30/03/2020 08:22, Nicolás Marcelo Rozas Castro wrote:

Hi everyone,

I'm a beginner user of GROMACS, and I'm having some trouble with the
installation. I follow the instructions in
http://manual.gromacs.org/documentation/2019-rc1/install-guide/index.html ,
but when I run "make", the error shown in the attached file appears.
Any advice will be appreciated.

Regards,

Nicolas Rozas Castro
Universidad de Chile, Faculty of Science.



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS 2019-rc1 install issues

2020-03-30 Thread Paul bauer

Hello,

you can't attach files to the list here, either paste the error directly 
from the terminal or upload the file somewhere and share the link.


Cheers

Paul

On 30/03/2020 08:22, Nicolás Marcelo Rozas Castro wrote:

Hi everyone,

I'm a beginner user of GROMACS, and I'm having some trouble with the
installation. I follow the instructions in
http://manual.gromacs.org/documentation/2019-rc1/install-guide/index.html ,
but when I run "make", the error shown in the attached file appears.
Any advice will be appreciated.

Regards,

Nicolas Rozas Castro
Universidad de Chile, Faculty of Science.



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS Quick and Dirty Installation Error

2020-03-17 Thread Schulz, Roland
Hi,

You should not send GROMACS developers personal emails but should use the gmx-users
mailing list.

You get the 404 because of the comma at the end of the URL. Without it the link
should work. For context, the old archived email is:
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-July/090827.html
The correct link is https://gerrit.gromacs.org/c/3764
The fix has been included since GROMACS 5, so any somewhat recent version
shouldn't have that specific problem. But we don't regularly test GROMACS on
Cygwin. You might have more success if you use WSL (Windows Subsystem for
Linux) as a solution on Windows.

Roland

> -Original Message-
> From: Vinu Harihar 
> Sent: Tuesday, March 17, 2020 2:46 PM
> To: Schulz, Roland 
> Subject: GROMACS Quick and Dirty Installation Error
> 
> Dear Mr. Schulz,
> 
> 
> 
> I am a student trying to install GROMACS on my Windows laptop through
> Cygwin. When I try to use the "quick and dirty installation guide" on the
> GROMACS webpage, I run into an error when executing the "make"
> command:
> 
> /gromacs-2020/src/external/thread_mpi/src/tmpi_init.cpp:476:42: error:
> 'strdup' was not declared in this scope; did you mean 'strcmp'?
> 
>   476 | threads[i].argv[j] = strdup( (*argv)[j] );
> 
>   |  ^~
> 
>   |  strcmp
> 
> I found an old post of yours on the gmx-users forum where you describe a
> patch you developed for a similar issue. When I tried to access the patch on
> Gerrit, I got a 404 error. I have tried to follow the instructions posted by 
> other
> users in the  gmx-users thread but I still get this error. I was wondering if 
> you
> have any suggestions on how to fix the error or a newer version of the patch?
> 
> 
> 
> With Gratitude,
> 
> Vinu Harihar
> 
> 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs Install problem

2020-03-14 Thread Paul bauer

Hello,

if you have found a bug in the multi-dimensional array please report it 
on redmine.gromacs.org so we can fix it, together with the build 
configuration that causes it.


Cheers

Paul

On 14/03/2020 00:51, Alexander Tzanov wrote:

The new version from March, 2020.1, is buggy; try version 2020. There is another
bug in a multidimensional array routine ...

Alex

On Mar 13, 2020 1:54 PM, xuan Zhang  wrote:
Hi,

When I install GROMACS on Linux, cmake gives an error like the one below. I
am new to this software. I would appreciate it very much if you could help me.

Best regards,
Xuan

**/gromacs-2020.1/build$ cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON
CMake Error at cmake/FindLibStdCpp.cmake:162 (message):
   GROMACS requires C++14, but a test of such functionality in the C++
   standard library failed to compile.  The g++ found at /usr/bin/g++ had a
   suitable version, so ;something else must be the problem
Call Stack (most recent call first):
   CMakeLists.txt:69 (find_package)


-- Configuring incomplete, errors occurred!
See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeOutput.log".
See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeError.log".


*#cmakeerror.log as below:*

Performing C++ SOURCE FILE Test CXX14_COMPILES failed with the following
output:
Change Dir: /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_203a2/fast && /usr/bin/make -f
CMakeFiles/cmTC_203a2.dir/build.make CMakeFiles/cmTC_203a2.dir/build
make[1]: Entering directory
'/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_203a2.dir/src.cxx.o
/sw/workstations/apps/linux-ubuntu16.04-x86_64/openmpi/3.1.1/intel-18.0.1/lqdls6jj3oauixvl4rbbpnljxr6sd6zs/bin/mpic++
   -gcc-name=/usr/bin/g++ -DCXX14_COMPILES   -std=c++14 -o
CMakeFiles/cmTC_203a2.dir/src.cxx.o -c
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx(2): error:
namespace "std" has no member "cbegin"
   int main() { int a[2]; std::cbegin(a); }
   ^

compilation aborted for
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx (code 2)
CMakeFiles/cmTC_203a2.dir/build.make:82: recipe for target
'CMakeFiles/cmTC_203a2.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_203a2.dir/src.cxx.o] Error 2
make[1]: Leaving directory
'/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
Makefile:138: recipe for target 'cmTC_203a2/fast' failed
make: *** [cmTC_203a2/fast] Error 2


Source file was:
#include <iterator>
int main() { int a[2]; std::cbegin(a); }

--
*Xuan Zhang*

PhD  Candidate
China University of Petroleum(East China)
School of Petroleum Engineering
No.66 Changjiang West Road, Qingdao


Visiting scholar (2017.12-2018.12)
The University of Texas at Austin
Cockrell School of Engineering
McKetta Department of Chemical Engineering
200 E Dean Keeton St. Stop C0400
CPE 5.428

*(+86) 15563945098 <(571)%20346-9770>* | zhangxuan7...@gmail.com
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs Install problem

2020-03-13 Thread Alexander Tzanov
The new version from March, 2020.1, is buggy; try version 2020. There is another
bug in a multidimensional array routine ...

Alex

On Mar 13, 2020 1:54 PM, xuan Zhang  wrote:
Hi,

When I install GROMACS on Linux, cmake gives an error like the one below. I
am new to this software. I would appreciate it very much if you could help me.

Best regards,
Xuan

**/gromacs-2020.1/build$ cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON
CMake Error at cmake/FindLibStdCpp.cmake:162 (message):
  GROMACS requires C++14, but a test of such functionality in the C++
  standard library failed to compile.  The g++ found at /usr/bin/g++ had a
  suitable version, so ;something else must be the problem
Call Stack (most recent call first):
  CMakeLists.txt:69 (find_package)


-- Configuring incomplete, errors occurred!
See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeOutput.log".
See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeError.log".


*#cmakeerror.log as below:*

Performing C++ SOURCE FILE Test CXX14_COMPILES failed with the following
output:
Change Dir: /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_203a2/fast && /usr/bin/make -f
CMakeFiles/cmTC_203a2.dir/build.make CMakeFiles/cmTC_203a2.dir/build
make[1]: Entering directory
'/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_203a2.dir/src.cxx.o
/sw/workstations/apps/linux-ubuntu16.04-x86_64/openmpi/3.1.1/intel-18.0.1/lqdls6jj3oauixvl4rbbpnljxr6sd6zs/bin/mpic++
  -gcc-name=/usr/bin/g++ -DCXX14_COMPILES   -std=c++14 -o
CMakeFiles/cmTC_203a2.dir/src.cxx.o -c
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx(2): error:
namespace "std" has no member "cbegin"
  int main() { int a[2]; std::cbegin(a); }
  ^

compilation aborted for
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx (code 2)
CMakeFiles/cmTC_203a2.dir/build.make:82: recipe for target
'CMakeFiles/cmTC_203a2.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_203a2.dir/src.cxx.o] Error 2
make[1]: Leaving directory
'/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
Makefile:138: recipe for target 'cmTC_203a2/fast' failed
make: *** [cmTC_203a2/fast] Error 2


Source file was:
#include <iterator>
int main() { int a[2]; std::cbegin(a); }

--
*Xuan Zhang*

PhD  Candidate
China University of Petroleum(East China)
School of Petroleum Engineering
No.66 Changjiang West Road, Qingdao


Visiting scholar (2017.12-2018.12)
The University of Texas at Austin
Cockrell School of Engineering
McKetta Department of Chemical Engineering
200 E Dean Keeton St. Stop C0400
CPE 5.428

*(+86) 15563945098 <(571)%20346-9770>* | zhangxuan7...@gmail.com
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs Install problem

2020-03-13 Thread Alexander Tzanov
For that error you may use a newer Intel compiler or gcc above 6.3.

The 2020.1 release compiles well after correcting a bug in a multidimensional array
routine, with Intel 19 or gcc above 6 (I use 7.3).

Your compiler is too old to support C++14, but there are other problems as well.
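
For illustration, a sketch of pointing CMake at a newer toolchain explicitly (the
gcc path is a placeholder for wherever a sufficiently new g++ lives on the machine):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON \
         -DCMAKE_C_COMPILER=/opt/gcc-7.3/bin/gcc \
         -DCMAKE_CXX_COMPILER=/opt/gcc-7.3/bin/g++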

Alex




On Mar 13, 2020 1:54 PM, xuan Zhang  wrote:
Hi,

When I install GROMACS on Linux, cmake gives an error like the one below. I
am new to this software. I would appreciate it very much if you could help me.

Best regards,
Xuan

**/gromacs-2020.1/build$ cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON
CMake Error at cmake/FindLibStdCpp.cmake:162 (message):
  GROMACS requires C++14, but a test of such functionality in the C++
  standard library failed to compile.  The g++ found at /usr/bin/g++ had a
  suitable version, so ;something else must be the problem
Call Stack (most recent call first):
  CMakeLists.txt:69 (find_package)


-- Configuring incomplete, errors occurred!
See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeOutput.log".
See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeError.log".


*#cmakeerror.log as below:*

Performing C++ SOURCE FILE Test CXX14_COMPILES failed with the following
output:
Change Dir: /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_203a2/fast && /usr/bin/make -f
CMakeFiles/cmTC_203a2.dir/build.make CMakeFiles/cmTC_203a2.dir/build
make[1]: Entering directory
'/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_203a2.dir/src.cxx.o
/sw/workstations/apps/linux-ubuntu16.04-x86_64/openmpi/3.1.1/intel-18.0.1/lqdls6jj3oauixvl4rbbpnljxr6sd6zs/bin/mpic++
  -gcc-name=/usr/bin/g++ -DCXX14_COMPILES   -std=c++14 -o
CMakeFiles/cmTC_203a2.dir/src.cxx.o -c
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx(2): error:
namespace "std" has no member "cbegin"
  int main() { int a[2]; std::cbegin(a); }
  ^

compilation aborted for
/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx (code 2)
CMakeFiles/cmTC_203a2.dir/build.make:82: recipe for target
'CMakeFiles/cmTC_203a2.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_203a2.dir/src.cxx.o] Error 2
make[1]: Leaving directory
'/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
Makefile:138: recipe for target 'cmTC_203a2/fast' failed
make: *** [cmTC_203a2/fast] Error 2


Source file was:
#include <iterator>
int main() { int a[2]; std::cbegin(a); }

--
*Xuan Zhang*

PhD  Candidate
China University of Petroleum(East China)
School of Petroleum Engineering
No.66 Changjiang West Road, Qingdao


Visiting scholar (2017.12-2018.12)
The University of Texas at Austin
Cockrell School of Engineering
McKetta Department of Chemical Engineering
200 E Dean Keeton St. Stop C0400
CPE 5.428

*(+86) 15563945098 <(571)%20346-9770>* | zhangxuan7...@gmail.com
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs Install problem

2020-03-13 Thread xuan Zhang
Hi,

For now, I have added the compiler to cmake and it works. But in the
make step, the errors shown below appeared. Thank you very much.

Best regards,
Xuan

src/gromacs/CMakeFiles/lmfit_objlib.dir/build.make:79: recipe for target
'src/gromacs/CMakeFiles/lmfit_objlib.dir/__/external/lmfit/lmmin.cpp.o'
failed
make[2]: ***
[src/gromacs/CMakeFiles/lmfit_objlib.dir/__/external/lmfit/lmmin.cpp.o]
Error 1
CMakeFiles/Makefile2:4229: recipe for target
'src/gromacs/CMakeFiles/lmfit_objlib.dir/all' failed
make[1]: *** [src/gromacs/CMakeFiles/lmfit_objlib.dir/all] Error 2
Makefile:179: recipe for target 'all' failed
make: *** [all] Error 2
liy0i@kw60520:/data16/XUAN/gromacs-2020.1/build$ ./configure
-bash: ./configure: No such file or directory



On Sat, Mar 14, 2020 at 1:53 AM xuan Zhang  wrote:

> Hi,
>
> When I install GROMACS on Linux, cmake gives an error like the one below. I
> am new to this software. I would appreciate it very much if you could help me.
>
> Best regards,
> Xuan
>
> **/gromacs-2020.1/build$ cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON
> CMake Error at cmake/FindLibStdCpp.cmake:162 (message):
>   GROMACS requires C++14, but a test of such functionality in the C++
>   standard library failed to compile.  The g++ found at /usr/bin/g++ had a
>   suitable version, so ;something else must be the problem
> Call Stack (most recent call first):
>   CMakeLists.txt:69 (find_package)
>
>
> -- Configuring incomplete, errors occurred!
> See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeOutput.log".
> See also "/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeError.log".
>
>
> *#cmakeerror.log as below:*
>
> Performing C++ SOURCE FILE Test CXX14_COMPILES failed with the following
> output:
> Change Dir: /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp
>
> Run Build Command(s):/usr/bin/make cmTC_203a2/fast && /usr/bin/make -f
> CMakeFiles/cmTC_203a2.dir/build.make CMakeFiles/cmTC_203a2.dir/build
> make[1]: Entering directory
> '/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
> Building CXX object CMakeFiles/cmTC_203a2.dir/src.cxx.o
> /sw/workstations/apps/linux-ubuntu16.04-x86_64/openmpi/3.1.1/intel-18.0.1/lqdls6jj3oauixvl4rbbpnljxr6sd6zs/bin/mpic++
>   -gcc-name=/usr/bin/g++ -DCXX14_COMPILES   -std=c++14 -o
> CMakeFiles/cmTC_203a2.dir/src.cxx.o -c
> /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx
> /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx(2): error:
> namespace "std" has no member "cbegin"
>   int main() { int a[2]; std::cbegin(a); }
>   ^
>
> compilation aborted for
> /data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp/src.cxx (code 2)
> CMakeFiles/cmTC_203a2.dir/build.make:82: recipe for target
> 'CMakeFiles/cmTC_203a2.dir/src.cxx.o' failed
> make[1]: *** [CMakeFiles/cmTC_203a2.dir/src.cxx.o] Error 2
> make[1]: Leaving directory
> '/data16/XUAN/gromacs-2020.1/build/CMakeFiles/CMakeTmp'
> Makefile:138: recipe for target 'cmTC_203a2/fast' failed
> make: *** [cmTC_203a2/fast] Error 2
>
>
> Source file was:
> #include <iterator>
> int main() { int a[2]; std::cbegin(a); }
>
> --
> *Xuan Zhang*
>
> PhD  Candidate
> China University of Petroleum(East China)
> School of Petroleum Engineering
> No.66 Changjiang West Road, Qingdao
>
>
> Visiting scholar (2017.12-2018.12)
> The University of Texas at Austin
> Cockrell School of Engineering
> McKetta Department of Chemical Engineering
> 200 E Dean Keeton St. Stop C0400
> CPE 5.428
>
> *(+86) 15563945098 <(571)%20346-9770>* | zhangxuan7...@gmail.com
>


-- 
*Xuan Zhang*

PhD  Candidate
China University of Petroleum(East China)
School of Petroleum Engineering
No.66 Changjiang West Road, Qingdao


Visiting scholar (2017.12-2018.12)
The University of Texas at Austin
Cockrell School of Engineering
McKetta Department of Chemical Engineering
200 E Dean Keeton St. Stop C0400
CPE 5.428

*(+86) 15563945098 <(571)%20346-9770>* | zhangxuan7...@gmail.com
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS automation

2020-03-12 Thread John Whittaker
> The paper information is as below :
> Title: Elucidating the Spatial Arrangement of Emitter Molecules in Organic
> Light-Emitting Diode Films
> Authors: Claire Tonnel + , Martin Stroet + , Bertrand Caron, Andrew J.
> Clulow, Ravi C. R. Nagiri, Alpeshkumar K. Malde, Paul L. Burn,* Ian R.
> Gentle, Alan E. Mark,* and Benjamin J. Powell
> https://doi.org/10.1002/anie.201610727
> in the supporting documents page 9 they mentioned how they simulated the
> evaporation process.
>
> Regarding the automation and the script writing, I need to insert one
> molecule with the required velocity and then remove the atom if moving in
> the opposite direction and repeat the process until I have 3000 atom. you
> told me to write a script and make each step's output the input to the
> next
> step and so on. how can I do this part linking the step's output to the
> next step input.

Well, for example, you would insert a molecule using gmx insert-molecules
and specify the output file with the -o flag (we'll call the file
conf1.gro)

Then, use gmx insert-molecules with conf1.gro as the input and specify the
output file as conf2.gro, and so on...

> for example I want:
> insert molecule
> remove the molecule if moving in the opposite direction

There's no need to remove the molecule. The supporting information you
mentioned says that the z-direction velocities were taken to be the
absolute value, which gets rid of the problem of molecules traveling in
the wrong direction (if the +z direction is where you want the molecules
to travel towards).

> insert
> remove if
> .
> .
> .
> Do I need just to write it and copy it 3000 time or how can I do it ?

Definitely do not copy it 3000 times. Use a simple for loop:

https://www.cyberciti.biz/faq/bash-for-loop/
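
A minimal sketch of such a loop (start.gro and molecule.gro are placeholder names;
any per-step selection or removal would go inside the loop body):

cp start.gro conf0.gro
for i in $(seq 1 3000); do
    prev=$((i - 1))
    gmx insert-molecules -f conf${prev}.gro -ci molecule.gro -nmol 1 -o conf${i}.gro
done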

On the other hand, maybe you should contact the corresponding author of
the paper you linked and ask if they have any advice or leftover scripts
they used to set up the system. It will probably save you some time,
although you won't necessarily learn something new.

Best,

- John

>
> Many Thankss
> Mohamed
>
> On Thu, Mar 12, 2020 at 11:51 AM John Whittaker <
> johnwhitt...@zedat.fu-berlin.de> wrote:
>
>> > I was planning to put the velocities in the .gro file from which I am
>> > inserting the molecules. If the gmx insert-molecules will ignore the
>> > velocities of the atoms, can you please guide me how can I insert the
>> > molecules with a velocity ?
>> > I have read a paper which mentions that they inserted the molecules
>> and
>> > the velocities of the ATOMS were sampled from a distribution with
>> standard
>> > deviation =xx and mean = xxx.
>>
>> Which paper?
>>
>> You probably have to add the velocities yourself with a script that
>> samples from the Maxwell-Boltzmann distribution at the appropriate
>> temperature and adds those velocities to the atoms in your .gro file.
>>
>> Perhaps there is an easier way that someone can shed some light on, but
>> that's what immediately comes to mind.
>>
>> - John
>>
>> >
>> >
>> > On Wed, Mar 11, 2020 at 7:05 PM Justin Lemkul  wrote:
>> >
>> >>
>> >>
>> >> On 3/11/20 9:56 AM, Mohamed Abdelaal wrote:
>> >> > I want to insert an atom with a velocity moving downwards toward
>> the
>> >> > graphene sheet in my box.
>> >> > Yes I need to remove any atom moving away from my substrate or the
>> >> > deposited atoms and far by 0.4 nm.
>> >> > Then repeat the process until I have inserted 3000 atoms.
>> >>
>> >> gmx insert-molecules has no knowledge of velocities, and atoms are
>> not
>> >> inserted with any velocity, only a position.
>> >>
>> >> Removal of atoms is handled with gmx trjconv and a suitable index
>> file
>> >> generated by gmx select (in this case, since you're specifying a
>> >> geometric criterion).
>> >>
>> >> -Justin
>> >>
>> >> > Thanks for your reply.
>> >> > Mohamed
>> >> >
>> >> > On Wed, Mar 11, 2020 at 13:43 John Whittaker <
>> >> > johnwhitt...@zedat.fu-berlin.de> wrote:
>> >> >
>> >> >> Write a script that calls gmx insert-molecules 3000 times and uses
>> >> the
>> >> >> previous output as input for each call.
>> >> >>
>> >> >>
>> >> >>
>> >>
>> http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html
>> >> >>
>> >> >> Is there something you have to do in between each insertion?
>> >> >>
>> >> >> - John
>> >> >>
>> >> >>> Hello everybody,
>> >> >>>
>> >> >>> I am trying to insert molecules into a box but I have to insert
>> one
>> >> >> single
>> >> >>> molecule at a time reaching 3000 molecule in total. Is there a
>> way
>> >> to
>> >> >>> automate this process ?
>> >> >>>
>> >> >>> Thanks
>> >> >>> Mohamed
>> >> >>> --
>> >> >>> Gromacs Users mailing list
>> >> >>>
>> >> >>> * Please search the archive at
>> >> >>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List
>> before
>> >> >>> posting!
>> >> >>>
>> >> >>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >> >>>
>> >> >>> * For (un)subscribe requests visit
>> >> >>> 

Re: [gmx-users] GROMACS automation

2020-03-12 Thread Mohamed Abdelaal
The paper information is as below :
Title: Elucidating the Spatial Arrangement of Emitter Molecules in Organic
Light-Emitting Diode Films
Authors: Claire Tonnel + , Martin Stroet + , Bertrand Caron, Andrew J.
Clulow, Ravi C. R. Nagiri, Alpeshkumar K. Malde, Paul L. Burn,* Ian R.
Gentle, Alan E. Mark,* and Benjamin J. Powell
https://doi.org/10.1002/anie.201610727
in the supporting documents page 9 they mentioned how they simulated the
evaporation process.

Regarding the automation and the script writing, I need to insert one
molecule with the required velocity, then remove the molecule if it is moving in
the opposite direction, and repeat the process until I have 3000 molecules. You
told me to write a script and make each step's output the input to the next
step, and so on. How can I do this part, linking each step's output to the
next step's input?
for example I want:
insert molecule
remove the molecule if moving in the opposite direction
insert
remove if
.
.
.
Do I need to just write it and copy it 3000 times, or how can I do it?

Many Thankss
Mohamed

On Thu, Mar 12, 2020 at 11:51 AM John Whittaker <
johnwhitt...@zedat.fu-berlin.de> wrote:

> > I was planning to put the velocities in the .gro file from which I am
> > inserting the molecules. If the gmx insert-molecules will ignore the
> > velocities of the atoms, can you please guide me how can I insert the
> > molecules with a velocity ?
> > I have read a paper which mentions that they inserted the molecules  and
> > the velocities of the ATOMS were sampled from a distribution with
> standard
> > deviation =xx and mean = xxx.
>
> Which paper?
>
> You probably have to add the velocities yourself with a script that
> samples from the Maxwell-Boltzmann distribution at the appropriate
> temperature and adds those velocities to the atoms in your .gro file.
>
> Perhaps there is an easier way that someone can shed some light on, but
> that's what immediately comes to mind.
>
> - John
>
> >
> >
> > On Wed, Mar 11, 2020 at 7:05 PM Justin Lemkul  wrote:
> >
> >>
> >>
> >> On 3/11/20 9:56 AM, Mohamed Abdelaal wrote:
> >> > I want to insert an atom with a velocity moving downwards toward the
> >> > graphene sheet in my box.
> >> > Yes I need to remove any atom moving away from my substrate or the
> >> > deposited atoms and far by 0.4 nm.
> >> > Then repeat the process until I have inserted 3000 atoms.
> >>
> >> gmx insert-molecules has no knowledge of velocities, and atoms are not
> >> inserted with any velocity, only a position.
> >>
> >> Removal of atoms is handled with gmx trjconv and a suitable index file
> >> generated by gmx select (in this case, since you're specifying a
> >> geometric criterion).
> >>
> >> -Justin
> >>
> >> > Thanks for your reply.
> >> > Mohamed
> >> >
> >> > On Wed, Mar 11, 2020 at 13:43 John Whittaker <
> >> > johnwhitt...@zedat.fu-berlin.de> wrote:
> >> >
> >> >> Write a script that calls gmx insert-molecules 3000 times and uses
> >> the
> >> >> previous output as input for each call.
> >> >>
> >> >>
> >> >>
> >>
> http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html
> >> >>
> >> >> Is there something you have to do in between each insertion?
> >> >>
> >> >> - John
> >> >>
> >> >>> Hello everybody,
> >> >>>
> >> >>> I am trying to insert molecules into a box but I have to insert one
> >> >> single
> >> >>> molecule at a time reaching 3000 molecule in total. Is there a way
> >> to
> >> >>> automate this process ?
> >> >>>
> >> >>> Thanks
> >> >>> Mohamed
> >> >>> --
> >> >>> Gromacs Users mailing list
> >> >>>
> >> >>> * Please search the archive at
> >> >>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> >> >>> posting!
> >> >>>
> >> >>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >> >>>
> >> >>> * For (un)subscribe requests visit
> >> >>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> >> or
> >> >> send
> >> >>> a mail to gmx-users-requ...@gromacs.org.
> >> >>>
> >> >>
> >> >> --
> >> >> Gromacs Users mailing list
> >> >>
> >> >> * Please search the archive at
> >> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> >> >> posting!
> >> >>
> >> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >> >>
> >> >> * For (un)subscribe requests visit
> >> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> >> >> send a mail to gmx-users-requ...@gromacs.org.
> >> >>
> >>
> >> --
> >> ==
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Assistant Professor
> >> Office: 301 Fralin Hall
> >> Lab: 303 Engel Hall
> >>
> >> Virginia Tech Department of Biochemistry
> >> 340 West Campus Dr.
> >> Blacksburg, VA 24061
> >>
> >> jalem...@vt.edu | (540) 231-3129
> >> http://www.thelemkullab.com
> >>
> >> ==
> >>
> >> --
> >> Gromacs Users mailing list
> >>
> >> * Please search the archive at
> >> 

Re: [gmx-users] GROMACS automation

2020-03-12 Thread John Whittaker
> I was planning to put the velocities in the .gro file from which I am
> inserting the molecules. If the gmx insert-molecules will ignore the
> velocities of the atoms, can you please guide me how can I insert the
> molecules with a velocity ?
> I have read a paper which mentions that they inserted the molecules  and
> the velocities of the ATOMS were sampled from a distribution with standard
> deviation =xx and mean = xxx.

Which paper?

You probably have to add the velocities yourself with a script that
samples from the Maxwell-Boltzmann distribution at the appropriate
temperature and adds those velocities to the atoms in your .gro file.
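
A rough sketch of such a script (it assumes a .gro file without velocity columns;
the single sigma, in nm/ps, is a placeholder - strictly, sigma = sqrt(kB*T/m) differs
per atom mass - and the z component is made negative so everything moves toward the
surface):

awk -v sigma=0.25 'BEGIN { srand(); pi = atan2(0, -1) }
  function gauss() { return sigma * sqrt(-2 * log(1 - rand())) * cos(2 * pi * rand()) }
  NR == 2 { natoms = $1 }
  NR <= 2 || NR > 2 + natoms { print; next }   # title, atom count and box lines pass through
  { vx = gauss(); vy = gauss(); vz = gauss()
    if (vz > 0) vz = -vz                       # keep only downward (-z) motion
    printf "%s%8.4f%8.4f%8.4f\n", $0, vx, vy, vz }' in.gro > out.gro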

Perhaps there is an easier way that someone can shed some light on, but
that's what immediately comes to mind.

- John

>
>
> On Wed, Mar 11, 2020 at 7:05 PM Justin Lemkul  wrote:
>
>>
>>
>> On 3/11/20 9:56 AM, Mohamed Abdelaal wrote:
>> > I want to insert an atom with a velocity moving downwards toward the
>> > graphene sheet in my box.
>> > Yes I need to remove any atom moving away from my substrate or the
>> > deposited atoms and far by 0.4 nm.
>> > Then repeat the process until I have inserted 3000 atoms.
>>
>> gmx insert-molecules has no knowledge of velocities, and atoms are not
>> inserted with any velocity, only a position.
>>
>> Removal of atoms is handled with gmx trjconv and a suitable index file
>> generated by gmx select (in this case, since you're specifying a
>> geometric criterion).
>>
>> -Justin
>>
>> > Thanks for your reply.
>> > Mohamed
>> >
>> > On Wed, Mar 11, 2020 at 13:43 John Whittaker <
>> > johnwhitt...@zedat.fu-berlin.de> wrote:
>> >
>> >> Write a script that calls gmx insert-molecules 3000 times and uses
>> the
>> >> previous output as input for each call.
>> >>
>> >>
>> >>
>> http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html
>> >>
>> >> Is there something you have to do in between each insertion?
>> >>
>> >> - John
>> >>
>> >>> Hello everybody,
>> >>>
>> >>> I am trying to insert molecules into a box but I have to insert one
>> >> single
>> >>> molecule at a time reaching 3000 molecule in total. Is there a way
>> to
>> >>> automate this process ?
>> >>>
>> >>> Thanks
>> >>> Mohamed
>> >>> --
>> >>> Gromacs Users mailing list
>> >>>
>> >>> * Please search the archive at
>> >>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> >>> posting!
>> >>>
>> >>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >>>
>> >>> * For (un)subscribe requests visit
>> >>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
>> or
>> >> send
>> >>> a mail to gmx-users-requ...@gromacs.org.
>> >>>
>> >>
>> >> --
>> >> Gromacs Users mailing list
>> >>
>> >> * Please search the archive at
>> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> >> posting!
>> >>
>> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >>
>> >> * For (un)subscribe requests visit
>> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> >> send a mail to gmx-users-requ...@gromacs.org.
>> >>
>>
>> --
>> ==
>>
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Office: 301 Fralin Hall
>> Lab: 303 Engel Hall
>>
>> Virginia Tech Department of Biochemistry
>> 340 West Campus Dr.
>> Blacksburg, VA 24061
>>
>> jalem...@vt.edu | (540) 231-3129
>> http://www.thelemkullab.com
>>
>> ==
>>
>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
> a mail to gmx-users-requ...@gromacs.org.
>


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS automation

2020-03-12 Thread Mohamed Abdelaal
I was planning to put the velocities in the .gro file from which I am
inserting the molecules. If gmx insert-molecules ignores the
velocities of the atoms, can you please guide me on how I can insert the
molecules with a velocity?
I have read a paper which mentions that they inserted the molecules and
that the velocities of the ATOMS were sampled from a distribution with standard
deviation = xx and mean = xxx.


On Wed, Mar 11, 2020 at 7:05 PM Justin Lemkul  wrote:

>
>
> On 3/11/20 9:56 AM, Mohamed Abdelaal wrote:
> > I want to insert an atom with a velocity moving downwards toward the
> > graphene sheet in my box.
> > Yes I need to remove any atom moving away from my substrate or the
> > deposited atoms and far by 0.4 nm.
> > Then repeat the process until I have inserted 3000 atoms.
>
> gmx insert-molecules has no knowledge of velocities, and atoms are not
> inserted with any velocity, only a position.
>
> Removal of atoms is handled with gmx trjconv and a suitable index file
> generated by gmx select (in this case, since you're specifying a
> geometric criterion).
>
> -Justin
>
> > Thanks for your reply.
> > Mohamed
> >
> > On Wed, Mar 11, 2020 at 13:43 John Whittaker <
> > johnwhitt...@zedat.fu-berlin.de> wrote:
> >
> >> Write a script that calls gmx insert-molecules 3000 times and uses the
> >> previous output as input for each call.
> >>
> >>
> >>
> http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html
> >>
> >> Is there something you have to do in between each insertion?
> >>
> >> - John
> >>
> >>> Hello everybody,
> >>>
> >>> I am trying to insert molecules into a box but I have to insert one
> >> single
> >>> molecule at a time reaching 3000 molecule in total. Is there a way to
> >>> automate this process ?
> >>>
> >>> Thanks
> >>> Mohamed
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS automation

2020-03-11 Thread Justin Lemkul




On 3/11/20 9:56 AM, Mohamed Abdelaal wrote:

I want to insert an atom with a velocity moving downwards toward the
graphene sheet in my box.
Yes I need to remove any atom moving away from my substrate or the
deposited atoms and far by 0.4 nm.
Then repeat the process until I have inserted 3000 atoms.


gmx insert-molecules has no knowledge of velocities, and atoms are not 
inserted with any velocity, only a position.
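
One untested workaround, if the inserted atom really needs a starting
velocity, is to append the velocity columns to the .gro after each insertion.
A rough awk sketch (assuming the newly inserted atom ends up as the last atom
in the file, the file has no velocity columns yet, and mu/sd are placeholders
for the mean and standard deviation of the downward z-velocity in nm/ps):

awk -v mu=-1.0 -v sd=0.2 '
BEGIN { srand() }
NR == 1 { print; next }                 # title line
NR == 2 { natoms = $1; print; next }    # number of atoms
NR <= natoms + 2 {
    pos = substr($0, 1, 44)             # keep name/number/position fields as-is
    vx = 0.0; vy = 0.0; vz = 0.0
    if (NR == natoms + 2) {             # last atom = the one just inserted
        u1 = 1 - rand(); u2 = rand()
        vz = mu + sd * sqrt(-2 * log(u1)) * cos(2 * 3.14159265 * u2)  # Box-Muller
    }
    printf "%s%8.4f%8.4f%8.4f\n", pos, vx, vy, vz
    next
}
{ print }                               # box line
' current.gro > current_with_vel.gro

For grompp to keep such velocities, the .mdp would also need gen_vel = no.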


Removal of atoms is handled with gmx trjconv and a suitable index file 
generated by gmx select (in this case, since you're specifying a 
geometric criterion).
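
A rough, untested sketch of that step (the index group and file names are
placeholders; "Substrate" stands for whatever group describes the graphene
sheet plus the already deposited atoms):

# keep the substrate plus everything within 0.4 nm of it
gmx select -s topol.tpr -f current.gro -on keep.ndx \
    -select 'group "Substrate" or within 0.4 of group "Substrate"'
# write a new configuration containing only the selected atoms
gmx trjconv -s topol.tpr -f current.gro -n keep.ndx -o trimmed.gro

Note that the topology then has to be updated to match the reduced atom count
before the next grompp.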


-Justin


Thanks for your reply.
Mohamed

On Wed, Mar 11, 2020 at 13:43 John Whittaker <
johnwhitt...@zedat.fu-berlin.de> wrote:


Write a script that calls gmx insert-molecules 3000 times and uses the
previous output as input for each call.


http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html

Is there something you have to do in between each insertion?

- John


Hello everybody,

I am trying to insert molecules into a box but I have to insert one

single

molecule at a time reaching 3000 molecule in total. Is there a way to
automate this process ?

Thanks
Mohamed



--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS automation

2020-03-11 Thread Mohamed Abdelaal
I want to insert an atom with a velocity moving downwards toward the
graphene sheet in my box.
Yes, I need to remove any atom that moves away from my substrate or the
deposited atoms and ends up farther than 0.4 nm from them.
Then repeat the process until I have inserted 3000 atoms.

Thanks for your reply.
Mohamed

On Wed, Mar 11, 2020 at 13:43 John Whittaker <
johnwhitt...@zedat.fu-berlin.de> wrote:

> Write a script that calls gmx insert-molecules 3000 times and uses the
> previous output as input for each call.
>
>
> http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html
>
> Is there something you have to do in between each insertion?
>
> - John
>
> > Hello everybody,
> >
> > I am trying to insert molecules into a box but I have to insert one
> single
> > molecule at a time reaching 3000 molecule in total. Is there a way to
> > automate this process ?
> >
> > Thanks
> > Mohamed
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS automation

2020-03-11 Thread John Whittaker
Write a script that calls gmx insert-molecules 3000 times and uses the
previous output as input for each call.

http://manual.gromacs.org/documentation/current/onlinehelp/gmx-insert-molecules.html
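
As a minimal, untested sketch (file names are placeholders; single.gro holds
the one molecule to insert):

cp start.gro current.gro
for i in $(seq 1 3000); do
    # insert one copy of the molecule into the current configuration
    gmx insert-molecules -f current.gro -ci single.gro -nmol 1 -o step_${i}.gro
    # whatever needs to happen between insertions would go here
    mv step_${i}.gro current.gro
done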

Is there something you have to do in between each insertion?

- John

> Hello everybody,
>
> I am trying to insert molecules into a box but I have to insert one single
> molecule at a time reaching 3000 molecule in total. Is there a way to
> automate this process ?
>
> Thanks
> Mohamed


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 2020

2020-03-09 Thread Mark Abraham
Hi,

The code is tested extensively on a range of compilers, so we believe it is
correct and compliant. In particular, you're using gcc 6.1.0 while GROMACS
tests with 6.4.0, so the issue might have been fixed in the meantime. As
newer versions of gcc and CUDA will give better performance, I suggest you
get CUDA 10.2 and gcc 8 and try again. Sorry!

Mark

On Mon, 9 Mar 2020 at 10:56, Turega, Simon  wrote:

> Hi all
>
> After attempting to install GROMACS 2020 on a new system, I get the error
> below. The system details and other information are below the error.
>
>  cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON
> -DCMAKE_INSTALL_PREFIX=/home/b5018993/src/gromacs-2020
> -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=gcc -DGMX_GPU=ON -Wno-dev
>
> make
>
> [ 84%] Building CXX object
> src/gromacs/CMakeFiles/libgromacs.dir/mdrun/runner.cpp.o
> In file included from
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/simulatorbuilder.h:48:0,
>  from
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/runner.cpp:161:
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/legacysimulator.h: In
> instantiation of 'std::unique_ptr
> gmx::SimulatorBuilder::build(bool, Args&& ...) [with Args = {_IO_FILE*&,
> t_commrec*&, gmx_multisim_t*&, gmx::MDLogger&, int, const t_filenm*,
> gmx_output_env_t*&, gmx::MdrunOptions&, gmx::StartingBehavior&,
> gmx_vsite_t*, gmx::Constraints*, gmx_enfrot*, gmx::BoxDeformation*,
> gmx::IMDOutputProvider*, const gmx::MdModulesNotifier&, t_inputrec*&,
> gmx::ImdSession*, pull_t*&, t_swap*&, gmx_mtop_t*, t_fcdata*&, t_state*,
> ObservablesHistory*, gmx::MDAtoms*, t_nrnb*, gmx_wallcycle*&, t_forcerec*&,
> gmx_enerdata_t*, gmx_ekindata_t*, gmx::MdrunScheduleWorkload*,
> ReplicaExchangeParameters&, gmx_membed_t*&, gmx_walltime_accounting*&,
> std::unique_ptr std::default_delete >, const bool&}]':
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/runner.cpp:1612:77:
>  required from here
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/legacysimulator.h:94:23:
> error: use of deleted function 'std::unique_ptr<_Tp, _Dp>::unique_ptr(const
> std::unique_ptr<_Tp, _Dp>&) [with _Tp = gmx::StopHandlerBuilder; _Dp =
> std::default_delete]'
>  using ISimulator::ISimulator;
>^~
> In file included from
> /cm/local/apps/gcc/6.1.0/include/c++/6.1.0/memory:81:0,
>  from
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/runner.h:48,
>  from
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/runner.cpp:46:
> /cm/local/apps/gcc/6.1.0/include/c++/6.1.0/bits/unique_ptr.h:356:7: note:
> declared here
>unique_ptr(const unique_ptr&) = delete;
>^~
> In file included from
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/runner.cpp:161:0:
> /home/b5018993/src/gromacs-2020/src/gromacs/mdrun/simulatorbuilder.h:123:45:
> note: synthesized method 'gmx::LegacySimulator::LegacySimulator(FILE*,
> t_commrec*, const gmx_multisim_t*, const gmx::MDLogger&, int, const
> t_filenm*, const gmx_output_env_t*, const gmx::MdrunOptions&,
> gmx::StartingBehavior, gmx_vsite_t*, gmx::Constraints*, gmx_enfrot*,
> gmx::BoxDeformation*, gmx::IMDOutputProvider*, const
> gmx::MdModulesNotifier&, t_inputrec*, gmx::ImdSession*, pull_t*, t_swap*,
> gmx_mtop_t*, t_fcdata*, t_state*, ObservablesHistory*, gmx::MDAtoms*,
> t_nrnb*, gmx_wallcycle*, t_forcerec*, gmx_enerdata_t*, gmx_ekindata_t*,
> gmx::MdrunScheduleWorkload*, const ReplicaExchangeParameters&,
> gmx_membed_t*, gmx_walltime_accounting*,
> std::unique_ptr, bool)' first required here
>  return std::unique_ptr(new
> LegacySimulator(std::forward(args)...));
>
>  ^~~~
> make[2]: *** [src/gromacs/CMakeFiles/libgromacs.dir/mdrun/runner.cpp.o]
> Error 1
> make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
> make: *** [all] Error 2
>
> System details
>
> Architecture: x86_64
> CPU op modes: 32-bit, 64-bit
>
> OS details
>
> CentOS Linux release 7.3.1611 (Core)
> Linux 3.10.0-327.22.2.el7.x86_64
>
> Compiler/ nvcc details
>
> openmpi/intel-ofed/gcc/64/1.10.4
> Cuda 90 toolkit/9.0.176
> gcc version 6.1.0
>
> Thanks in advance for your help
>
> Simon Turega
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 2020 fails to run adh_cubic_vsites

2020-03-07 Thread David van der Spoel

On 2020-03-07 at 17:04, Rolly Ng wrote:

Dear Gromacs developers,

  


I am new to Gromacs and I have recently compiled Gromacs 2020.

  


I tried to run the ADH benchmarks from
ftp://ftp.gromacs.org/pub/benchmarks/ADH_bench_systems.tar.gz

  


The adh_cubic and adh_dodec completed successfully, but the adh_cubic_vsites
and adh_dodec_vsites failed at grompp.

  


Could you please have a look at the following output log?


It actually tells you what is wrong in the input. -maxwarn 1 is your friend.
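
In this case that would be something like (untested):

gmx_mpi grompp -f pme_verlet_vsites.mdp -c conf.gro -p topol.top -maxwarn 1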
  


Thanks,

Rolly Ng

  


PhD, Former Research Fellow,

Department of Materials Science and Engineering

City University of Hong Kong

  


   :-) GROMACS - gmx grompp, 2020-UNCHECKED (-:

  


 GROMACS is written by:

  Emile Apol  Rossen Apostolov  Paul Bauer Herman J.C.
Berendsen

 Par Bjelkmar  Christian Blau   Viacheslav Bolnykh Kevin Boyd

Aldert van Buuren   Rudi van Drunen Anton Feenstra   Alan Gray

   Gerrit Groenhof Anca HamuraruVincent Hindriksen  M. Eric Irrgang

   Aleksei Iupinov   Christoph Junghans Joe Jordan Dimitrios
Karkoulis

 Peter KassonJiri Kraus  Carsten Kutzner  Per Larsson

   Justin A. LemkulViveca LindahlMagnus Lundborg Erik Marklund

 Pascal Merz Pieter MeulenhoffTeemu Murtola   Szilard Pall

 Sander Pronk  Roland Schulz  Michael ShirtsAlexey Shvetsov

Alfons Sijbers Peter Tieleman  Jon Vincent  Teemu Virolainen

Christian WennbergMaarten Wolf  Artem Zhmurov

and the project leaders:

 Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

  


Copyright (c) 1991-2000, University of Groningen, The Netherlands.

Copyright (c) 2001-2019, The GROMACS development team at

Uppsala University, Stockholm University and

the Royal Institute of Technology, Sweden.

check out http://www.gromacs.org for more information.

  


GROMACS is free software; you can redistribute it and/or modify it

under the terms of the GNU Lesser General Public License

as published by the Free Software Foundation; either version 2.1

of the License, or (at your option) any later version.

  


GROMACS:  gmx grompp, version 2020-UNCHECKED

Executable:   /home/rolly/Gromacs/gromacs-2020/install/bin/gmx_mpi

Data prefix:  /home/rolly/Gromacs/gromacs-2020/install

Working dir:  /home/rolly/Gromacs/ADH_bench/adh_cubic_vsites

Command line:

   gmx_mpi grompp -f pme_verlet_vsites.mdp -c conf.gro -p topol.top

  


Ignoring obsolete mdp entry 'ns_type'

Replacing old mdp entry 'nstxtcout' by 'nstxout-compressed'

Setting the LD random seed to -1974193353

Generated 2145 of the 2145 non-bonded parameter combinations

Generating 1-4 interactions: fudge = 0.5

Generated 2145 of the 2145 1-4 parameter combinations

Excluding 3 bonded neighbours molecule type 'Protein_chain_A'

turning all bonds into constraints...

Excluding 3 bonded neighbours molecule type 'Protein_chain_B'

turning all bonds into constraints...

Excluding 3 bonded neighbours molecule type 'Protein_chain_C'

turning all bonds into constraints...

Excluding 3 bonded neighbours molecule type 'Protein_chain_D'

turning all bonds into constraints...

Excluding 2 bonded neighbours molecule type 'SOL'

turning all bonds into constraints...

Excluding 2 bonded neighbours molecule type 'SOL'

Excluding 2 bonded neighbours molecule type 'SOL'

Excluding 2 bonded neighbours molecule type 'SOL'

Excluding 2 bonded neighbours molecule type 'SOL'

Excluding 1 bonded neighbours molecule type 'NA'

turning all bonds into constraints...

  


WARNING 1 [file topol.top, line 55]:

   The following macros were defined in the 'define' mdp field with the -D

   prefix, but were not used in the topology:

   VSITE

   If you haven't made a spelling error, either use the macro you defined,

   or don't define the macro

  


Cleaning up constraints and constant bonded interactions with virtual sites

Removed   1683   Angles with virtual sites, 8136 left

Removed   1587 Proper Dih.s with virtual sites, 16689 left

Converted 2918  Constraints with virtual sites to connections, 2473 left

Warning: removed 896 Constraints with vsite with Virtual site 3out
construction

  This vsite construction does not guarantee constant bond-length

  If the constructions were generated by pdb2gmx ignore this warning

Cleaning up constraints and constant bonded interactions with virtual sites

Removed   1683   Angles with virtual sites, 8136 left

Removed   1587 Proper Dih.s with virtual sites, 16689 left

Converted 2918  Constraints with virtual sites to connections, 2473 left

Warning: removed 896 Constraints with vsite with Virtual site 3out
construction

  This vsite construction does not guarantee constant bond-length

  If the constructions were generated by pdb2gmx ignore this 

Re: [gmx-users] Gromacs question

2020-02-06 Thread David van der Spoel

On 2020-02-06 at 15:15, Kneller, Daniel wrote:

Hi Dr. van der Spoel,

I had hoped to reply using the user-lists but I realize now that I had 
the user-lists in digest mode and did not want to start another thread.

CC-ing there anyway just so that it gets archived.


Thank you for responding to my question about using gmx dos and the 
vibrational power spectrum option of gmx velacc for a protein.
I am interested in quantifying only the vibrational frequency of protein 
secondary structure (10-60 cm^-1 ).
In this case, would gmx dos be viable given no limitations on computing 
resources?


I understand that in order to calculate density of states, one needs a 
simulation with integration steps every 1-2 fs and saved trajectories 
about every 2-4 fs. What would be considered a long enough simulation to 
be meaningful? What would be required to show sufficient convergence? Is 
it simply a matter of replicates or something more?
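
For concreteness, the kind of setup I have in mind (untested; file names are
placeholders) is:

# .mdp would use dt = 0.002 (2 fs steps) and nstvout = 2 (velocities every 4 fs)
gmx dos -f traj.trr -s topol.tpr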


From my experience with liquids, 100 ps is enough for a liquid with low 
viscosity, but not for a liquid with high viscosity. Then there are 
typically hundreds of molecules to average over. What that means for a 
protein is hard to tell, but for sure you need 100 times the time scale 
(of 10 ps) to come up to the same level of sampling. In addition, the 
protein is very viscous compared to a liquid; structure and dynamics have 
a long correlation time.


If you have a truly stable protein you may be better off doing a normal 
mode analysis.
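
A rough, untested sketch of that route (file names are placeholders; the
.mdp would use integrator = nm on a well-minimized structure):

gmx grompp -f nm.mdp -c minimized.gro -p topol.top -o nm.tpr
gmx mdrun -deffnm nm -v          # builds the Hessian (nm.mtx)
gmx nmeig -f nm.mtx -s nm.tpr    # diagonalizes it and writes the eigenfrequencies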


Thank you,

Daniel


*Daniel Kneller*
Postdoctoral Research Associate
Neutron Scattering Division
Neutron Sciences Directorate
Oak Ridge National Laboratory





--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs bug related to rms and atommass.dat interplay (Can not find mass in database for atom MG in residue)

2020-02-05 Thread Justin Lemkul



On 2/5/20 4:58 AM, Vedat Durmaz wrote:

Hi there,

I'm pretty sure it's not a feature but a bug, which I've faced in the GMX
versions 2018.7, 2019.5 and 2020.

When I try to calculate RMSD values for a protein system including a
catalytic magnesium ion "Mg" using the command

gmx rms -s mol.pdb -f mol.xtc -f2 mol.xtc -o mol-rmsd.xvg -debug

I get this error:


Can not find mass in database for atom MG in residue 1370 MG

---
Program: gmx rms, version 2019.5
Source file: src/gromacs/fileio/confio.cpp (line 517)

Fatal error:
Masses were requested, but for some atom(s) masses could not be found in the
database. Use a tpr file as input, if possible, or add these atoms to
the mass
database.

Let's accept for the moment that using a .tpr file with -s (rather than
a .pdb file) is not an option for me. Consequently, gmx retrieves atom
masses from atommass.dat, which actually contains the Mg ion:


; NOTE: longest names match
; General atoms
; '???' or '*' matches any residue name
???  H  1.00790
???  He 4.00260
???  Li 6.94100
???  Be 9.01220
???  B 10.81100
???  C 12.01070
???  N 14.00670
???  O 15.99940
???  F 18.99840
???  Ne    20.17970
???  Na    22.98970
???  Mg    24.30500    <<< in this line
???  Al    26.98150
???  Si    28.08550
???  P 30.97380


Having a look at the log file of "gmx rms -debug ...", I see some
strange output. Here's a snippet:


searching residue:  ??? atom:    H
  not successful
searching residue:  ??? atom:   He
  match:  ???    H
searching residue:  ??? atom:   Li
  not successful
searching residue:  ??? atom:   Be
  not successful
searching residue:  ??? atom:    B
  not successful
searching residue:  ??? atom:    C
  not successful
searching residue:  ??? atom:    N
  not successful
searching residue:  ??? atom:    O
  not successful
searching residue:  ??? atom:    F
  not successful
searching residue:  ??? atom:   Ne
  match:  ???    N
searching residue:  ??? atom:   Na
  match:  ???    N
searching residue:  ??? atom:   Mg
  not successful
searching residue:  ??? atom:   Al
  not successful
searching residue:  ??? atom:   Si
  not successful
searching residue:  ??? atom:    P
  not successful
searching residue:  ??? atom:    S
  not successful
searching residue:  ??? atom:   Cl
  match:  ???    C
  ...
searching residue:  ??? atom:   Cu
  match:  ???    C
...


I don't know what exactly the rms command is doing behind the scenes and
also don't want to agonize over the cpp code. But to me it seems as if
the element assignment is based only on the FIRST LETTER of each element
name. If I use copper (Cu) instead of magnesium, the program runs fine.
Now let's compare the Cu lines with the Mg lines in the debugging output
above.

searching residue:  ??? atom:   Cu
  match:  ???    C

searching residue:  ??? atom:   Mg
  not successful

Obviously, Cu is found because the first letter of its element
abbreviation, C (because Cu[index 0] is C) does exist as an element, but
there is no element M, the first letter of Mg. Here is a little test
(or rather, an improper workaround): if I add a line for the element
"M" to atommass.dat like this

???  Na    22.98970
???  Mg    24.30500
???  M 24.30500  <<< new line
???  Al    26.98150

then the rms command executed on the protein with Mg runs without any
error. But this observation also implies that the masses the rms command
retrieves from atommass.dat are wrong, because, e.g., each of Cl, Cr,
Cu and Co is assigned the mass of C.


I see two options for the GMX developers. Either you check the
rms <-> atommass.dat interplay, or you disable the possibility of using PDB
files with the -s option. However, I would strongly discourage the
latter, since there are cases where you have an xtc trajectory,
possibly generated with another tool, along with a pdb file, but you
don't want to spend too much time on the generation of a proper tpr file.


Please file a bug report on redmine.gromacs.org.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-31 Thread Szilárd Páll
Dear Ryan,


On Thu, Jan 30, 2020 at 11:31 PM Ryan Woltz  wrote:

> Dear Szilárd,
>
>  Thank you so much for your help. I performed the following steps
> and it seems to have built successfully, I'll let you know if it does not
> run correctly as well.
>
> rm -r gromacs-2020/
> sudo apt-get install gcc-8 g++-8
> tar -xvzf gromacs-2020.tar.gz
> cd gromacs-2020/
> mkdir build
> cd build
> CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../ -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=release
>

Note that you are still using the _default_ gcc installation, that is gcc 7
on Ubuntu 18.04. You should see on the first lines of the console output when
you run cmake something like:
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
which will clearly indicate the version of the compiler detected.

Unless you tell cmake to use the apt-get installed gcc-8, it will not use
it (and you can also verify that by inspecting the CMAKE_CXX_COMPILER
string in the CMakeCache.txt file).
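
For example, something like (untested, reusing the flags from your command):

cmake .. -DCMAKE_C_COMPILER=gcc-8 -DCMAKE_CXX_COMPILER=g++-8 \
    -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=release
# quick check of what cmake actually picked up:
grep CMAKE_CXX_COMPILER CMakeCache.txt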

make
> make check
> sudo make install
> source /usr/local/gromacs/bin/GMXRC
>
> Lastly, when I was building in default and ran into trouble I like to build
> in debug so it gives details about building and helps me identify source of
> the problems or identify relevant information to pass to you so you can
> better help me. I appreciate your comment about not building in debugging
> mode, but is there a way to run release in verbose mode? When I had
> problems with other programs I'd usually build my first time in debugger
> mode so I can monitor the process, then make clean and rebuild in default.
> Is there a better way to do this?
>

A "Debug" build (i.e. when you use CMAKE_BUILD_TYPE=Debug), is useful to
compile a program for running in a debugger (like gdb). You seem to instead
want a way to "debug" build-time issues. A debug build will not help in
that, you will not get additional information about compilation issues. You
can run "make VERBOSE=1" (regadless of the build type) with makefiles
generated by cmake to get a detailed information on the commands executed
during the build, but unless you have a compile- or link-time failure that
you want to track down that sea of output is generally not too useful.

Configure-time errors are stored by cmake in files listed after the usual
"-- Configuring incomplete, errors occurred!" error (files called
CMakeOutput/CMakeError.txt).


> Once again you help was greatly appreciated,
>
> Ryan
>
> PS again a few notes (if you have time to comment on anything incorrect) I
> have for people needing a fix in the future and maybe myself if I do this
> again in a few years and forget how.
>
> CUDA version (nvcc --version) is 9.1. This is a little confusing to me
> because you referenced CUDA 10.1 and I completely rebuilt this computer in
> September 2019, so unless there is a new driver since then it should be
> 10.1? I grabbed the newest drivers I could find but my computer is
> outputting 9.1 so I guess that is my version.
>

CUDA is not (just) the drivers; it is a number of software components that
allow compiling for, and running computations on, the GPU:
- the CUDA toolkit, latest of which is version 10.2 (as you can see here:
https://developer.nvidia.com/cuda-downloads?target_os=Linux), but the
Ubuntu 18.04 repositories seem to only have 9.1
- a display driver, confusingly versioned with numbers like 418.88 or
430.35, the Ubuntu packages are called "nvidia-driver-VERSION"

These two of course have to be compatible, so if you decide to download the
CUDA 10.2 installer from NVIDIA, this will include a compatible driver. Be
careful to completely remove the NVIDIA drivers installed from the Ubuntu
repositories prior to installing software with the NVIDIA installer!


when building gromacs and I specify gcc/g++ verison 5, 8, or 9 it fails
> with the original error message regarding glibc 23.2.
>
> CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../
> -DGMX_GPLUSPLUS_PATH=/usr/bin/g++-8 -DCUDA_HOST_COMPILER=gcc-8
> -DCMAKE_CXX_COMPILER=g++-8 -DCMAKE_C_COMPILER=/usr/bin/gcc-8
> -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=release
>

Not sure, but if you want to investigate further, I suggest starting with
installing a more recent CUDA toolkit version; 9.1 is already about two
years old.

Cheers,
--
Szilárd


> Do you know why this is? When I started this adventure I just had sudo
> apt-get install gcc g++ build-essentials. Then I used gcc-5 g++-5 and
> specified the version in the build step, which failed. after taking that
> out and running sudo apt-get install gcc-9 g++-9 it passes "CMAKE" but
> fails in "make". Based on your suggestions I ran the commands at the top of
> the email to which then worked. Would this have worked if I had just
> installed gcc-8 g++-8 from the beginning and ran CMAKE with no 

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-30 Thread Ryan Woltz
Dear Szilárd,

 Thank you so much for your help. I performed the following steps
and it seems to have built successfully; I'll let you know if it does not
run correctly as well.

rm -r gromacs-2020/
sudo apt-get install gcc-8 g++-8
tar -xvzf gromacs-2020.tar.gz
cd gromacs-2020/
mkdir build
cd build
CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../ -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=release
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC

Lastly, when I was building in default and ran into trouble I like to build
in debug so it gives details about building and helps me identify source of
the problems or identify relevant information to pass to you so you can
better help me. I appreciate your comment about not building in debugging
mode, but is there a way to run release in verbose mode? When I had
problems with other programs I'd usually build my first time in debugger
mode so I can monitor the process, then make clean and rebuild in default.
Is there a better way to do this?

Once again you help was greatly appreciated,

Ryan

PS: again, a few notes (if you have time to comment on anything incorrect)
that I have for people needing a fix in the future, and maybe for myself if
I do this again in a few years and forget how.

CUDA version (nvcc --version) is 9.1. This is a little confusing to me
because you referenced CUDA 10.1 and I completely rebuilt this computer in
September 2019, so unless there is a new driver since then it should be
10.1? I grabbed the newest drivers I could find but my computer is
outputting 9.1 so I guess that is my version.

When building GROMACS, if I specify gcc/g++ version 5, 8, or 9 it fails
with the original error message regarding glibc 2.23.

CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../
-DGMX_GPLUSPLUS_PATH=/usr/bin/g++-8 -DCUDA_HOST_COMPILER=gcc-8
-DCMAKE_CXX_COMPILER=g++-8 -DCMAKE_C_COMPILER=/usr/bin/gcc-8
-DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=release

Do you know why this is? When I started this adventure I just had sudo
apt-get install gcc g++ build-essentials. Then I used gcc-5 g++-5 and
specified the version in the build step, which failed. After taking that
out and running sudo apt-get install gcc-9 g++-9 it passed "CMAKE" but
failed in "make". Based on your suggestions I ran the commands at the top of
the email, which then worked. Would this have worked if I had just
installed gcc-8 g++-8 from the beginning and run CMAKE with no version
specification?


On Thu, Jan 30, 2020 at 5:50 AM Szilárd Páll  wrote:

> Dear Ryan,
>
> On Wed, Jan 29, 2020 at 10:35 PM Ryan Woltz  wrote:
>
> > Dear Szilárd,
> >
> >  Thank you for your quick response. You are correct, after
> > issuing sudo apt-get install gcc-9 g++-9 CMake was run with:
> >
>
> gcc 9 is not supported with CUDA, as far as I know version 8 is the latest
> supported gcc in CUDA 10.2 (officially "native support" whatever they mean
> by that is for 7.3 on Ubuntu 18.04.3, see
> https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)
>
> CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../ -DGMX_BUILD_OWN_FFTW=ON
> > -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> > -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug
> >
>
> Don't use a Debug build unless you want to debug the GROMACS tools (it's
> slow).
>
> Make sure that you cmake configuration does actually use the gcc version
> you intend to use. The default invocation as above will pick up the default
> compiler toolchain (e.g. /us/bin/gcc in your case, you can verify that by
> opening the CMakeCache.txt file or using ccmake) -- and I think the lack of
> proper AVX512 support in your default gcc 5 (which you are stil; using) is
> the source of the issues you report below.
>
> You can explicitly set the compiler by passing CMAKE_CXX_COMPILER at the
> configure step; for details see
>
> http://manual.gromacs.org/current/install-guide/index.html?highlight=cxx%20compiler#typical-installation
>
> Cheers,
> --
> Szilárd
>
>
> > However now I'm getting an error in make
> >
> > make VERBOSE=1
> >
> > error:
> >
> > [ 25%] Building CXX object
> >
> >
> src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o
> > In file included from
> >
> >
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512.h:46:0,
> >  from
> > /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/simd.h:146,
> >  from
> >
> >
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/nbnxm/nbnxm_simd.h:40,
> >  from
> >
> >
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp:49:
> >
> >
> 

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-30 Thread Szilárd Páll
Dear Ryan,

On Wed, Jan 29, 2020 at 10:35 PM Ryan Woltz  wrote:

> Dear Szilárd,
>
>  Thank you for your quick response. You are correct, after
> issuing sudo apt-get install gcc-9 g++-9 CMake was run with:
>

gcc 9 is not supported with CUDA; as far as I know, version 8 is the latest
supported gcc in CUDA 10.2 (officially, "native support", whatever they mean
by that, is for 7.3 on Ubuntu 18.04.3, see
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)

CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../ -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug
>

Don't use a Debug build unless you want to debug the GROMACS tools (it's
slow).

Make sure that your cmake configuration does actually use the gcc version
you intend to use. The default invocation as above will pick up the default
compiler toolchain (e.g. /usr/bin/gcc in your case; you can verify that by
opening the CMakeCache.txt file or using ccmake) -- and I think the lack of
proper AVX512 support in your default gcc 5 (which you are still using) is
the source of the issues you report below.

You can explicitly set the compiler by passing CMAKE_CXX_COMPILER at the
configure step; for details see
http://manual.gromacs.org/current/install-guide/index.html?highlight=cxx%20compiler#typical-installation

Cheers,
--
Szilárd


> However now I'm getting an error in make
>
> make VERBOSE=1
>
> error:
>
> [ 25%] Building CXX object
>
> src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o
> In file included from
>
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512.h:46:0,
>  from
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/simd.h:146,
>  from
>
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/nbnxm/nbnxm_simd.h:40,
>  from
>
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp:49:
>
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512_util_float.h:
> In function ‘void gmx::gatherLoadTransposeHsimd(const float*, const float*,
> const int32_t*, gmx::SimdFloat*, gmx::SimdFloat*) [with int align = 2;
> int32_t = int]’:
>
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512_util_float.h:422:28:
> error: the last argument must be scale 1, 2, 4, 8
>  tmp1 = _mm512_castpd_ps(
> ^
>
> /home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512_util_float.h:424:28:
> error: the last argument must be scale 1, 2, 4, 8
>  tmp2 = _mm512_castpd_ps(
> ^
> src/gromacs/CMakeFiles/libgromacs.dir/build.make:13881: recipe for target
>
> 'src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o'
> failed
> make[2]: ***
>
> [src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o]
> Error 1
> CMakeFiles/Makefile2:2910: recipe for target
> 'src/gromacs/CMakeFiles/libgromacs.dir/all' failed
> make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
> Makefile:162: recipe for target 'all' failed
> make: *** [all] Error 2
>
> after doing a 1 hour google I found discussions saying that the error
> (Makefile:162: recipe for target 'all' failed) is too vague with no general
> solution. I found fixes for headers and other files for other programs but
> not fixes for this file. The fix linked below is for gromacs-2018 and a
> different file but the general problem seems to suggest it still is a
> gcc/g++ version compatibility error correct? Any suggestions for this
> error?
>
> https://redmine.gromacs.org/issues/2312
>
>
> Thank you so much,
>
> Ryan
>
>  PS Just to document for anyone else going through what I did for
> Gromacs-2020 these were my steps.
>
> sudo apt-get install gcc g++
> cmake ../ -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
>
>  I then received multiple errors complaining about nvcc/C++
> incompatibility. After researching found errors for previous gromacs
> versions suggesting to use gcc-5 (but as you suggested this error has been
> patched).
>
> sudo apt-get install gcc-5 g++-5
> CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../
> -DGMX_GPLUSPLUS_PATH=/usr/bin/g++-5 -DCUDA_HOST_COMPILER=gcc-5
> -DCMAKE_CXX_COMPILER=g++-5 -DCMAKE_C_COMPILER=/usr/bin/gcc-5
> -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug
> -D_FORCE_INLINES=OFF
>
> Received different error described in previous email and solved with your
> suggested solution. The key might be to specifically install latest version
> number i.e.
>
> sudo apt-get install gcc-X 

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-29 Thread Ryan Woltz
Dear Szilárd,

 Thank you for your quick response. You are correct, after
issuing sudo apt-get install gcc-9 g++-9 CMake was run with:

CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../ -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug

However now I'm getting an error in make

make VERBOSE=1

error:

[ 25%] Building CXX object
src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o
In file included from
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512.h:46:0,
 from
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/simd.h:146,
 from
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/nbnxm/nbnxm_simd.h:40,
 from
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp:49:
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512_util_float.h:
In function ‘void gmx::gatherLoadTransposeHsimd(const float*, const float*,
const int32_t*, gmx::SimdFloat*, gmx::SimdFloat*) [with int align = 2;
int32_t = int]’:
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512_util_float.h:422:28:
error: the last argument must be scale 1, 2, 4, 8
 tmp1 = _mm512_castpd_ps(
^
/home/rlwoltz/protein_modeling/gromacs-2020/src/gromacs/simd/impl_x86_avx_512/impl_x86_avx_512_util_float.h:424:28:
error: the last argument must be scale 1, 2, 4, 8
 tmp2 = _mm512_castpd_ps(
^
src/gromacs/CMakeFiles/libgromacs.dir/build.make:13881: recipe for target
'src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o'
failed
make[2]: ***
[src/gromacs/CMakeFiles/libgromacs.dir/nbnxm/kernels_simd_2xmm/kernel_ElecEwTwinCut_VdwLJEwCombGeom_F.cpp.o]
Error 1
CMakeFiles/Makefile2:2910: recipe for target
'src/gromacs/CMakeFiles/libgromacs.dir/all' failed
make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2

After an hour of googling I found discussions saying that the error
(Makefile:162: recipe for target 'all' failed) is too vague to have a general
solution. I found fixes for headers and other files for other programs but
not fixes for this file. The fix linked below is for gromacs-2018 and a
different file, but the general problem seems to suggest it is still a
gcc/g++ version compatibility error, correct? Any suggestions for this error?

https://redmine.gromacs.org/issues/2312


Thank you so much,

Ryan

 PS: Just to document, for anyone else going through what I did for
Gromacs-2020, these were my steps.

sudo apt-get install gcc g++
cmake ../ -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON

 I then received multiple errors complaining about nvcc/C++
incompatibility. After researching, I found errors reported for previous
gromacs versions suggesting the use of gcc-5 (but as you suggested, this
error has since been patched).

sudo apt-get install gcc-5 g++-5
CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../
-DGMX_GPLUSPLUS_PATH=/usr/bin/g++-5 -DCUDA_HOST_COMPILER=gcc-5
-DCMAKE_CXX_COMPILER=g++-5 -DCMAKE_C_COMPILER=/usr/bin/gcc-5
-DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug
-D_FORCE_INLINES=OFF

I received the different error described in the previous email and solved it
with your suggested solution. The key might be to specifically install the
latest version number, i.e.

sudo apt-get install gcc-X g++-X (with X being the largest number
available).




On Wed, Jan 29, 2020 at 2:05 AM Szilárd Páll  wrote:

> Hi Ryan,
>
> The issue you linked has been worked around in the build system, so my
> guess is that the issue you are seeing is not related.
>
> I would recommend that you update your software stack to the latest version
> (both CUDA 9.1 and gcc 5 are a few years old). On Ubuntu 18.04 you should
> be able to get gcc 8 through the package manager. Together with
> upgrading to the latest CUDA might well solve your issues.
>
> Let us know if that worked!
>
> Cheers,
> --
> Szilárd
>
>
> On Wed, Jan 29, 2020 at 12:14 AM Ryan Woltz  wrote:
>
> > Hello Gromacs experts,
> >
> >   First things first, I apologize for any double post but I just
> > joined the community so I'm very new and only found 1-2 posts related to
> my
> > problem but the solutions did not work. I have been doing MD for about
> > 6-months using NAMD but want to also try out Gromacs. That being said I
> am
> > slightly familiar with CPU modeling programs like Rosetta, but I am
> totally
> > lost when it comes to fixing errors using GPU accelerated code for CUDA.
> I
> > did find that at one point my error was fixed for an earlier version of
> 

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-29 Thread Szilárd Páll
Hi Ryan,

The issue you linked has been worked around in the build system, so my
guess is that the issue you are seeing is not related.

I would recommend that you update your software stack to the latest versions
(both CUDA 9.1 and gcc 5 are a few years old). On Ubuntu 18.04 you should
be able to get gcc 8 through the package manager. Together with
upgrading to the latest CUDA, that might well solve your issues.
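
e.g. (package names as in the Ubuntu 18.04 repositories):

sudo apt-get install gcc-8 g++-8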

Let us know if that worked!

Cheers,
--
Szilárd


On Wed, Jan 29, 2020 at 12:14 AM Ryan Woltz  wrote:

> Hello Gromacs experts,
>
>   First things first, I apologize for any double post but I just
> joined the community so I'm very new and only found 1-2 posts related to my
> problem but the solutions did not work. I have been doing MD for about
> 6-months using NAMD but want to also try out Gromacs. That being said I am
> slightly familiar with CPU modeling programs like Rosetta, but I am totally
> lost when it comes to fixing errors using GPU accelerated code for CUDA. I
> did find that at one point my error was fixed for an earlier version of
> Gromacs but Gromacs-2020 may have resurfaced the same error again, here is
> what I think my error is:
>
> https://redmine.gromacs.org/issues/1982
>
> I am running Ubuntu 18.04.03 LTS, and gromacs-2020 I did initially have
> the gcc/nvcc incompatible but I think installing and using gcc-5/g++-5
> version command in cmake has fixed that issue. I have a NVIDIA card with
> CUDA-9.1 driver when I type nvcc --version.
>
> my cmake command is as follows:
>
> CMAKE_PREFIX_PATH=/usr/:/usr/local/cuda/ cmake ../
> -DGMX_GPLUSPLUS_PATH=/usr/bin/g++-5 -DCUDA_HOST_COMPILER=gcc-5
> -DCMAKE_CXX_COMPILER=g++-5 -DCMAKE_C_COMPILER=/usr/bin/gcc-5
> -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug (I did
> try adding -D_FORCE_INLINES= based on the link above in my running command
> but it did not work). I did look at the error log but it is way over my
> head. I have in addition deleted the CMakeCache.txt file or the unpacked
> Gromacs and re-unzipped it to restart the cmake process to make sure it was
> starting "clean". Is there any additional information I could provide? Does
> anyone have a suggestion? Again I'm sorry if this is a duplicate,
> everything I found on other sites was way over my head and I generally
> understand what is going on but the forums I read on possible solutions
> seem way over my head and I'm afraid I will break the driver if I attempt
> them (which has happened to me already and the computer required a full
> reinstall).
>
> here is last lines from the build:
>
> -- Found HWLOC: /usr/lib/x86_64-linux-gnu/libhwloc.so (found suitable
> version "1.11.6", minimum required is "1.5")
> -- Looking for C++ include pthread.h
> -- Looking for C++ include pthread.h - found
> -- Atomic operations found
> -- Performing Test PTHREAD_SETAFFINITY
> -- Performing Test PTHREAD_SETAFFINITY - Success
> -- Adding work-around for issue compiling CUDA code with glibc 2.23
> string.h
> -- Check for working NVCC/C++ compiler combination with nvcc
> '/usr/local/cuda/bin/nvcc'
> -- Check for working NVCC/C compiler combination - broken
> -- /usr/local/cuda/bin/nvcc standard output: ''
> -- /usr/local/cuda/bin/nvcc standard error:
>  '/home/rlwoltz/protein_modeling/gromacs-2020/build/gcc-5: No such file or
> directory
> '
> CMake Error at cmake/gmxManageNvccConfig.cmake:189 (message):
>   CUDA compiler does not seem to be functional.
> Call Stack (most recent call first):
>   cmake/gmxManageGPU.cmake:207 (include)
>   CMakeLists.txt:577 (gmx_gpu_setup)
>
>
> -- Configuring incomplete, errors occurred!
> See also
>
> "/home/rlwoltz/protein_modeling/gromacs-2020/build/CMakeFiles/CMakeOutput.log".
> See also
>
> "/home/rlwoltz/protein_modeling/gromacs-2020/build/CMakeFiles/CMakeError.log".
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs 2019 - Ryzen Architecture

2020-01-09 Thread Szilárd Páll
Good catch Kevin, that is likely an issue -- at least part of it.

Note that you can also use the mdrun -multidir functionality to avoid
having to manually manage mdrun process placement and pinning.
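
An untested sketch of what that could look like (assuming an MPI-enabled
build, gmx_mpi, and four run directories run1..run4, each with its own tpr):

mpirun -np 4 gmx_mpi mdrun -multidir run1 run2 run3 run4 -ntomp 8 -pin on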

Another aspect is that if you leave half of the CPU cores unused, the cores
in use can boost to a higher clock rate and therefore complete the CPU work
quicker; since part of this work does not overlap with the GPU, this affects
the fraction of time the GPU is idle (and hence also the time the GPU is
busy). For a fair comparison, run something on those otherwise idle cores
(at least a "stress -c 8" or possibly a CPU-only mdrun); generally this is
how we evaluate performance as a function of CPU cores per GPU.

Cheers,
--
Szilárd


On Sat, Jan 4, 2020 at 9:11 PM Kevin Boyd  wrote:

> Hi,
>
> A few things besides any Ryzen-specific issues. First, your pinoffset for
> the second one should be 16, not 17. The way yours is set up, you're
> running on cores 0-15, then Gromacs will detect that your second
> simulation parameters are invalid (because from cores 17-32, core 32 does
> not exist) and turn off core pinning. You can verify that in the log file.
>
> Second, 16 threads per simulation is overkill, and you can get gains from
> stealing from GPU down-time by running 2 simulations per GPU. So I would
> suggest something like
>
> mdrun -nt 8 -pin on -pinoffset 0 -gpu_id 0 &
> mdrun -nt 8 -pin on -pinoffset 8 -gpu_id 0 &
> mdrun -nt 8 -pin on -pinoffset 16 -gpu_id 1 &
> mdrun -nt 8 -pin on -pinoffset 24 -gpu_id 1
>
> might give you close to optimal performance.
>
> On Thu, Jan 2, 2020 at 5:32 AM Paul bauer  wrote:
>
> > Hello,
> >
> > we only added full detection and support for the newer Ryzen chip-sets
> > with GROMACS 2019.5, so please try if the update to this version solves
> > your issue.
> > If not, please open an issue on redmine.gromacs.org so we can track the
> > problem and try to solve it.
> >
> > Cheers
> >
> > Paul
> >
> > On 02/01/2020 13:26, Sandro Wrzalek wrote:
> > > Hi,
> > >
> > > happy new year!
> > >
> > > Now to my problem:
> > >
> > > I use Gromacs 2019.3 and to try to run some simulations (roughly 30k
> > > atoms per system) on my PC which has the following configuration:
> > >
> > > CPU: Ryzen 3950X (overclocked to 4.1 GHz)
> > >
> > > GPU #1: Nvidia RTX 2080 Ti
> > >
> > > GPU #2: Nvidia RTX 2080 Ti
> > >
> > > RAM: 64 GB
> > >
> > > PSU: 1600 Watts
> > >
> > >
> > > Each run uses one GPU and 16 of 32 logical cores. Doing only one run
> > > at time (gmx mdrun -deffnm rna0 -gpu_id 0 -nb gpu -pme gpu) the GPU
> > > utilization is roughly around 84% but if I add a second run, the
> > > utilization of both GPUs drops to roughly 20%, while leaving logical
> > > cores 17-32 idle (I changed parameter gpu_id, accordingly).
> > >
> > > Adding additional parameters for each run:
> > >
> > > gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 0 -gpu_id 0 -nb gpu
> > > -pme gpu
> > >
> > > gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 17 -gpu_id 1 -nb gpu
> > > -pme gpu
> > >
> > > I get a utilization of 78% per GPU, which is nice but not near the 84%
> > > I got with only one run. In theory, however, it should come at least
> > > close to that utilization.
> > >
> > > I suspect, the Ryzen Chiplet design as the culprit since Gromacs seems
> > > to prefer the the first Chiplet, even if two simultaneous simulations
> > > have the resources to occupy both. The reason for the 78% utilization
> > > could be because of overhead between the two Chiplets via the infinity
> > > band. However, I have no proof, nor am I able to explain why gmx mdrun
> > > -deffnm rna0 -nt 16 -gpu_id 0 & 1 -nb gpu -pme gpu works as well -
> > > seems to occupy free logical cores then.
> > >
> > > Long story short:
> > >
> > > Are there any workarounds to squeeze the last bit out of my setup? Is
> > > it possible to choose the logical cores manually (I did not found
> > > anything in the docs so far)?
> > >
> > >
> > > Thank you for your help!
> > >
> > >
> > > Best,
> > >
> > > Sandro
> > >
> >
> > --
> > Paul Bauer, PhD
> > GROMACS Development Manager
> > KTH Stockholm, SciLifeLab
> > 0046737308594
> >
-- 
Gromacs Users mailing 

Re: [gmx-users] Gromacs 2019 - Ryzen Architecture

2020-01-04 Thread Kevin Boyd
Hi,

A few things besides any Ryzen-specific issues. First, your pinoffset for
the second run should be 16, not 17. The way yours is set up, you're
running on cores 0-15, and then Gromacs will detect that your second
simulation's parameters are invalid (because it would span cores 17-32, and
core 32 does not exist) and turn off core pinning. You can verify that in
the log file.

Second, 16 threads per simulation is overkill, and you can get gains from
stealing from GPU down-time by running 2 simulations per GPU. So I would
suggest something like

mdrun -nt 8 -pin on -pinoffset 0 -gpu_id 0 &
mdrun -nt 8 -pin on -pinoffset 8 -gpu_id 0 &
mdrun -nt 8 -pin on -pinoffset 16 -gpu_id 1 &
mdrun -nt 8 -pin on -pinoffset 24 -gpu_id 1

might give you close to optimal performance.

On Thu, Jan 2, 2020 at 5:32 AM Paul bauer  wrote:

> Hello,
>
> we only added full detection and support for the newer Ryzen chip-sets
> with GROMACS 2019.5, so please try if the update to this version solves
> your issue.
> If not, please open an issue on redmine.gromacs.org so we can track the
> problem and try to solve it.
>
> Cheers
>
> Paul
>
> On 02/01/2020 13:26, Sandro Wrzalek wrote:
> > Hi,
> >
> > happy new year!
> >
> > Now to my problem:
> >
> > I use Gromacs 2019.3 and to try to run some simulations (roughly 30k
> > atoms per system) on my PC which has the following configuration:
> >
> > CPU: Ryzen 3950X (overclocked to 4.1 GHz)
> >
> > GPU #1: Nvidia RTX 2080 Ti
> >
> > GPU #2: Nvidia RTX 2080 Ti
> >
> > RAM: 64 GB
> >
> > PSU: 1600 Watts
> >
> >
> > Each run uses one GPU and 16 of 32 logical cores. Doing only one run
> > at a time (gmx mdrun -deffnm rna0 -gpu_id 0 -nb gpu -pme gpu), the GPU
> > utilization is roughly 84%, but if I add a second run, the
> > utilization of both GPUs drops to roughly 20%, while leaving logical
> > cores 17-32 idle (I changed the gpu_id parameter accordingly).
> >
> > Adding additional parameters for each run:
> >
> > gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 0 -gpu_id 0 -nb gpu
> > -pme gpu
> >
> > gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 17 -gpu_id 1 -nb gpu
> > -pme gpu
> >
> > I get a utilization of 78% per GPU, which is nice but not near the 84%
> > I got with only one run. In theory, however, it should come at least
> > close to that utilization.
> >
> > I suspect the Ryzen chiplet design is the culprit, since Gromacs seems
> > to prefer the first chiplet even if two simultaneous simulations
> > have the resources to occupy both. The reason for the 78% utilization
> > could be overhead between the two chiplets via the Infinity Fabric
> > interconnect. However, I have no proof, nor can I explain why gmx mdrun
> > -deffnm rna0 -nt 16 -gpu_id 0 & 1 -nb gpu -pme gpu works as well -
> > it seems to occupy the free logical cores then.
> >
> > Long story short:
> >
> > Are there any workarounds to squeeze the last bit out of my setup? Is
> > it possible to choose the logical cores manually (I have not found
> > anything in the docs so far)?
> >
> >
> > Thank you for your help!
> >
> >
> > Best,
> >
> > Sandro
> >
>
> --
> Paul Bauer, PhD
> GROMACS Development Manager
> KTH Stockholm, SciLifeLab
> 0046737308594
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2020 official release

2020-01-04 Thread Paul bauer

Hello again,

I was notified that there is an issue with the first link: it needs to be
without the final dot:


http://manual.gromacs.org/2020/release-notes/index.html

Sorry for the inconvenience and happy new year!

Cheers

Paul

On 01/01/2020 18:11, Paul bauer wrote:

Hello GROMACS users!

The official release of GROMACS 2020 is now available.

What new things can you expect? Please see the release notes 
highlights at

http://manual.gromacs.org/2020/release-notes/index.html.

You can find the code, manual, release notes, installation 
instructions and

test suite at the links below.

Code: ftp://ftp.gromacs.org/pub/gromacs/gromacs-2020.tar.gz
Documentation: http://manual.gromacs.org/2020/index.html
(includes install guide, user guide, reference manual, and release notes)
Test Suite: 
http://gerrit.gromacs.org/download/regressiontests-2020.tar.gz


Happy simulating!



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 2019 - Ryzen Architecture

2020-01-02 Thread Paul bauer

Hello,

we only added full detection and support for the newer Ryzen chipsets
with GROMACS 2019.5, so please check whether updating to this version solves
your issue.
If not, please open an issue on redmine.gromacs.org so we can track the 
problem and try to solve it.


Cheers

Paul

On 02/01/2020 13:26, Sandro Wrzalek wrote:

Hi,

happy new year!

Now to my problem:

I use Gromacs 2019.3 and try to run some simulations (roughly 30k
atoms per system) on my PC, which has the following configuration:


CPU: Ryzen 3950X (overclocked to 4.1 GHz)

GPU #1: Nvidia RTX 2080 Ti

GPU #2: Nvidia RTX 2080 Ti

RAM: 64 GB

PSU: 1600 Watts


Each run uses one GPU and 16 of 32 logical cores. Doing only one run
at a time (gmx mdrun -deffnm rna0 -gpu_id 0 -nb gpu -pme gpu), the GPU
utilization is roughly 84%, but if I add a second run, the
utilization of both GPUs drops to roughly 20%, while leaving logical
cores 17-32 idle (I changed the gpu_id parameter accordingly).


Adding additional parameters for each run:

gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 0 -gpu_id 0 -nb gpu 
-pme gpu


gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 17 -gpu_id 1 -nb gpu 
-pme gpu


I get a utilization of 78% per GPU, which is nice but not near the 84% 
I got with only one run. In theory, however, it should come at least 
close to that utilization.


I suspect the Ryzen chiplet design is the culprit, since Gromacs seems
to prefer the first chiplet even if two simultaneous simulations
have the resources to occupy both. The reason for the 78% utilization
could be overhead between the two chiplets via the Infinity Fabric
interconnect. However, I have no proof, nor can I explain why gmx mdrun
-deffnm rna0 -nt 16 -gpu_id 0 & 1 -nb gpu -pme gpu works as well -
it seems to occupy the free logical cores then.


Long story short:

Are there any workarounds to squeeze the last bit out of my setup? Is
it possible to choose the logical cores manually (I have not found
anything in the docs so far)?



Thank you for your help!


Best,

Sandro



--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2020 release candidate

2019-12-23 Thread Paul bauer

Hello,

we have not been able to port everything from the group scheme to Verlet
yet, so some things will be unsupported in 2020.
For those things I would advise people to stick to 2019 for the time
being, while we are working on porting the remaining features over for the
2021 release next year.
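For reference, a minimal sketch of how the old scheme is still selected in a
2019-series .mdp file (the option was removed in the 2020 series):

```
; sketch only: keeps the group scheme, and with it Buckingham, usable in 2019.x
cutoff-scheme = group
```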


Cheers

Paul

On 23/12/2019 12:47, Tafelmeier, Stefanie wrote:

Dear Paul,

I just installed the Gromacs 2020 rc1 for testing and would like to give some
feedback.

As we are working partly with the flexible Williams force field, it is
necessary to use the Buckingham potential.
Until now this was only possible when using the group cutoff scheme.

I was excited to hear that the group scheme - which takes ages - will be gone
with the new Gromacs versions, and that Verlet should support everything.
But when trying it, it says:

---
Program: gmx mdrun, version 2020-rc1
Source file: src/gromacs/mdlib/forcerec.cpp (line 1313)

Fatal error:
Verlet cutoff-scheme is not supported with Buckingham

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

But the group scheme cannot be used anymore.

Is there any other way to use Buckingham now?

Many thanks for your answer in advance.
Best wishes,
Steffi





-Ursprüngliche Nachricht-
Von: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 Im Auftrag von Paul bauer
Gesendet: Freitag, 20. Dezember 2019 15:54
An: gromacs.org_gmx-users@maillist.sys.kth.se; gmx-annou...@gromacs.org
Betreff: [gmx-users] GROMACS 2020 release candidate

Hi GROMACS users,

The GROMACS 2020 release candidate is now out and available!

As before, we are making the testing versions available for you to be able to 
get feedback on how well things are working, and what could be improved or if 
there are any bugs in the code we have missed ourselves.

We really appreciate your testing of the new release with your kinds of simulation on 
your hardware, both for correctness and performance. This is particularly important if 
you are using "interesting" hardware or compilers, because we can't test all of 
them!

As before, please do not use this version for doing science you plan to publish 
- even though it should be stable now, we still want to use the last weeks to 
iron out any remaining issues that might show up. Similarly, please don’t use 
this version as a base for a project that bundles or forks GROMACS.

What new things can you expect? (See the release notes for more details.)
* Running all parts of a simulation on the GPU by offloading the update and 
constraint calculations
* Fitting structures into experimental density maps
* The improved Python API

There’s lots of other new things, and a few old things removed - please see the 
release notes for the complete list. All the content of GROMACS 2019.4 is 
present, apart from features that have been removed.

If all goes to plan, we hope to ship the final 2020 release in time for the New 
Year, but that relies on people joining in and helping us test! We hope you 
will consider making that contribution, so that we can continue to deliver 
high-quality free simulation software that will be useful to you on January 1.

You can find the code, manual, release notes, installation instructions and 
testsuite at the links below.

Code: ftp://ftp.gromacs.org/pub/gromacs/gromacs-2020-rc1.tar.gz
Documentation: http://manual.gromacs.org/2020-rc1/index.html
(includes install guide, user guide, reference manual) Release Notes:
http://manual.gromacs.org/2020-rc1/release-notes/index.html
Test Suite:
http://gerrit.gromacs.org/download/regressiontests-2020-rc1.tar.gz

Happy testing!

--
Paul Bauer, PhD
GROMACS Release Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.



--
Paul Bauer, PhD
GROMACS Release Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS 2020 release candidate

2019-12-23 Thread Tafelmeier, Stefanie
Dear Paul,

I just installed the Gromacs 2020 rc1 for testing and would like to give some
feedback.

As we are working partly with the flexible Williams force field, it is
necessary to use the Buckingham potential.
Until now this was only possible when using the group cutoff scheme.

I was excited to hear that the group scheme - which takes ages - will be gone
with the new Gromacs versions, and that Verlet should support everything.
But when trying it, it says:

---
Program: gmx mdrun, version 2020-rc1
Source file: src/gromacs/mdlib/forcerec.cpp (line 1313)

Fatal error:
Verlet cutoff-scheme is not supported with Buckingham

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

But the group scheme cannot be used anymore.

Is there any other way to use Buckingham now?

Many thanks for your answer in advance.
Best wishes,
Steffi





-Ursprüngliche Nachricht-
Von: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 Im Auftrag von Paul bauer
Gesendet: Freitag, 20. Dezember 2019 15:54
An: gromacs.org_gmx-users@maillist.sys.kth.se; gmx-annou...@gromacs.org
Betreff: [gmx-users] GROMACS 2020 release candidate

Hi GROMACS users,

The GROMACS 2020 release candidate is now out and available!

As before, we are making the testing versions available for you to be able to 
get feedback on how well things are working, and what could be improved or if 
there are any bugs in the code we have missed ourselves.

We really appreciate your testing of the new release with your kinds of 
simulation on your hardware, both for correctness and performance. This is 
particularly important if you are using "interesting" hardware or compilers, 
because we can't test all of them!

As before, please do not use this version for doing science you plan to publish 
- even though it should be stable now, we still want to use the last weeks to 
iron out any remaining issues that might show up. Similarly, please don’t use 
this version as a base for a project that bundles or forks GROMACS.

What new things can you expect? (See the release notes for more details.)
* Running all parts of a simulation on the GPU by offloading the update and 
constraint calculations
* Fitting structures into experimental density maps
* The improved Python API

There’s lots of other new things, and a few old things removed - please see the 
release notes for the complete list. All the content of GROMACS 2019.4 is 
present, apart from features that have been removed.

If all goes to plan, we hope to ship the final 2020 release in time for the New 
Year, but that relies on people joining in and helping us test! We hope you 
will consider making that contribution, so that we can continue to deliver 
high-quality free simulation software that will be useful to you on January 1.

You can find the code, manual, release notes, installation instructions and 
testsuite at the links below.

Code: ftp://ftp.gromacs.org/pub/gromacs/gromacs-2020-rc1.tar.gz
Documentation: http://manual.gromacs.org/2020-rc1/index.html
(includes install guide, user guide, reference manual) Release Notes:
http://manual.gromacs.org/2020-rc1/release-notes/index.html
Test Suite:
http://gerrit.gromacs.org/download/regressiontests-2020-rc1.tar.gz

Happy testing!

--
Paul Bauer, PhD
GROMACS Release Manager
KTH Stockholm, SciLifeLab
0046737308594

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs compilation problem

2019-12-20 Thread Mark Abraham
Hi,

The FFTW build you are trying to do uses gcc by default, and your gcc is so
old that AVX512 didn't exist yet, so it can't compile for it. Since you're
using the Intel compiler, it's easiest to also use its FFT library (MKL) with
cmake -DGMX_BUILD_OWN_FFTW=OFF -DGMX_FFT_LIBRARY=mkl
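As a sketch (not from Mark's mail), the full invocation could then look roughly
like the original cmake command with those two flags swapped in; icpc for the
C++ compiler and the install prefix are assumptions to be adapted:

```
# Sketch only: Intel compilers + MKL instead of building the bundled FFTW.
cmake .. \
  -DCMAKE_C_COMPILER=icc \
  -DCMAKE_CXX_COMPILER=icpc \
  -DCMAKE_INSTALL_PREFIX=~/software/gromacs \
  -DGMX_FFT_LIBRARY=mkl \
  -DGMX_BUILD_OWN_FFTW=OFF \
  -DGMX_SIMD=AVX_512
```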

The harder alternative is to update gcc, or to make the bundled FFTW build
(GMX_BUILD_OWN_FFTW) use the Intel compilers.

Mark

On Thu., 19 Dec. 2019, 19:31 Tuanan Lourenço, 
wrote:

> Hi everyone,
>
> I am having some issues with the installation of Gromacs using the
> optimization flag -DGMX_SIMD=AVX_512.
>
> If I install without the optimization, everything goes OK and the software
> works fine. But, as recommended by Gromacs in the log file, I should
> compile using the optimization flag -DGMX_SIMD=AVX_512. However, when I try,
> I get this error:
>
> configure: error: Need a version of gcc with -mavx512f
>
> I am using icc compiler version 19.1.0.166 and gcc 4.8.5, which I think
> should be enough.
>
> My cmake command line is:
>
> cmake ..
>
> -DCMAKE_CXX_COMPILER=/opt/intel/compilers_and_libraries_2020.0.166/linux/bin/intel64/icc
>
> -DCMAKE_C_COMPILER=/opt/intel/compilers_and_libraries_2020.0.166/linux/bin/intel64/icc
> -DCMAKE_INSTALL_PREFIX=~/software/gromacs -DGMX_BUILD_OWN_FFTW=ON
> -DGMX_SIMD=AVX_512 -DGMX_STDLIB_CXX_FLAGS=/usr/bin/gcc
>
>
> Is there any tip to fix it?
>
> Thanks
>
> --
> __
> Dr. Tuanan C. Lourenço
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs 2019.4 compliantion with GPU support

2019-12-12 Thread Mark Abraham
Hi,

I suspect that you have multiple versions of hwloc on your system, and
somehow the environment is different at cmake time and make time (e.g.
different modules loaded?). If so, don't do that. Otherwise, cmake
-DGMX_HWLOC=off will work well enough. I've proposed a probable fix for
future 2019 versions.
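As a sketch, that is Wen's original cmake line with only the hwloc flag changed
(all other flags as posted):

```
# Sketch only: same configuration as before, but with hwloc support disabled.
cmake3 .. -DCMAKE_INSTALL_PREFIX=~/gromacs/gmx2019.4 \
          -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON \
          -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_HWLOC=OFF
```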

Mark

On Thu, 12 Dec 2019 at 12:43, bonjour899  wrote:

> Hello,
>
>
> I'm trying to install gromacs-2019.4 with GPU support, but it always goes
> wrong.
> I ran cmake as
> cmake3 .. -DCMAKE_INSTALL_PREFIX=~/gromacs/gmx2019.4
> -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_HWLOC=ON
> It works, but when installing I always get error messages. (Attached please
> find the messages I got after running cmake and the error messages for
> installing.)
> Sorry for posting so much information, I really want to know how to solve
> this.
> Thanks.
>
>
> Wen
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-05 Thread Szilárd Páll
Can you please file an issue on redmine.gromacs.org and attach the inputs
that reproduce the behavior described?

--
Szilárd

On Wed, Dec 4, 2019, 21:35 Chenou Zhang  wrote:

> We did test that.
> Our cluster has total 11 GPU nodes and I ran 20 tests over all of them. 7
> out of the 20 tests did have the potential energy jump issue and they were
> running on 5 different nodes.
> So I tend to believe this issue happens on any of those nodes.
>
> On Wed, Dec 4, 2019 at 1:14 PM Szilárd Páll 
> wrote:
>
> > The fact that you are observing errors, that the energies are off by so
> > much, and that it reproduces with multiple inputs suggests that this may
> > not be a code issue. Did you do all the runs that failed on the same
> > hardware? Have you excluded the option that one of those GeForce cards may
> > be flaky?
> >
> > --
> > Szilárd
> >
> >
> > On Wed, Dec 4, 2019 at 7:47 PM Chenou Zhang  wrote:
> >
> > > We tried the same gmx settings in 2019.4 with different protein
> systems.
> > > And we got the same weird potential energy jump  within 1000 steps.
> > >
> > > ```
> > >
> > > Step   Time
> > >   00.0
> > >  Energies (kJ/mol)
> > >BondU-BProper Dih.  Improper Dih.  CMAP
> > Dih.
> > > 2.08204e+049.92358e+046.53063e+041.06706e+03
> >  -2.75672e+02
> > >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> > recip.
> > > 1.50031e+04   -4.86857e+043.10386e+04   -1.09745e+06
> > 4.81832e+03
> > >   PotentialKinetic En.   Total Energy  Conserved En.
> > Temperature
> > >-9.09123e+052.80635e+05   -6.28487e+05   -6.28428e+05
> > 3.04667e+02
> > >  Pressure (bar)   Constr. rmsd
> > >-1.56013e+003.60634e-06
> > >
> > > DD  step 999 load imb.: force 14.6%  pme mesh/force 0.581
> > >Step   Time
> > >10002.0
> > >
> > > Energies (kJ/mol)
> > >BondU-BProper Dih.  Improper Dih.  CMAP
> > Dih.
> > > 2.04425e+049.92768e+046.52873e+041.02016e+03
> >  -2.45851e+02
> > >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> > recip.
> > > 1.49863e+04   -4.91092e+043.10572e+04   -1.09508e+06
> > 4.97942e+03
> > >   PotentialKinetic En.   Total Energy  Conserved En.
> > Temperature
> > > 1.35726e+352.77598e+051.35726e+351.35726e+35
> > 3.01370e+02
> > >  Pressure (bar)   Constr. rmsd
> > >-7.55250e+013.63239e-06
> > >
> > >  DD  step 1999 load imb.: force 16.1%  pme mesh/force 0.598
> > >Step   Time
> > >20004.0
> > >
> > > Energies (kJ/mol)
> > >BondU-BProper Dih.  Improper Dih.  CMAP
> > Dih.
> > > 1.99521e+049.97482e+046.49595e+041.00798e+03
> >  -2.42567e+02
> > >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> > recip.
> > > 1.50156e+04   -4.85324e+043.01944e+04   -1.09620e+06
> > 4.82958e+03
> > >   PotentialKinetic En.   Total Energy  Conserved En.
> > Temperature
> > > 1.35726e+352.79206e+051.35726e+351.35726e+35
> > 3.03115e+02
> > >  Pressure (bar)   Constr. rmsd
> > >-5.50508e+013.64353e-06
> > >
> > > DD  step 2999 load imb.: force 16.6%  pme mesh/force 0.602
> > >Step   Time
> > >30006.0
> > >
> > >
> > > Energies (kJ/mol)
> > >BondU-BProper Dih.  Improper Dih.  CMAP
> > Dih.
> > > 1.98590e+049.88100e+046.50934e+041.07048e+03
> >  -2.38831e+02
> > >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> > recip.
> > > 1.49609e+04   -4.93079e+043.12273e+04   -1.09582e+06
> > 4.83209e+03
> > >   PotentialKinetic En.   Total Energy  Conserved En.
> > Temperature
> > > 1.35726e+352.79438e+051.35726e+351.35726e+35
> > 3.03367e+02
> > >  Pressure (bar)   Constr. rmsd
> > > 7.62438e+013.61574e-06
> > >
> > > ```
> > >
> > > On Mon, Dec 2, 2019 at 2:13 PM Mark Abraham 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > What driver version is reported in the respective log files? Does the
> > > error
> > > > persist if mdrun -notunepme is used?
> > > >
> > > > Mark
> > > >
> > > > On Mon., 2 Dec. 2019, 21:18 Chenou Zhang,  wrote:
> > > >
> > > > > Hi Gromacs developers,
> > > > >
> > > > > I'm currently running gromacs 2019.4 on our university's HPC
> cluster.
> > > To
> > > > > fully utilize the GPU nodes, I followed notes on
> > > > >
> > > > >
> > > >
> > >
> >
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> > > > > .
> > > > >
> > > > >
> > > > > And here is the command I used for my runs.
> > > > > ```
> > > > > gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu
> > > > -ntomp
> > > > > 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS -cpt
> > 60
> > > > -cpi
> > > > > -noappend
> > > > > 

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-04 Thread Chenou Zhang
We did test that.
Our cluster has 11 GPU nodes in total, and I ran 20 tests across all of them.
7 out of the 20 tests had the potential energy jump issue, and they were
running on 5 different nodes.
So I tend to believe this issue can happen on any of those nodes.

On Wed, Dec 4, 2019 at 1:14 PM Szilárd Páll  wrote:

> The fact that you are observing errors, that the energies are off by so
> much, and that it reproduces with multiple inputs suggests that this may
> not be a code issue. Did you do all the runs that failed on the same
> hardware? Have you excluded the option that one of those GeForce cards may
> be flaky?
>
> --
> Szilárd
>
>
> On Wed, Dec 4, 2019 at 7:47 PM Chenou Zhang  wrote:
>
> > We tried the same gmx settings in 2019.4 with different protein systems.
> > And we got the same weird potential energy jump  within 1000 steps.
> >
> > ```
> >
> > Step   Time
> >   00.0
> >  Energies (kJ/mol)
> >BondU-BProper Dih.  Improper Dih.  CMAP
> Dih.
> > 2.08204e+049.92358e+046.53063e+041.06706e+03
>  -2.75672e+02
> >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> recip.
> > 1.50031e+04   -4.86857e+043.10386e+04   -1.09745e+06
> 4.81832e+03
> >   PotentialKinetic En.   Total Energy  Conserved En.
> Temperature
> >-9.09123e+052.80635e+05   -6.28487e+05   -6.28428e+05
> 3.04667e+02
> >  Pressure (bar)   Constr. rmsd
> >-1.56013e+003.60634e-06
> >
> > DD  step 999 load imb.: force 14.6%  pme mesh/force 0.581
> >Step   Time
> >10002.0
> >
> > Energies (kJ/mol)
> >BondU-BProper Dih.  Improper Dih.  CMAP
> Dih.
> > 2.04425e+049.92768e+046.52873e+041.02016e+03
>  -2.45851e+02
> >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> recip.
> > 1.49863e+04   -4.91092e+043.10572e+04   -1.09508e+06
> 4.97942e+03
> >   PotentialKinetic En.   Total Energy  Conserved En.
> Temperature
> > 1.35726e+352.77598e+051.35726e+351.35726e+35
> 3.01370e+02
> >  Pressure (bar)   Constr. rmsd
> >-7.55250e+013.63239e-06
> >
> >  DD  step 1999 load imb.: force 16.1%  pme mesh/force 0.598
> >Step   Time
> >20004.0
> >
> > Energies (kJ/mol)
> >BondU-BProper Dih.  Improper Dih.  CMAP
> Dih.
> > 1.99521e+049.97482e+046.49595e+041.00798e+03
>  -2.42567e+02
> >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> recip.
> > 1.50156e+04   -4.85324e+043.01944e+04   -1.09620e+06
> 4.82958e+03
> >   PotentialKinetic En.   Total Energy  Conserved En.
> Temperature
> > 1.35726e+352.79206e+051.35726e+351.35726e+35
> 3.03115e+02
> >  Pressure (bar)   Constr. rmsd
> >-5.50508e+013.64353e-06
> >
> > DD  step 2999 load imb.: force 16.6%  pme mesh/force 0.602
> >Step   Time
> >30006.0
> >
> >
> > Energies (kJ/mol)
> >BondU-BProper Dih.  Improper Dih.  CMAP
> Dih.
> > 1.98590e+049.88100e+046.50934e+041.07048e+03
>  -2.38831e+02
> >   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul.
> recip.
> > 1.49609e+04   -4.93079e+043.12273e+04   -1.09582e+06
> 4.83209e+03
> >   PotentialKinetic En.   Total Energy  Conserved En.
> Temperature
> > 1.35726e+352.79438e+051.35726e+351.35726e+35
> 3.03367e+02
> >  Pressure (bar)   Constr. rmsd
> > 7.62438e+013.61574e-06
> >
> > ```
> >
> > On Mon, Dec 2, 2019 at 2:13 PM Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > What driver version is reported in the respective log files? Does the
> > error
> > > persist if mdrun -notunepme is used?
> > >
> > > Mark
> > >
> > > On Mon., 2 Dec. 2019, 21:18 Chenou Zhang,  wrote:
> > >
> > > > Hi Gromacs developers,
> > > >
> > > > I'm currently running gromacs 2019.4 on our university's HPC cluster.
> > To
> > > > fully utilize the GPU nodes, I followed notes on
> > > >
> > > >
> > >
> >
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> > > > .
> > > >
> > > >
> > > > And here is the command I used for my runs.
> > > > ```
> > > > gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu
> > > -ntomp
> > > > 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS -cpt
> 60
> > > -cpi
> > > > -noappend
> > > > ```
> > > >
> > > > And for some of those runs, they might fail with the following error:
> > > > ```
> > > > ---
> > > >
> > > > Program: gmx mdrun, version 2019.4
> > > >
> > > > Source file: src/gromacs/gpu_utils/cudautils.cuh (line 229)
> > > >
> > > > MPI rank:3 (out of 8)
> > > >
> > > >
> > > >
> > > > Fatal error:
> > > >
> > > > cudaStreamSynchronize failed: an illegal memory access was
> 

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-04 Thread Szilárd Páll
The fact that you are observing errors, that the energies are off by so
much, and that it reproduces with multiple inputs suggests that this may
not be a code issue. Did you do all the runs that failed on the same
hardware? Have you excluded the option that one of those GeForce cards may
be flaky?

--
Szilárd


On Wed, Dec 4, 2019 at 7:47 PM Chenou Zhang  wrote:

> We tried the same gmx settings in 2019.4 with different protein systems.
> And we got the same weird potential energy jump  within 1000 steps.
>
> ```
>
> Step   Time
>   00.0
>  Energies (kJ/mol)
>BondU-BProper Dih.  Improper Dih.  CMAP Dih.
> 2.08204e+049.92358e+046.53063e+041.06706e+03   -2.75672e+02
>   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
> 1.50031e+04   -4.86857e+043.10386e+04   -1.09745e+064.81832e+03
>   PotentialKinetic En.   Total Energy  Conserved En.Temperature
>-9.09123e+052.80635e+05   -6.28487e+05   -6.28428e+053.04667e+02
>  Pressure (bar)   Constr. rmsd
>-1.56013e+003.60634e-06
>
> DD  step 999 load imb.: force 14.6%  pme mesh/force 0.581
>Step   Time
>10002.0
>
> Energies (kJ/mol)
>BondU-BProper Dih.  Improper Dih.  CMAP Dih.
> 2.04425e+049.92768e+046.52873e+041.02016e+03   -2.45851e+02
>   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
> 1.49863e+04   -4.91092e+043.10572e+04   -1.09508e+064.97942e+03
>   PotentialKinetic En.   Total Energy  Conserved En.Temperature
> 1.35726e+352.77598e+051.35726e+351.35726e+353.01370e+02
>  Pressure (bar)   Constr. rmsd
>-7.55250e+013.63239e-06
>
>  DD  step 1999 load imb.: force 16.1%  pme mesh/force 0.598
>Step   Time
>20004.0
>
> Energies (kJ/mol)
>BondU-BProper Dih.  Improper Dih.  CMAP Dih.
> 1.99521e+049.97482e+046.49595e+041.00798e+03   -2.42567e+02
>   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
> 1.50156e+04   -4.85324e+043.01944e+04   -1.09620e+064.82958e+03
>   PotentialKinetic En.   Total Energy  Conserved En.Temperature
> 1.35726e+352.79206e+051.35726e+351.35726e+353.03115e+02
>  Pressure (bar)   Constr. rmsd
>-5.50508e+013.64353e-06
>
> DD  step 2999 load imb.: force 16.6%  pme mesh/force 0.602
>Step   Time
>30006.0
>
>
> Energies (kJ/mol)
>BondU-BProper Dih.  Improper Dih.  CMAP Dih.
> 1.98590e+049.88100e+046.50934e+041.07048e+03   -2.38831e+02
>   LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
> 1.49609e+04   -4.93079e+043.12273e+04   -1.09582e+064.83209e+03
>   PotentialKinetic En.   Total Energy  Conserved En.Temperature
> 1.35726e+352.79438e+051.35726e+351.35726e+353.03367e+02
>  Pressure (bar)   Constr. rmsd
> 7.62438e+013.61574e-06
>
> ```
>
> On Mon, Dec 2, 2019 at 2:13 PM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > What driver version is reported in the respective log files? Does the
> error
> > persist if mdrun -notunepme is used?
> >
> > Mark
> >
> > On Mon., 2 Dec. 2019, 21:18 Chenou Zhang,  wrote:
> >
> > > Hi Gromacs developers,
> > >
> > > I'm currently running gromacs 2019.4 on our university's HPC cluster.
> To
> > > fully utilize the GPU nodes, I followed notes on
> > >
> > >
> >
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> > > .
> > >
> > >
> > > And here is the command I used for my runs.
> > > ```
> > > gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu
> > -ntomp
> > > 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS -cpt 60
> > -cpi
> > > -noappend
> > > ```
> > >
> > > And for some of those runs, they might fail with the following error:
> > > ```
> > > ---
> > >
> > > Program: gmx mdrun, version 2019.4
> > >
> > > Source file: src/gromacs/gpu_utils/cudautils.cuh (line 229)
> > >
> > > MPI rank:3 (out of 8)
> > >
> > >
> > >
> > > Fatal error:
> > >
> > > cudaStreamSynchronize failed: an illegal memory access was encountered
> > >
> > >
> > >
> > > For more information and tips for troubleshooting, please check the
> > GROMACS
> > >
> > > website at http://www.gromacs.org/Documentation/Errors
> > > ```
> > >
> > > we also had a different error from slurm system:
> > > ```
> > > ^Mstep 4400: timed with pme grid 96 96 60, coulomb cutoff 1.446: 467.9
> > > M-cycles
> > > ^Mstep 4600: timed with pme grid 96 96 64, coulomb cutoff 1.372: 451.4
> > > M-cycles
> > > /var/spool/slurmd/job2321134/slurm_script: line 44: 29866 Segmentation
> > > fault  gmx mdrun -v -s $TPR -deffnm 

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-04 Thread Chenou Zhang
We tried the same gmx settings in 2019.4 with different protein systems,
and we got the same weird potential energy jump within 1000 steps.

```

Step   Time
  00.0
 Energies (kJ/mol)
   BondU-BProper Dih.  Improper Dih.  CMAP Dih.
2.08204e+049.92358e+046.53063e+041.06706e+03   -2.75672e+02
  LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
1.50031e+04   -4.86857e+043.10386e+04   -1.09745e+064.81832e+03
  PotentialKinetic En.   Total Energy  Conserved En.Temperature
   -9.09123e+052.80635e+05   -6.28487e+05   -6.28428e+053.04667e+02
 Pressure (bar)   Constr. rmsd
   -1.56013e+003.60634e-06

DD  step 999 load imb.: force 14.6%  pme mesh/force 0.581
   Step   Time
   10002.0

Energies (kJ/mol)
   BondU-BProper Dih.  Improper Dih.  CMAP Dih.
2.04425e+049.92768e+046.52873e+041.02016e+03   -2.45851e+02
  LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
1.49863e+04   -4.91092e+043.10572e+04   -1.09508e+064.97942e+03
  PotentialKinetic En.   Total Energy  Conserved En.Temperature
1.35726e+352.77598e+051.35726e+351.35726e+353.01370e+02
 Pressure (bar)   Constr. rmsd
   -7.55250e+013.63239e-06

 DD  step 1999 load imb.: force 16.1%  pme mesh/force 0.598
   Step   Time
   20004.0

Energies (kJ/mol)
   BondU-BProper Dih.  Improper Dih.  CMAP Dih.
1.99521e+049.97482e+046.49595e+041.00798e+03   -2.42567e+02
  LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
1.50156e+04   -4.85324e+043.01944e+04   -1.09620e+064.82958e+03
  PotentialKinetic En.   Total Energy  Conserved En.Temperature
1.35726e+352.79206e+051.35726e+351.35726e+353.03115e+02
 Pressure (bar)   Constr. rmsd
   -5.50508e+013.64353e-06

DD  step 2999 load imb.: force 16.6%  pme mesh/force 0.602
   Step   Time
   30006.0


Energies (kJ/mol)
   BondU-BProper Dih.  Improper Dih.  CMAP Dih.
1.98590e+049.88100e+046.50934e+041.07048e+03   -2.38831e+02
  LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.
1.49609e+04   -4.93079e+043.12273e+04   -1.09582e+064.83209e+03
  PotentialKinetic En.   Total Energy  Conserved En.Temperature
1.35726e+352.79438e+051.35726e+351.35726e+353.03367e+02
 Pressure (bar)   Constr. rmsd
7.62438e+013.61574e-06

```

On Mon, Dec 2, 2019 at 2:13 PM Mark Abraham 
wrote:

> Hi,
>
> What driver version is reported in the respective log files? Does the error
> persist if mdrun -notunepme is used?
>
> Mark
>
> On Mon., 2 Dec. 2019, 21:18 Chenou Zhang,  wrote:
>
> > Hi Gromacs developers,
> >
> > I'm currently running gromacs 2019.4 on our university's HPC cluster. To
> > fully utilize the GPU nodes, I followed notes on
> >
> >
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> > .
> >
> >
> > And here is the command I used for my runs.
> > ```
> > gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu
> -ntomp
> > 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS -cpt 60
> -cpi
> > -noappend
> > ```
> >
> > And for some of those runs, they might fail with the following error:
> > ```
> > ---
> >
> > Program: gmx mdrun, version 2019.4
> >
> > Source file: src/gromacs/gpu_utils/cudautils.cuh (line 229)
> >
> > MPI rank:3 (out of 8)
> >
> >
> >
> > Fatal error:
> >
> > cudaStreamSynchronize failed: an illegal memory access was encountered
> >
> >
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> >
> > website at http://www.gromacs.org/Documentation/Errors
> > ```
> >
> > we also had a different error from slurm system:
> > ```
> > ^Mstep 4400: timed with pme grid 96 96 60, coulomb cutoff 1.446: 467.9
> > M-cycles
> > ^Mstep 4600: timed with pme grid 96 96 64, coulomb cutoff 1.372: 451.4
> > M-cycles
> > /var/spool/slurmd/job2321134/slurm_script: line 44: 29866 Segmentation
> > fault  gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on
> -nb
> > gpu -ntomp 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS
> > -cpt 60 -cpi -noappend
> > ```
> >
> > We first thought this could due to compiler issue and tried different
> > settings as following:
> > ===test1===
> > 
> > module load cuda/9.2.88.1
> > module load gcc/7.3.0
> > . /home/rsexton2/Library/gromacs/2019.4/test1/bin/GMXRC
> > 
> > ===test2===
> > 
> > module load cuda/9.2.88.1
> > module load gcc/6x
> > . /home/rsexton2/Library/gromacs/2019.4/test2/bin/GMXRC
> > 
> > ===test3===
> > 
> > module load cuda/9.2.148
> > module 

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-03 Thread Chenou Zhang
Hi,

I've run 30 tests with the -notunepme option. I got the following error
from one of them (which is still the same *cudaStreamSynchronize failed*
error):


```
DD  step 1422999  vol min/aver 0.639  load imb.: force  1.1%  pme
mesh/force 1.079
   Step   Time

1423000 2846.0



   Energies (kJ/mol)

   BondU-BProper Dih.  Improper Dih.  CMAP Dih.

3.79755e+041.78943e+051.22798e+052.83835e+03   -9.19303e+02

  LJ-14 Coulomb-14LJ (SR)   Coulomb (SR)   Coul. recip.

2.56547e+045.11714e+059.77218e+03   -2.07148e+068.64504e+03

  PotentialKinetic En.   Total Energy  Conserved En.Temperature

7.64126e+134.79398e+057.64126e+137.64126e+133.58009e+02

 Pressure (bar)   Constr. rmsd

   -6.03201e+014.56399e-06





---

Program: gmx mdrun, version 2019.4

Source file: src/gromacs/gpu_utils/cudautils.cuh (line 229)

MPI rank:2 (out of 8)



Fatal error:

cudaStreamSynchronize failed: an illegal memory access was encountered



For more information and tips for troubleshooting, please check the GROMACS

website at http://www.gromacs.org/Documentation/Errors

---
```

Here is the command and the driver info:

```
Command line:

  gmx mdrun -v -s md_seed_fixed.tpr -deffnm md_seed_fixed -ntmpi 8 -pin on
-nb gpu -ntomp 3 -pme gpu -pmefft gpu -notunepme -npme 1 -gputasks 00112233
-maxh 2 -cpt 60 -cpi -noappend


GROMACS version:2019.4

Precision:  single

Memory model:   64 bit

MPI library:thread_mpi

OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)

GPU support:CUDA

SIMD instructions:  AVX2_256

FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512

RDTSCP usage:   enabled

TNG support:enabled

Hwloc support:  hwloc-1.11.2

Tracing support:disabled

C compiler: /packages/7x/gcc/gcc-7.3.0/bin/gcc GNU 7.3.0

C compiler flags:-mavx2 -mfma -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /packages/7x/gcc/gcc-7.3.0/bin/g++ GNU 7.3.0

C++ compiler flags:  -mavx2 -mfma-std=c++11   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /packages/7x/cuda/9.2.88.1/bin/nvcc nvcc: NVIDIA (R)
Cuda compiler driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on
Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, V9.2.88
CUDA compiler
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;;
;-mavx2;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:9.20

CUDA runtime:   9.20





Running on 1 node with total 24 cores, 24 logical cores, 4 compatible GPUs

Hardware detected:

  CPU info:

Vendor: Intel

Brand:  Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz

Family: 6   Model: 79   Stepping: 1

Features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma hle htt intel
lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp
rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
  Hardware topology: Full, with devices

Sockets, cores, and logical processors:

  Socket  0: [   0] [   1] [   2] [   3] [   4] [   5] [   6] [   7] [
  8] [   9] [  10] [  11]
  Socket  1: [  12] [  13] [  14] [  15] [  16] [  17] [  18] [  19] [
 20] [  21] [  22] [  23]
Numa nodes:

  Node  0 (34229563392 bytes mem):   0   1   2   3   4   5   6   7   8
  9  10  11
  Node  1 (34359738368 bytes mem):  12  13  14  15  16  17  18  19  20
 21  22  23
  Latency:

   0 1

 0  1.00  2.10

 1  2.10  1.00

Caches:

  L1: 32768 bytes, linesize 64 bytes, assoc. 8, shared 1 ways

  L2: 262144 bytes, linesize 64 bytes, assoc. 8, shared 1 ways

  L3: 31457280 bytes, linesize 64 bytes, assoc. 20, shared 12 ways
 PCI devices:

  :01:00.0  Id: 15b3:1007  Class: 0x0200  Numa: 0

  :02:00.0  Id: 10de:1b06  Class: 0x0300  Numa: 0

  :03:00.0  Id: 10de:1b06  Class: 0x0300  Numa: 0

  :00:11.4  Id: 8086:8d62  Class: 0x0106  Numa: 0

  :06:00.0  Id: 1a03:2000  Class: 0x0300  Numa: 0

  :00:1f.2  Id: 8086:8d02  Class: 0x0106  Numa: 0

  :81:00.0  Id: 8086:1521  Class: 0x0200  Numa: 1

  :81:00.1  Id: 8086:1521  Class: 0x0200  Numa: 1

  :82:00.0  Id: 15b3:1007  Class: 0x0280  Numa: 1

  :83:00.0  Id: 10de:1b06  Class: 0x0300  Numa: 1

  :84:00.0  Id: 10de:1b06  Class: 0x0300  Numa: 1

  GPU info:

Number of GPUs detected: 4

#0: NVIDIA GeForce 

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-02 Thread Chenou Zhang
For the error:
```
^Mstep 4400: timed with pme grid 96 96 60, coulomb cutoff 1.446: 467.9
M-cycles
^Mstep 4600: timed with pme grid 96 96 64, coulomb cutoff 1.372: 451.4
M-cycles
/var/spool/slurmd/job2321134/slurm_script: line 44: 29866 Segmentation
fault  gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb
gpu -ntomp 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS
-cpt 60 -cpi -noappend
```
I got these driver info:
```
GROMACS:  gmx mdrun, version 2019.4

Executable:   /home/rsexton2/Library/gromacs/2019.4/test1/bin/gmx

Data prefix:  /home/rsexton2/Library/gromacs/2019.4/test1

Working dir:  /scratch/czhan178/project/NapA-2019.4/gromacs_test_1/test_9

Process ID:   29866

Command line:

  gmx mdrun -v -s md_seed_fixed.tpr -deffnm md_seed_fixed -ntmpi 8 -pin on
-nb gpu -ntomp 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh 2
-cpt 60 -cpi -noappend


GROMACS version:2019.4

Precision:  single

Memory model:   64 bit

MPI library:thread_mpi

OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)

GPU support:CUDA

SIMD instructions:  AVX2_256

FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512

RDTSCP usage:   enabled

TNG support:enabled

Hwloc support:  hwloc-1.11.2

Tracing support:disabled

C compiler: /packages/7x/gcc/gcc-7.3.0/bin/gcc GNU 7.3.0

C compiler flags:-mavx2 -mfma -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /packages/7x/gcc/gcc-7.3.0/bin/g++ GNU 7.3.0

C++ compiler flags:  -mavx2 -mfma-std=c++11   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /packages/7x/cuda/9.2.88.1/bin/nvcc nvcc: NVIDIA (R)
Cuda compiler driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on
Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, V9.2.88
CUDA compiler
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;;
;-mavx2;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:9.20

CUDA runtime:   9.20
```

I'll run -notunepme option and get you updated.

Chenou

On Mon, Dec 2, 2019 at 2:13 PM Mark Abraham 
wrote:

> Hi,
>
> What driver version is reported in the respective log files? Does the error
> persist if mdrun -notunepme is used?
>
> Mark
>
> On Mon., 2 Dec. 2019, 21:18 Chenou Zhang,  wrote:
>
> > Hi Gromacs developers,
> >
> > I'm currently running gromacs 2019.4 on our university's HPC cluster. To
> > fully utilize the GPU nodes, I followed notes on
> >
> >
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> > .
> >
> >
> > And here is the command I used for my runs.
> > ```
> > gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu
> -ntomp
> > 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS -cpt 60
> -cpi
> > -noappend
> > ```
> >
> > And for some of those runs, they might fail with the following error:
> > ```
> > ---
> >
> > Program: gmx mdrun, version 2019.4
> >
> > Source file: src/gromacs/gpu_utils/cudautils.cuh (line 229)
> >
> > MPI rank:3 (out of 8)
> >
> >
> >
> > Fatal error:
> >
> > cudaStreamSynchronize failed: an illegal memory access was encountered
> >
> >
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> >
> > website at http://www.gromacs.org/Documentation/Errors
> > ```
> >
> > we also had a different error from slurm system:
> > ```
> > ^Mstep 4400: timed with pme grid 96 96 60, coulomb cutoff 1.446: 467.9
> > M-cycles
> > ^Mstep 4600: timed with pme grid 96 96 64, coulomb cutoff 1.372: 451.4
> > M-cycles
> > /var/spool/slurmd/job2321134/slurm_script: line 44: 29866 Segmentation
> > fault  gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on
> -nb
> > gpu -ntomp 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS
> > -cpt 60 -cpi -noappend
> > ```
> >
> > We first thought this could due to compiler issue and tried different
> > settings as following:
> > ===test1===
> > 
> > module load cuda/9.2.88.1
> > module load gcc/7.3.0
> > . /home/rsexton2/Library/gromacs/2019.4/test1/bin/GMXRC
> > 
> > ===test2===
> > 
> > module load cuda/9.2.88.1
> > module load gcc/6x
> > . /home/rsexton2/Library/gromacs/2019.4/test2/bin/GMXRC
> > 
> > ===test3===
> > 
> > module load cuda/9.2.148
> > module load gcc/7.3.0
> > . /home/rsexton2/Library/gromacs/2019.4/test3/bin/GMXRC
> > 
> > ===test4===
> > 
> > module load cuda/9.2.148
> > module load gcc/6x
> > . /home/rsexton2/Library/gromacs/2019.4/test4/bin/GMXRC
> > 
> > ===test5===
> > 
> > module load cuda/9.1.85
> > module load 

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-02 Thread Mark Abraham
Hi,

What driver version is reported in the respective log files? Does the error
persist if mdrun -notunepme is used?
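(As a sketch, assuming the -deffnm used above and noting that the log layout
can differ between versions: the driver version is printed in the log header,
and the second check only needs one extra flag.)

```
# Sketch only: the mdrun log header reports the CUDA driver/runtime versions.
grep -i "cuda driver" md_seed_fixed*.log
# Same command as before, with PME tuning disabled.
gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu -ntomp 3 \
    -pme gpu -pmefft gpu -notunepme -npme 1 -gputasks 00112233 \
    -maxh $HOURS -cpt 60 -cpi -noappend
```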

Mark

On Mon., 2 Dec. 2019, 21:18 Chenou Zhang,  wrote:

> Hi Gromacs developers,
>
> I'm currently running gromacs 2019.4 on our university's HPC cluster. To
> fully utilize the GPU nodes, I followed notes on
>
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> .
>
>
> And here is the command I used for my runs.
> ```
> gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb gpu -ntomp
> 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS -cpt 60 -cpi
> -noappend
> ```
>
> And for some of those runs, they might fail with the following error:
> ```
> ---
>
> Program: gmx mdrun, version 2019.4
>
> Source file: src/gromacs/gpu_utils/cudautils.cuh (line 229)
>
> MPI rank:3 (out of 8)
>
>
>
> Fatal error:
>
> cudaStreamSynchronize failed: an illegal memory access was encountered
>
>
>
> For more information and tips for troubleshooting, please check the GROMACS
>
> website at http://www.gromacs.org/Documentation/Errors
> ```
>
> we also had a different error from slurm system:
> ```
> ^Mstep 4400: timed with pme grid 96 96 60, coulomb cutoff 1.446: 467.9
> M-cycles
> ^Mstep 4600: timed with pme grid 96 96 64, coulomb cutoff 1.372: 451.4
> M-cycles
> /var/spool/slurmd/job2321134/slurm_script: line 44: 29866 Segmentation
> fault  gmx mdrun -v -s $TPR -deffnm md_seed_fixed -ntmpi 8 -pin on -nb
> gpu -ntomp 3 -pme gpu -pmefft gpu -npme 1 -gputasks 00112233 -maxh $HOURS
> -cpt 60 -cpi -noappend
> ```
>
> We first thought this could due to compiler issue and tried different
> settings as following:
> ===test1===
> 
> module load cuda/9.2.88.1
> module load gcc/7.3.0
> . /home/rsexton2/Library/gromacs/2019.4/test1/bin/GMXRC
> 
> ===test2===
> 
> module load cuda/9.2.88.1
> module load gcc/6x
> . /home/rsexton2/Library/gromacs/2019.4/test2/bin/GMXRC
> 
> ===test3===
> 
> module load cuda/9.2.148
> module load gcc/7.3.0
> . /home/rsexton2/Library/gromacs/2019.4/test3/bin/GMXRC
> 
> ===test4===
> 
> module load cuda/9.2.148
> module load gcc/6x
> . /home/rsexton2/Library/gromacs/2019.4/test4/bin/GMXRC
> 
> ===test5===
> 
> module load cuda/9.1.85
> module load gcc/6x
> . /home/rsexton2/Library/gromacs/2019.4/test5/bin/GMXRC
> 
> ===test6===
> 
> module load cuda/9.0.176
> module load gcc/6x
> . /home/rsexton2/Library/gromacs/2019.4/test6/bin/GMXRC
> 
> ===test7===
> 
> module load cuda/9.2.88.1
> module load gccgpu/7.4.0
> . /home/rsexton2/Library/gromacs/2019.4/test7/bin/GMXRC
> 
>
> However we still ended up with the same errors showed above. Does anyone
> know where does the cudaStreamSynchronize come in? Or am I wrongly using
> those gmx gpu commands?
>
> Any input will be appreciated!
>
> Thanks!
> Chenou
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs - Difference between target temperature and pressure in mdp and simulation

2019-11-26 Thread Fabio Bologna
Dear Dr. Warren,
 thank you for your answer. If I check the pressure graph, I can see strong
oscillations (100-150 bar), but they are centered around 1 bar. I assumed that
after 69 ns I had sampled enough to reach an average value closer to 1 bar than
what I got (I used to work with Amber, and I remember that it reached the
target average pressure a bit faster). Thank you for your explanation.

Regarding the LINCS errors and the use of tau-t = 0.01, can anybody help me
understand, respectively, whether I have to change some LINCS parameters and
whether it is OK to use tau-t = 0.01 during production runs?

Thank you again for your time, have a nice day!

<<<<<<<<<<<<<<<<
Fabio Bologna
PhD student in Nanoscience for Medicine and the Environment
Department of Chemistry 'G. Ciamician'
University of Bologna
>>>>>>>>>>>>>>>

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Justin Lemkul 

Sent: Monday, November 25, 2019 11:03:03 PM
To: gmx-us...@gromacs.org 
Subject: Re: [gmx-users] Gromacs - Difference between target temperature and 
pressure in mdp and simulation



On 11/25/19 2:15 PM, Fabio Bologna wrote:
> Dear Gromacs user-base and experts,
>
>   I need your help, as a Gromacs rookie. I've been trying to simulate the
> folding of a 20-residue mini-protein inside a dodecahedral explicit solvent
> box (30542 atoms) using Gromacs 2016.1. Because I need to start from the
> linear structure, my box is quite big; I had to use the group cut-off scheme
> without explicit buffering instead of the default Verlet, in order to obtain
> acceptable performance. Also, I'm using the V-rescale thermostat (tau-t =
> 0.1) and the Berendsen barostat (tau-p = 1.0). As in the title, my simulations
> don't reach the intended target values of temperature and pressure. These are
> 300 K and 1 bar respectively.
>

Please be aware that the Berendsen thermostat and barostat produce
incorrect ensembles and are generally never used for production
simulations any more.

>
> Because I didn't notice these issues earlier, I let 6 production MD runs go
> for 69 ns. The average temperatures ranged from 302 to 303 K and the average
> pressures ranged from 0.86 to 1.1 bar. In my humble opinion, I don't think it
> is caused by a lack of equilibration, given the small size of the system and
> the long simulation time.
>
>
>
> I checked out the previous 1ns NVT and 1ns NPT equilibration phases (each 
> using the very same thermostat and barostat, when applicable) of those 6 
> simulations. The average temperatures among them ranged from 300 to 302 K and 
> average pressures from 0.018 bar to 1.4 bar. The temperature averages seem
> to have risen from equilibration to production MD (which completely puzzles 
> me) while at least the pressure got closer to the target 1 bar (will it reach 
> 1 bar later on, meaning that there are not enough points to make the mean 
> value close to the target?)
>

Your pressure values are as good as any I've seen; it's a pressure that
oscillates wildly (see Dallas' post for specifics on this). I've never
seen a simulation with an exact 1.0 bar average.

>
> I ran again the NVT and NPT equilibration phases, experimenting with 
> different values of tau-t and tau-p. The following results are the averages 
> outputted by gmx energy (more accurate than manually averaging the values in 
> the .xvg file you get from gmx energy itself, if I have understood correctly) 
> at the end of the NPT equilibration.
>
>
> Regarding temperature, I got:
>
>*   tau-t = 0.5 --> 310-315 K
>    *   tau-t = 0.1 --> 300-302 K
>*   tau-t = 0.01 --> basically only 300 K
>
> Shouldn’t tau-t just affect the amplitude of temperature oscillations? 
> Clearly tau-t = 0.01 is the right value for my system, but as far as I know 
> it is not used in production MDs and only in equilibration phases. Is it
> still a

tau-t affects the response time of the thermostat, and I would argue
that tau-t = 0.01 is *not* right for your system. The goal of the
thermostat is not to *fix* the temperature at a given value, but to
sample a correct, canonical kinetic energy distribution. If tau-t is too
tight, you're getting the "right" temperature but the wrong ensemble.

> safe value for production or does it create artifacts of any kind? Also, why
> do the most used values of 0.5 and 0.1 not give reasonable results? I've
> tested it with the Verlet scheme and the simulation reaches the correct target
> temperature in a total of 2 ns. Also, why did the average temperature rise
> during production from 300-302 to 302-303 K? The thermostat settings are the
> same.
>
>
>
> Regarding the pressure, I got:
>
>*   tau

Re: [gmx-users] Gromacs - Difference between target temperature and pressure in mdp and simulation

2019-11-25 Thread Justin Lemkul




On 11/25/19 2:15 PM, Fabio Bologna wrote:

Dear Gromacs user-base and experts,

  I need your help, as a Gromacs rookie. I've been trying to simulate the
folding of a 20-residue mini-protein inside a dodecahedral explicit solvent
box (30542 atoms) using Gromacs 2016.1. Because I need to start from the
linear structure, my box is quite big; I had to use the group cut-off scheme
without explicit buffering instead of the default Verlet, in order to obtain
acceptable performance. Also, I'm using the V-rescale thermostat (tau-t =
0.1) and the Berendsen barostat (tau-p = 1.0). As in the title, my simulations
don't reach the intended target values of temperature and pressure. These are
300 K and 1 bar respectively.



Please be aware that the Berendsen thermostat and barostat produce 
incorrect ensembles and are generally never used for production 
simulations any more.
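As a rough sketch (illustrative assumptions only, not a prescription for this
particular system), the combination that is commonly recommended instead looks
something like this in the .mdp file:

```
; Sketch only: group names, reference values and time constants are
; placeholders that must be adapted to the actual system.
tcoupl           = V-rescale
tc-grps          = Protein Non-Protein
tau-t            = 1.0     1.0
ref-t            = 300     300
pcoupl           = Parrinello-Rahman
pcoupltype       = isotropic
tau-p            = 2.0
ref-p            = 1.0
compressibility  = 4.5e-5
```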




Because I didn't notice these issues earlier, I let 6 production MD runs go for
69 ns. The average temperatures ranged from 302 to 303 K and the average
pressures ranged from 0.86 to 1.1 bar. In my humble opinion, I don't think it
is caused by a lack of equilibration, given the small size of the system and
the long simulation time.



I checked out the previous 1ns NVT and 1ns NPT equilibration phases (each using 
the very same thermostat and barostat, when applicable) of those 6 simulations. 
The average temperatures among them ranged from 300 to 302 K and average 
pressures from 0.018 bar to 1.4 bar. The temperature averages seem to have
risen from equilibration to production MD (which completely puzzles me) while 
at least the pressure got closer to the target 1 bar (will it reach 1 bar later 
on, meaning that there are not enough points to make the mean value close to 
the target?)



Your pressure values are as good as any I've seen; it's a pressure that 
oscillates wildly (see Dallas' post for specifics on this). I've never 
seen a simulation with an exact 1.0 bar average.
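One quick way to see both the average and the spread is gmx energy on the
equilibration energy file (file name here is a placeholder):

gmx energy -f npt.edr -o pressure.xvg

and selecting the Pressure term at the prompt; the summary printed to the
terminal lists Average, Err.Est., RMSD and Tot-Drift, and the RMSD column is
the one that shows how wide the fluctuations really are.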




I ran the NVT and NPT equilibration phases again, experimenting with different 
values of tau-t and tau-p. The following results are the averages outputted by 
gmx energy (more accurate than manually averaging the values in the .xvg file 
you get from gmx energy itself, if I have understood correctly) at the end of 
the NPT equilibration.


Regarding temperature, I got:

   *   tau-t = 0.5 --> 310-315 K
   *   tau-t = 0.1 --> 300-302 K
   *   tau-t = 0.01 --> basically only 300 K

Shouldn’t tau-t just affect the amplitude of temperature oscillations? Clearly 
tau-t = 0.01 is the right value for my system, but as far as I know it is not 
used in production MDs, only in equilibration phases. Is it still a


tau-t affects the response time of the thermostat, and I would argue 
that tau-t = 0.01 is *not* right for your system. The goal of the 
thermostat is not to *fix* the temperature at a given value, but to 
sample a correct, canonical kinetic energy distribution. If tau-t is too 
tight, you're getting the "right" temperature but the wrong ensemble.
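The same kind of check works for the temperature (file name again a
placeholder):

gmx energy -f npt.edr -o temperature.xvg

selecting the Temperature term and comparing the reported RMSD against the
expected canonical spread (roughly <T>*sqrt(2/N_df)) rather than looking at
the average alone.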



safe value for production or does it create artifacts of any kind? Also, why 
do the most commonly used values of 0.5 and 0.1 not give reasonable results? 
I've tested it with the Verlet scheme and the simulation reaches the correct 
target temperature in a total of 2 ns. Also, why did the average temperature 
rise during production from 300-302 to 302-303 K? The thermostat settings are 
the same.



Regarding the pressure, I got:

   *   tau-p = 1.0 --> 0.02-2 bar
   *   tau-p = 0.5 --> 0.6-3 bar

It seems tau-p doesn’t have any effect…



The magnitude of the fluctuation is just as (if not more) important than 
the average. If you get 0.6 ± 5, then is your average in any way 
distinguishable from 1.0? What about 0.6 ± 50?




Is there something I'm doing wrong or is this normal behaviour? Especially 
regarding the pressure control, given that tau-t seems to work fine.



Finally, on a different note, 1 of my 69 ns simulations crashed because of too 
many LINCS warnings. By checking the .err file, I found out that they were 
caused by a lysine and the N-terminus of the protein. In some frames the 
dihedrals of the protonated amine groups were a bit anomalous and the groups 
didn't have a tetrahedral conformation. However, the rest of the system was 
completely fine. Also, they weren't "misbehaving" at the same time. First the 
lysine had this issue but then the group returned to its correct conformation; 
then many ns later the N-terminus encountered the same problem and the total 
count of LINCS warnings reached 1000. The crashed simulation was completely 
identical in parameters and chemical entities to the other 5 simulations and I 
didn’t see any spikes in the various energies of the system. Am I right to 
assume that it is only a numerical rounding error on the hardware part?



In addition, I've attached the .mdp file I used in the production runs.


This mailing list does not accept attachments. If 

Re: [gmx-users] Gromacs - Difference between target temperature and pressure in mdp and simulation

2019-11-25 Thread Dallas Warren
Re pressure, did you check the graph of what the pressure is doing?

The variation between steps of the pressure can be huge, hundreds of bar
when the average is meant to be 1, so it is not surprising that the average
isn't exactly 1. See the link below for a couple of examples. Note that as the
system gets larger, the variation decreases.

https://twitter.com/dr_dbw/status/968624615063937025
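A rough order-of-magnitude estimate of the expected spread, treating the box as
roughly 300 nm^3 of water with isothermal compressibility kappa_T ≈ 4.5e-10 1/Pa
at 300 K:

    delta_P ≈ sqrt( kB*T / (V * kappa_T) )
            ≈ sqrt( 4.14e-21 J / (3.0e-25 m^3 * 4.5e-10 Pa^-1) )
            ≈ 5.5e6 Pa ≈ 55 bar

The instantaneous pressure mdrun reports fluctuates even more than this
thermodynamic estimate, so swings of tens to hundreds of bar around a 1 bar
average are entirely normal.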

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Tue, 26 Nov 2019 at 06:16, Fabio Bologna  wrote:

> Dear Gromacs user-base and experts,
>
>  I need your help, as a Gromacs rookie. I've been trying to simulate
> the folding of a 20-residue mini-protein inside a dodecahedral explicit
> solvent box (30542 atoms) using Gromacs 2016.1. Because I need to start
> from the linear structure, my box is quite big; I had to use the group
> cut-off scheme without explicit buffering instead of the default Verlet, in
> order to obtain acceptable performance. Also, I'm using the V-rescale
> thermostat (tau-t = 0.1) and Berendsen barostat (tau-p = 1.0). As in the
> title, my simulations don’t reach the intended target values of temperature
> and pressure. These are 300 K and 1 bar respectively.
>
>
>
> Because I didn't notice these issues earlier, I let 6 production MD runs go
> for 69 ns. The average temperatures ranged from 302 to 303 K and the
> average pressures ranged from 0.86 to 1.1 bar. In my humble opinion, I
> don’t think it is caused by a lack of equilibration, given the small size
> of the system and the long simulation time.
>
>
>
> I checked out the previous 1ns NVT and 1ns NPT equilibration phases (each
> using the very same thermostat and barostat, when applicable) of those 6
> simulations. The average temperatures among them ranged from 300 to 302 K
> and average pressures from 0.018 bar to 1.4 bar. The temperature averages
> seem to have risen from equilibration to production MD (which completely
> puzzles me) while at least the pressure got closer to the target 1 bar
> (will it reach 1 bar later on, meaning that there are not enough points to
> make the mean value close to the target?)
>
>
>
> I ran the NVT and NPT equilibration phases again, experimenting with
> different values of tau-t and tau-p. The following results are the averages
> outputted by gmx energy (more accurate than manually averaging the values
> in the .xvg file you get from gmx energy itself, if I have understood
> correctly) at the end of the NPT equilibration.
>
>
> Regarding temperature, I got:
>
>   *   tau-t = 0.5 --> 310-315 K
>   *   tau-t = 0.1 --> 300-302 K
>   *   tau-t = 0.01 --> basically only 300 K
>
> Shouldn’t tau-t just affect the amplitude of temperature oscillations?
> Clearly tau-t = 0.01 is the right value for my system, but as far as I know
> it is not used in production MDs, only in equilibration phases. Is it
> still a safe value for production or does it create artifacts of any kind?
> Also, why do the most commonly used values of 0.5 and 0.1 not give reasonable
> results? I've tested it with the Verlet scheme and the simulation reaches the
> correct target temperature in a total of 2 ns. Also, why did the average
> temperature rise during production from 300-302 to 302-303 K? The
> thermostat settings are the same.
>
>
>
> Regarding the pressure, I got:
>
>   *   tau-p = 1.0 --> 0.02-2 bar
>   *   tau-p = 0.5 --> 0.6-3 bar
>
> It seems tau-p doesn’t have any effect…
>
>
>
> Is there something I'm doing wrong or is this normal behaviour? Especially
> regarding the pressure control, given that tau-t seems to work fine.
>
>
>
> Finally, on a different note, 1 of my 69 ns simulations crashed because of
> too many LINCS warnings. By checking the .err file, I found out that they
> were caused by a lysine and the N-terminus of the protein. In some frames
> the dihedrals of the protonated amine groups were a bit anomalous and the
> groups didn’t have a tetrahedral conformation. However the rest of the
> system was completely fine. Also, they weren't "misbehaving" at the same
> time. First the lysine had this issue but then the group returned to its
> correct conformation; then many ns later the N-terminus encountered the
> same problem and the total count of LINCS warnings reached 1000. The crashed
> simulation was completely identical in parameters and chemical entities to
> the other 5 simulations and I didn’t see any spikes in the various energies
> of the system. Am I right to assume that it is only a numerical rounding
> error on the hardware part?
>
>
>
> In addition, I've attached the .mdp file I used in the production runs.
>
>
> I am sorry for this long e-mail, I hope you can help me figure out what’s
> wrong.
>
> Thank you in advance for your time and patience.
>

Re: [gmx-users] GROMACS showing error

2019-10-22 Thread Szilárd Páll
Hi,

Please direct GROMACS usage questions to the users' list. Replying there,
make sure you are subscribed and continue the conversation there.

The issue is that you requested static library detection, but the hwloc
library dependencies are not correctly added to the GROMACS link
dependencies. There are a few workarounds:
- avoid  -DGMX_PREFER_STATIC_LIBS=ON
- use dynamic libs for hwloc (e.g. passing -DHWLOC_hwloc_LIBRARY manually)
- if you prefer to stick to statically linked external libraries and the
above don't work out, you can turn off hwloc support (-DGMX_HWLOC=OFF); an
example command line is sketched below
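For illustration, the last workaround applied to a command line like the one in
the original message could look something like this (keep whatever other options
you already use; compiler names are placeholders):

cmake .. -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_OPENMP=ON -DGMX_GPU=ON \
      -DGMX_BUILD_OWN_FFTW=ON -DGMX_PREFER_STATIC_LIBS=ON -DGMX_HWLOC=OFF \
      -DCMAKE_BUILD_TYPE=Release -DGMX_MPI=ON \
      -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx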

Cheers,
--
Szilárd


On Fri, Oct 18, 2019 at 11:09 PM Shradheya R.R. Gupta <
shradheyagu...@gmail.com> wrote:

> Respected sir,
>
> While installing Gromacs 2019.4 with GPU+MPI support, I got an error at the
> MPI linking stage.
>
> *commands:-*
>
> mkdir build
>
> cd build
>
> cmake .. -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_OPENMP=ON -DGMX_GPU=ON
> -DGMX_BUILD_OWN_FFTW=ON -DGMX_PREFER_STATIC_LIBS=ON
> -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX= -DGMX_MPI=ON
> -DGMX_BUILD_UNITTESTS=ON -DCMAKE_C_COMPILER=MPICC
> -DCMAKE_CXX_COMPILER=mpicxx
>
> make (completed successfully)
>
> sudo make install
>
> After 98% completion it showed the error
>
> [image: IMG_20191017_191549.jpg]
>
>
> Sir, please suggest how I can resolve it; I am eagerly waiting for your reply.
> Thank you
>
> Shradheya R.R. Gupta
> Bioinformatics Infrastructure Facility- DBT - Government of India
>  University of Rajasthan,India
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Justin Lemkul




On 10/11/19 9:44 AM, Suprim Tha wrote:

Sorry to disturb you again. It would be of immense help if you could please
help me with this one.
During energy minimization with gmx mdrun -v -deffnm em; I did not get any
results. The command run earlier was


What do you mean "did not get any results?" The tutorial system should 
run cleanly and will not require modification of any input .mdp files. 
Inspect your .log file for errors or anything printed to the terminal.
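For example (assuming the default file names from the tutorial), something like

grep -iE "error|fatal|warning" em.log
tail -n 40 em.log

will show the most common failure messages near the end of the minimization log.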


-Justin


gmx grompp -f em.mdp -c solv_ions.gro -p topol.top -o em.tpr
and I used the em.mdp file provided there. Is there something wrong with the
em.mdp file I used?
; LINES STARTING WITH ';' ARE COMMENTS
title           = Minimization  ; Title of run

; Parameters describing what to do, when to stop and what to save
integrator      = steep         ; Algorithm (steep = steepest descent minimization)
emtol           = 1000.0        ; Stop minimization when the maximum force < 10.0 kJ/mol
emstep          = 0.01          ; Energy step size
nsteps          = 5             ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist         = 1             ; Frequency to update the neighbor list and long range forces
cutoff-scheme   = Verlet
ns_type         = grid          ; Method to determine neighbor list (simple, grid)
rlist           = 1.2           ; Cut-off for making neighbor list (short range forces)
coulombtype     = PME           ; Treatment of long range electrostatic interactions
rcoulomb        = 1.2           ; long range electrostatic cut-off
vdwtype         = cutoff
vdw-modifier    = force-switch
rvdw-switch     = 1.0
rvdw            = 1.2           ; long range Van der Waals cut-off
pbc             = xyz           ; Periodic Boundary Conditions
DispCorr        = no

On Fri, Oct 11, 2019 at 6:58 PM Suprim Tha 
wrote:


I had .pdb as extra. I finally got it right. Thanks a ton sir.

On Fri, Oct 11, 2019 at 6:55 PM Suprim Tha 
wrote:


The error is still the same. I changed the name to JZ4. Please see what I
am missing again.
* Toppar stream file generated by
* CHARMM General Force Field (CGenFF) program version 2.2.0
* For use with CGenFF version 4.0
*
read rtf card append
* Topologies generated by
* CHARMM General Force Field (CGenFF) program version 2.2.0
*
36 1
! "penalty" is the highest penalty score of the associated parameters.
! Penalties lower than 10 indicate the analogy is fair; penalties between
10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.
RESI JZ4.pdb0.000 ! param penalty=   0.900 ; charge penalty=
0.342
GROUP! CHARGE   CH_PENALTY
ATOM C4 CG331  -0.271 !0.285
ATOM C7 CG2R61 -0.108 !0.000
ATOM C8 CG2R61 -0.113 !0.000
ATOM C9 CG2R61 -0.109 !0.000
ATOM C10CG2R61  0.105 !0.190
ATOM C11CG2R61 -0.115 !0.000
ATOM C12CG2R61 -0.009 !0.232
ATOM C13CG321  -0.177 !0.342
ATOM C14CG321  -0.184 !0.045
ATOM OABOG311  -0.529 !0.190
ATOM H7 HGR61   0.115 !0.000
ATOM H8 HGR61   0.115 !0.000
ATOM H9 HGR61   0.115 !0.000
ATOM H11HGR61   0.115 !0.000
ATOM H132   HGA20.090 !0.000
ATOM H133   HGA20.090 !0.000
ATOM H142   HGA20.090 !0.000
ATOM H143   HGA20.090 !0.000
ATOM HABHGP10.420 !0.000
ATOM H41HGA30.090 !0.000
ATOM H42HGA30.090 !0.000
ATOM H43HGA30.090 !0.000
BOND C4   C14
BOND C4   H41
BOND C4   H42
BOND C4   H43
BOND C7   C8
BOND C7   C11
BOND C7   H7
BOND C8   C9
BOND C8   H8
BOND C9   C10
BOND C9   H9
BOND C10  C12
BOND C10  OAB
BOND C11  C12
BOND C11  H11
BOND C12  C13
BOND C13  C14
BOND C13  H132
BOND C13  H133
BOND C14  H142
BOND C14  H143
BOND OAB  HAB
END
read param card flex append
* Parameters generated by analogy by
* CHARMM General Force Field (CGenFF) program version 2.2.0
*
! Penalties lower than 10 indicate the analogy is fair; penalties between
10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.
BONDS
ANGLES
DIHEDRALS
CG321  CG2R61 CG2R61 OG311  2.4000  2   180.00 ! JZ4.pdb , from CG311
CG2R61 CG2R61 OG311, penalty= 0.6
CG2R61 CG321  CG321  CG331  0.0400  3 0.00 ! JZ4.pdb , from
CG2R61 CG321 CG321 CG321, penalty= 0.9
IMPROPERS
END
RETURN

@MOLECULE
JZ4.pdb
22 22 1 0 0
SMALL
NO_CHARGES

@ATOM
   1 C4 24.2940  -24.1240   -0.0710 C.3   1 JZ40.
   2 C7 21.5530  -27.2140   -4.1120 C.ar  1 JZ40.
   3 C8 22.0680  -26.7470   -5.3310 C.ar  1 JZ40.
   4 C9 22.6710  -25.5120   -5.4480 C.ar  1 JZ40.
   5 C1022.7690  -24.7300   -4.2950 C.ar  1 JZ40.
   6 C1121.6930  -26.4590   -2.9540 C.ar  1 JZ40.
   7 C1222.2940  -25.1870   -3.0750 C.ar  1 JZ40.
   8 C1322.4630  -24.4140   -1.8080 C.3   1 JZ40.
   9 C1423.9250  -24.7040   -1.3940 C.3   1 JZ4

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Suprim Tha
Sorry to disturb you again. It would be of immense help if you could please
help me with this one.
During energy minimization with gmx mdrun -v -deffnm em; I did not get any
results. The command run earlier was
gmx grompp -f em.mdp -c solv_ions.gro -p topol.top -o em.tpr
and I used the em.mdp file provided there. Is there something wrong with the
em.mdp file I used?
; LINES STARTING WITH ';' ARE COMMENTS
title           = Minimization  ; Title of run

; Parameters describing what to do, when to stop and what to save
integrator      = steep         ; Algorithm (steep = steepest descent minimization)
emtol           = 1000.0        ; Stop minimization when the maximum force < 10.0 kJ/mol
emstep          = 0.01          ; Energy step size
nsteps          = 5             ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist         = 1             ; Frequency to update the neighbor list and long range forces
cutoff-scheme   = Verlet
ns_type         = grid          ; Method to determine neighbor list (simple, grid)
rlist           = 1.2           ; Cut-off for making neighbor list (short range forces)
coulombtype     = PME           ; Treatment of long range electrostatic interactions
rcoulomb        = 1.2           ; long range electrostatic cut-off
vdwtype         = cutoff
vdw-modifier    = force-switch
rvdw-switch     = 1.0
rvdw            = 1.2           ; long range Van der Waals cut-off
pbc             = xyz           ; Periodic Boundary Conditions
DispCorr        = no

On Fri, Oct 11, 2019 at 6:58 PM Suprim Tha 
wrote:

> I had .pdb as extra. I finally got it right. Thanks a ton sir.
>
> On Fri, Oct 11, 2019 at 6:55 PM Suprim Tha 
> wrote:
>
>> The error is still the same. I changed the name to JZ4. Please see what I
>> am missing again.
>> * Toppar stream file generated by
>> * CHARMM General Force Field (CGenFF) program version 2.2.0
>> * For use with CGenFF version 4.0
>> *
>> read rtf card append
>> * Topologies generated by
>> * CHARMM General Force Field (CGenFF) program version 2.2.0
>> *
>> 36 1
>> ! "penalty" is the highest penalty score of the associated parameters.
>> ! Penalties lower than 10 indicate the analogy is fair; penalties between
>> 10
>> ! and 50 mean some basic validation is recommended; penalties higher than
>> ! 50 indicate poor analogy and mandate extensive validation/optimization.
>> RESI JZ4.pdb0.000 ! param penalty=   0.900 ; charge penalty=
>> 0.342
>> GROUP! CHARGE   CH_PENALTY
>> ATOM C4 CG331  -0.271 !0.285
>> ATOM C7 CG2R61 -0.108 !0.000
>> ATOM C8 CG2R61 -0.113 !0.000
>> ATOM C9 CG2R61 -0.109 !0.000
>> ATOM C10CG2R61  0.105 !0.190
>> ATOM C11CG2R61 -0.115 !0.000
>> ATOM C12CG2R61 -0.009 !0.232
>> ATOM C13CG321  -0.177 !0.342
>> ATOM C14CG321  -0.184 !0.045
>> ATOM OABOG311  -0.529 !0.190
>> ATOM H7 HGR61   0.115 !0.000
>> ATOM H8 HGR61   0.115 !0.000
>> ATOM H9 HGR61   0.115 !0.000
>> ATOM H11HGR61   0.115 !0.000
>> ATOM H132   HGA20.090 !0.000
>> ATOM H133   HGA20.090 !0.000
>> ATOM H142   HGA20.090 !0.000
>> ATOM H143   HGA20.090 !0.000
>> ATOM HABHGP10.420 !0.000
>> ATOM H41HGA30.090 !0.000
>> ATOM H42HGA30.090 !0.000
>> ATOM H43HGA30.090 !0.000
>> BOND C4   C14
>> BOND C4   H41
>> BOND C4   H42
>> BOND C4   H43
>> BOND C7   C8
>> BOND C7   C11
>> BOND C7   H7
>> BOND C8   C9
>> BOND C8   H8
>> BOND C9   C10
>> BOND C9   H9
>> BOND C10  C12
>> BOND C10  OAB
>> BOND C11  C12
>> BOND C11  H11
>> BOND C12  C13
>> BOND C13  C14
>> BOND C13  H132
>> BOND C13  H133
>> BOND C14  H142
>> BOND C14  H143
>> BOND OAB  HAB
>> END
>> read param card flex append
>> * Parameters generated by analogy by
>> * CHARMM General Force Field (CGenFF) program version 2.2.0
>> *
>> ! Penalties lower than 10 indicate the analogy is fair; penalties between
>> 10
>> ! and 50 mean some basic validation is recommended; penalties higher than
>> ! 50 indicate poor analogy and mandate extensive validation/optimization.
>> BONDS
>> ANGLES
>> DIHEDRALS
>> CG321  CG2R61 CG2R61 OG311  2.4000  2   180.00 ! JZ4.pdb , from CG311
>> CG2R61 CG2R61 OG311, penalty= 0.6
>> CG2R61 CG321  CG321  CG331  0.0400  3 0.00 ! JZ4.pdb , from
>> CG2R61 CG321 CG321 CG321, penalty= 0.9
>> IMPROPERS
>> END
>> RETURN
>>
>> @MOLECULE
>> JZ4.pdb
>> 22 22 1 0 0
>> SMALL
>> NO_CHARGES
>>
>> @ATOM
>>   1 C4 24.2940  -24.1240   -0.0710 C.3   1 JZ40.
>>   2 C7 21.5530  -27.2140   -4.1120 C.ar  1 JZ40.
>>   3 C8 22.0680  -26.7470   -5.3310 C.ar  1 JZ40.
>>   4 C9 22.6710  -25.5120   -5.4480 C.ar  1 JZ40.
>>   5 C1022.7690  -24.7300   -4.2950 C.ar  1 JZ40.
>>   6 C1121.6930  -26.4590   -2.9540 C.ar  1 JZ40.
>>   7 C1222.2940  -25.1870   -3.0750 C.ar  1 JZ40.
>>   8 C1322.4630  -24.4140   -1.8080 C.3   1 JZ40.
>>   9 C1423.9250  -24.7040   -1.3940 

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Suprim Tha
I had .pdb as extra. I finally got it right. Thanks a ton sir.

On Fri, Oct 11, 2019 at 6:55 PM Suprim Tha 
wrote:

> The error is still the same. I changed the name to JZ4. Please see what I
> am missing again.
> * Toppar stream file generated by
> * CHARMM General Force Field (CGenFF) program version 2.2.0
> * For use with CGenFF version 4.0
> *
> read rtf card append
> * Topologies generated by
> * CHARMM General Force Field (CGenFF) program version 2.2.0
> *
> 36 1
> ! "penalty" is the highest penalty score of the associated parameters.
> ! Penalties lower than 10 indicate the analogy is fair; penalties between
> 10
> ! and 50 mean some basic validation is recommended; penalties higher than
> ! 50 indicate poor analogy and mandate extensive validation/optimization.
> RESI JZ4.pdb0.000 ! param penalty=   0.900 ; charge penalty=
> 0.342
> GROUP! CHARGE   CH_PENALTY
> ATOM C4 CG331  -0.271 !0.285
> ATOM C7 CG2R61 -0.108 !0.000
> ATOM C8 CG2R61 -0.113 !0.000
> ATOM C9 CG2R61 -0.109 !0.000
> ATOM C10CG2R61  0.105 !0.190
> ATOM C11CG2R61 -0.115 !0.000
> ATOM C12CG2R61 -0.009 !0.232
> ATOM C13CG321  -0.177 !0.342
> ATOM C14CG321  -0.184 !0.045
> ATOM OABOG311  -0.529 !0.190
> ATOM H7 HGR61   0.115 !0.000
> ATOM H8 HGR61   0.115 !0.000
> ATOM H9 HGR61   0.115 !0.000
> ATOM H11HGR61   0.115 !0.000
> ATOM H132   HGA20.090 !0.000
> ATOM H133   HGA20.090 !0.000
> ATOM H142   HGA20.090 !0.000
> ATOM H143   HGA20.090 !0.000
> ATOM HABHGP10.420 !0.000
> ATOM H41HGA30.090 !0.000
> ATOM H42HGA30.090 !0.000
> ATOM H43HGA30.090 !0.000
> BOND C4   C14
> BOND C4   H41
> BOND C4   H42
> BOND C4   H43
> BOND C7   C8
> BOND C7   C11
> BOND C7   H7
> BOND C8   C9
> BOND C8   H8
> BOND C9   C10
> BOND C9   H9
> BOND C10  C12
> BOND C10  OAB
> BOND C11  C12
> BOND C11  H11
> BOND C12  C13
> BOND C13  C14
> BOND C13  H132
> BOND C13  H133
> BOND C14  H142
> BOND C14  H143
> BOND OAB  HAB
> END
> read param card flex append
> * Parameters generated by analogy by
> * CHARMM General Force Field (CGenFF) program version 2.2.0
> *
> ! Penalties lower than 10 indicate the analogy is fair; penalties between
> 10
> ! and 50 mean some basic validation is recommended; penalties higher than
> ! 50 indicate poor analogy and mandate extensive validation/optimization.
> BONDS
> ANGLES
> DIHEDRALS
> CG321  CG2R61 CG2R61 OG311  2.4000  2   180.00 ! JZ4.pdb , from CG311
> CG2R61 CG2R61 OG311, penalty= 0.6
> CG2R61 CG321  CG321  CG331  0.0400  3 0.00 ! JZ4.pdb , from CG2R61
> CG321 CG321 CG321, penalty= 0.9
> IMPROPERS
> END
> RETURN
>
> @MOLECULE
> JZ4.pdb
> 22 22 1 0 0
> SMALL
> NO_CHARGES
>
> @ATOM
>   1 C4 24.2940  -24.1240   -0.0710 C.3   1 JZ40.
>   2 C7 21.5530  -27.2140   -4.1120 C.ar  1 JZ40.
>   3 C8 22.0680  -26.7470   -5.3310 C.ar  1 JZ40.
>   4 C9 22.6710  -25.5120   -5.4480 C.ar  1 JZ40.
>   5 C1022.7690  -24.7300   -4.2950 C.ar  1 JZ40.
>   6 C1121.6930  -26.4590   -2.9540 C.ar  1 JZ40.
>   7 C1222.2940  -25.1870   -3.0750 C.ar  1 JZ40.
>   8 C1322.4630  -24.4140   -1.8080 C.3   1 JZ40.
>   9 C1423.9250  -24.7040   -1.3940 C.3   1 JZ40.
>  10 OAB23.4120  -23.5360   -4.3420 O.3   1 JZ40.
>  11 H7 21.0447  -28.1661   -4.0737 H 1 JZ40.
>  12 H8 21.9896  -27.3751   -6.2061 H 1 JZ40.
>  13 H9 23.0531  -25.1628   -6.3959 H 1 JZ40.
>  14 H1121.3551  -26.8308   -1.9980 H 1 JZ40.
>  15 H132   22.3119  -23.3483   -1.9799 H 1 JZ40.
>  16 H133   21.7732  -24.7775   -1.0463 H 1 JZ40.
>  17 H142   24.0626  -25.7843   -1.3476 H 1 JZ40.
>  18 H143   24.5923  -24.2974   -2.1540 H 1 JZ40.
>  19 HAB22.8219  -22.8428   -4.0372 H 1 JZ40.
>  20 H4125.3323  -24.36610.1556 H 1 JZ40.
>  21 H4224.1722  -23.0413   -0.1025 H 1 JZ40.
>  22 H4323.6470  -24.54010.7013 H 1 JZ40.
> @BOND
>  119 1
>  223 ar
>  326 ar
>  434 ar
>  545 ar
>  657 ar
>  75   10 1
>  867 ar
>  978 1
> 1089 1
> 11   112 1
> 12   123 1
> 13   134 1
> 14   146 1
> 15   158 1
> 16   168 1
> 17   179 1
> 18   189 1
> 19   19   10 1
> 20   201 1
> 21   211 1
> 22   221 1
> @SUBSTRUCTURE
>  1 JZ4 1 

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Suprim Tha
The error is still the same. I changed the name to JZ4. Please see what I
am missing again.
* Toppar stream file generated by
* CHARMM General Force Field (CGenFF) program version 2.2.0
* For use with CGenFF version 4.0
*
read rtf card append
* Topologies generated by
* CHARMM General Force Field (CGenFF) program version 2.2.0
*
36 1
! "penalty" is the highest penalty score of the associated parameters.
! Penalties lower than 10 indicate the analogy is fair; penalties between 10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.
RESI JZ4.pdb0.000 ! param penalty=   0.900 ; charge penalty=   0.342
GROUP! CHARGE   CH_PENALTY
ATOM C4 CG331  -0.271 !0.285
ATOM C7 CG2R61 -0.108 !0.000
ATOM C8 CG2R61 -0.113 !0.000
ATOM C9 CG2R61 -0.109 !0.000
ATOM C10CG2R61  0.105 !0.190
ATOM C11CG2R61 -0.115 !0.000
ATOM C12CG2R61 -0.009 !0.232
ATOM C13CG321  -0.177 !0.342
ATOM C14CG321  -0.184 !0.045
ATOM OABOG311  -0.529 !0.190
ATOM H7 HGR61   0.115 !0.000
ATOM H8 HGR61   0.115 !0.000
ATOM H9 HGR61   0.115 !0.000
ATOM H11HGR61   0.115 !0.000
ATOM H132   HGA20.090 !0.000
ATOM H133   HGA20.090 !0.000
ATOM H142   HGA20.090 !0.000
ATOM H143   HGA20.090 !0.000
ATOM HABHGP10.420 !0.000
ATOM H41HGA30.090 !0.000
ATOM H42HGA30.090 !0.000
ATOM H43HGA30.090 !0.000
BOND C4   C14
BOND C4   H41
BOND C4   H42
BOND C4   H43
BOND C7   C8
BOND C7   C11
BOND C7   H7
BOND C8   C9
BOND C8   H8
BOND C9   C10
BOND C9   H9
BOND C10  C12
BOND C10  OAB
BOND C11  C12
BOND C11  H11
BOND C12  C13
BOND C13  C14
BOND C13  H132
BOND C13  H133
BOND C14  H142
BOND C14  H143
BOND OAB  HAB
END
read param card flex append
* Parameters generated by analogy by
* CHARMM General Force Field (CGenFF) program version 2.2.0
*
! Penalties lower than 10 indicate the analogy is fair; penalties between 10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.
BONDS
ANGLES
DIHEDRALS
CG321  CG2R61 CG2R61 OG311  2.4000  2   180.00 ! JZ4.pdb , from CG311
CG2R61 CG2R61 OG311, penalty= 0.6
CG2R61 CG321  CG321  CG331  0.0400  3 0.00 ! JZ4.pdb , from CG2R61
CG321 CG321 CG321, penalty= 0.9
IMPROPERS
END
RETURN

@MOLECULE
JZ4.pdb
22 22 1 0 0
SMALL
NO_CHARGES

@ATOM
  1 C4 24.2940  -24.1240   -0.0710 C.3   1 JZ40.
  2 C7 21.5530  -27.2140   -4.1120 C.ar  1 JZ40.
  3 C8 22.0680  -26.7470   -5.3310 C.ar  1 JZ40.
  4 C9 22.6710  -25.5120   -5.4480 C.ar  1 JZ40.
  5 C1022.7690  -24.7300   -4.2950 C.ar  1 JZ40.
  6 C1121.6930  -26.4590   -2.9540 C.ar  1 JZ40.
  7 C1222.2940  -25.1870   -3.0750 C.ar  1 JZ40.
  8 C1322.4630  -24.4140   -1.8080 C.3   1 JZ40.
  9 C1423.9250  -24.7040   -1.3940 C.3   1 JZ40.
 10 OAB23.4120  -23.5360   -4.3420 O.3   1 JZ40.
 11 H7 21.0447  -28.1661   -4.0737 H 1 JZ40.
 12 H8 21.9896  -27.3751   -6.2061 H 1 JZ40.
 13 H9 23.0531  -25.1628   -6.3959 H 1 JZ40.
 14 H1121.3551  -26.8308   -1.9980 H 1 JZ40.
 15 H132   22.3119  -23.3483   -1.9799 H 1 JZ40.
 16 H133   21.7732  -24.7775   -1.0463 H 1 JZ40.
 17 H142   24.0626  -25.7843   -1.3476 H 1 JZ40.
 18 H143   24.5923  -24.2974   -2.1540 H 1 JZ40.
 19 HAB22.8219  -22.8428   -4.0372 H 1 JZ40.
 20 H4125.3323  -24.36610.1556 H 1 JZ40.
 21 H4224.1722  -23.0413   -0.1025 H 1 JZ40.
 22 H4323.6470  -24.54010.7013 H 1 JZ40.
@BOND
 119 1
 223 ar
 326 ar
 434 ar
 545 ar
 657 ar
 75   10 1
 867 ar
 978 1
1089 1
11   112 1
12   123 1
13   134 1
14   146 1
15   158 1
16   168 1
17   179 1
18   189 1
19   19   10 1
20   201 1
21   211 1
22   221 1
@SUBSTRUCTURE
 1 JZ4 1 RESIDUE   4 A JZ4 0 ROOT


On Fri, Oct 11, 2019 at 6:47 PM Justin Lemkul  wrote:

>
>
> On 10/11/19 8:59 AM, Suprim Tha wrote:
> > I could not find out difference in residue names.
> > The .str file is as below
> > RESI jz4.pdb0.000 ! param penalty=   0.900 ; charge penalty=
>  0.342
>
> The residue name is JZ4, not jz4.pdb - change this and the script will
> work.

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Justin Lemkul




On 10/11/19 8:59 AM, Suprim Tha wrote:

I could not find any difference in the residue names.
The .str file is as below
RESI jz4.pdb0.000 ! param penalty=   0.900 ; charge penalty=   0.342


The residue name is JZ4, not jz4.pdb - change this and the script will work.
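For instance, with GNU sed the rename can be done in both files at once (file
names as used in the tutorial; adjust if yours differ):

sed -i 's/jz4\.pdb/JZ4/g' jz4.str jz4.mol2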

-Justin


GROUP! CHARGE   CH_PENALTY
ATOM C4 CG331  -0.271 !0.285
ATOM C7 CG2R61 -0.108 !0.000
ATOM C8 CG2R61 -0.113 !0.000
ATOM C9 CG2R61 -0.109 !0.000
ATOM C10CG2R61  0.105 !0.190
ATOM C11CG2R61 -0.115 !0.000
ATOM C12CG2R61 -0.009 !0.232
ATOM C13CG321  -0.177 !0.342
ATOM C14CG321  -0.184 !0.045
ATOM OABOG311  -0.529 !0.190
ATOM H7 HGR61   0.115 !0.000
ATOM H8 HGR61   0.115 !0.000
ATOM H9 HGR61   0.115 !0.000
ATOM H11HGR61   0.115 !0.000
ATOM H132   HGA20.090 !0.000
ATOM H133   HGA20.090 !0.000
ATOM H142   HGA20.090 !0.000
ATOM H143   HGA20.090 !0.000
ATOM HABHGP10.420 !0.000
ATOM H41HGA30.090 !0.000
ATOM H42HGA30.090 !0.000
ATOM H43HGA30.090 !0.000
BOND C4   C14
BOND C4   H41
BOND C4   H42
BOND C4   H43
BOND C7   C8
BOND C7   C11
BOND C7   H7
BOND C8   C9
BOND C8   H8
BOND C9   C10
BOND C9   H9
BOND C10  C12
BOND C10  OAB
BOND C11  C12
BOND C11  H11
BOND C12  C13
BOND C13  C14
BOND C13  H132
BOND C13  H133
BOND C14  H142
BOND C14  H143
BOND OAB  HAB
END
read param card flex append
* Parameters generated by analogy by
* CHARMM General Force Field (CGenFF) program version 2.2.0
*
! Penalties lower than 10 indicate the analogy is fair; penalties between 10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.
BONDS
ANGLES
DIHEDRALS
CG321  CG2R61 CG2R61 OG311  2.4000  2   180.00 ! jz4.pdb , from CG311
CG2R61 CG2R61 OG311, penalty= 0.6
CG2R61 CG321  CG321  CG331  0.0400  3 0.00 ! jz4.pdb , from CG2R61
CG321 CG321 CG321, penalty= 0.9
IMPROPERS
END
RETURN
The .mol2 file is as below
@MOLECULE
jz4.pdb
22 22 1 0 0
SMALL
NO_CHARGES

@ATOM
   1 C4 24.2940  -24.1240   -0.0710 C.3   1 JZ40.
   2 C7 21.5530  -27.2140   -4.1120 C.ar  1 JZ40.
   3 C8 22.0680  -26.7470   -5.3310 C.ar  1 JZ40.
   4 C9 22.6710  -25.5120   -5.4480 C.ar  1 JZ40.
   5 C1022.7690  -24.7300   -4.2950 C.ar  1 JZ40.
   6 C1121.6930  -26.4590   -2.9540 C.ar  1 JZ40.
   7 C1222.2940  -25.1870   -3.0750 C.ar  1 JZ40.
   8 C1322.4630  -24.4140   -1.8080 C.3   1 JZ40.
   9 C1423.9250  -24.7040   -1.3940 C.3   1 JZ40.
  10 OAB23.4120  -23.5360   -4.3420 O.3   1 JZ40.
  11 H7 21.0447  -28.1661   -4.0737 H 1 JZ40.
  12 H8 21.9896  -27.3751   -6.2061 H 1 JZ40.
  13 H9 23.0531  -25.1628   -6.3959 H 1 JZ40.
  14 H1121.3551  -26.8308   -1.9980 H 1 JZ40.
  15 H132   22.3119  -23.3483   -1.9799 H 1 JZ40.
  16 H133   21.7732  -24.7775   -1.0463 H 1 JZ40.
  17 H142   24.0626  -25.7843   -1.3476 H 1 JZ40.
  18 H143   24.5923  -24.2974   -2.1540 H 1 JZ40.
  19 HAB22.8219  -22.8428   -4.0372 H 1 JZ40.
  20 H4125.3323  -24.36610.1556 H 1 JZ40.
  21 H4224.1722  -23.0413   -0.1025 H 1 JZ40.
  22 H4323.6470  -24.54010.7013 H 1 JZ40.
@BOND
  119 1
  223 ar
  326 ar
  434 ar
  545 ar
  657 ar
  75   10 1
  867 ar
  978 1
 1089 1
 11   112 1
 12   123 1
 13   134 1
 14   146 1
 15   158 1
 16   168 1
 17   179 1
 18   189 1
 19   19   10 1
 20   201 1
 21   211 1
 22   221 1
@SUBSTRUCTURE
  1 JZ4 1 RESIDUE   4 A JZ4 0 ROOT


On Fri, Oct 11, 2019 at 6:38 PM Nidhi singh  wrote:


The script is not recognising your .str file. The names must be different in
your mol2 and str files. You just need to rectify that.



On Fri, 11 Oct 2019 at 8:51 PM, Suprim Tha 
wrote:


I was following the GROMACS tutorial on molecular dynamics simulation of a
protein-ligand complex. Everything was going well until the step to
convert the CHARMM
jz4.str file into GROMACS files using the command
python cgenff_charmm2gmx_py2.py JZ4 jz4.mol2 jz4.str charmm36-mar2019.ff
The error was:
Error in atomgroup.py: read_mol2_coor_only: no. of atoms in mol2 (22) and
top (0) are unequal
Usually this means the specified residue name does not match between str

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Suprim Tha
I could not find any difference in the residue names.
The .str file is as below
RESI jz4.pdb0.000 ! param penalty=   0.900 ; charge penalty=   0.342
GROUP! CHARGE   CH_PENALTY
ATOM C4 CG331  -0.271 !0.285
ATOM C7 CG2R61 -0.108 !0.000
ATOM C8 CG2R61 -0.113 !0.000
ATOM C9 CG2R61 -0.109 !0.000
ATOM C10CG2R61  0.105 !0.190
ATOM C11CG2R61 -0.115 !0.000
ATOM C12CG2R61 -0.009 !0.232
ATOM C13CG321  -0.177 !0.342
ATOM C14CG321  -0.184 !0.045
ATOM OABOG311  -0.529 !0.190
ATOM H7 HGR61   0.115 !0.000
ATOM H8 HGR61   0.115 !0.000
ATOM H9 HGR61   0.115 !0.000
ATOM H11HGR61   0.115 !0.000
ATOM H132   HGA20.090 !0.000
ATOM H133   HGA20.090 !0.000
ATOM H142   HGA20.090 !0.000
ATOM H143   HGA20.090 !0.000
ATOM HABHGP10.420 !0.000
ATOM H41HGA30.090 !0.000
ATOM H42HGA30.090 !0.000
ATOM H43HGA30.090 !0.000
BOND C4   C14
BOND C4   H41
BOND C4   H42
BOND C4   H43
BOND C7   C8
BOND C7   C11
BOND C7   H7
BOND C8   C9
BOND C8   H8
BOND C9   C10
BOND C9   H9
BOND C10  C12
BOND C10  OAB
BOND C11  C12
BOND C11  H11
BOND C12  C13
BOND C13  C14
BOND C13  H132
BOND C13  H133
BOND C14  H142
BOND C14  H143
BOND OAB  HAB
END
read param card flex append
* Parameters generated by analogy by
* CHARMM General Force Field (CGenFF) program version 2.2.0
*
! Penalties lower than 10 indicate the analogy is fair; penalties between 10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.
BONDS
ANGLES
DIHEDRALS
CG321  CG2R61 CG2R61 OG311  2.4000  2   180.00 ! jz4.pdb , from CG311
CG2R61 CG2R61 OG311, penalty= 0.6
CG2R61 CG321  CG321  CG331  0.0400  3 0.00 ! jz4.pdb , from CG2R61
CG321 CG321 CG321, penalty= 0.9
IMPROPERS
END
RETURN
The .mol2 file is as below
@MOLECULE
jz4.pdb
22 22 1 0 0
SMALL
NO_CHARGES

@ATOM
  1 C4 24.2940  -24.1240   -0.0710 C.3   1 JZ40.
  2 C7 21.5530  -27.2140   -4.1120 C.ar  1 JZ40.
  3 C8 22.0680  -26.7470   -5.3310 C.ar  1 JZ40.
  4 C9 22.6710  -25.5120   -5.4480 C.ar  1 JZ40.
  5 C1022.7690  -24.7300   -4.2950 C.ar  1 JZ40.
  6 C1121.6930  -26.4590   -2.9540 C.ar  1 JZ40.
  7 C1222.2940  -25.1870   -3.0750 C.ar  1 JZ40.
  8 C1322.4630  -24.4140   -1.8080 C.3   1 JZ40.
  9 C1423.9250  -24.7040   -1.3940 C.3   1 JZ40.
 10 OAB23.4120  -23.5360   -4.3420 O.3   1 JZ40.
 11 H7 21.0447  -28.1661   -4.0737 H 1 JZ40.
 12 H8 21.9896  -27.3751   -6.2061 H 1 JZ40.
 13 H9 23.0531  -25.1628   -6.3959 H 1 JZ40.
 14 H1121.3551  -26.8308   -1.9980 H 1 JZ40.
 15 H132   22.3119  -23.3483   -1.9799 H 1 JZ40.
 16 H133   21.7732  -24.7775   -1.0463 H 1 JZ40.
 17 H142   24.0626  -25.7843   -1.3476 H 1 JZ40.
 18 H143   24.5923  -24.2974   -2.1540 H 1 JZ40.
 19 HAB22.8219  -22.8428   -4.0372 H 1 JZ40.
 20 H4125.3323  -24.36610.1556 H 1 JZ40.
 21 H4224.1722  -23.0413   -0.1025 H 1 JZ40.
 22 H4323.6470  -24.54010.7013 H 1 JZ40.
@BOND
 119 1
 223 ar
 326 ar
 434 ar
 545 ar
 657 ar
 75   10 1
 867 ar
 978 1
1089 1
11   112 1
12   123 1
13   134 1
14   146 1
15   158 1
16   168 1
17   179 1
18   189 1
19   19   10 1
20   201 1
21   211 1
22   221 1
@SUBSTRUCTURE
 1 JZ4 1 RESIDUE   4 A JZ4 0 ROOT


On Fri, Oct 11, 2019 at 6:38 PM Nidhi singh  wrote:

> The script is not recognising your .str file. The names must be different in
> your mol2 and str files. You just need to rectify that.
>
>
>
> On Fri, 11 Oct 2019 at 8:51 PM, Suprim Tha 
> wrote:
>
> > I was following the GROMACS tutorial on molecular dynamics simulation of a
> > protein-ligand complex. Everything was going well until the step to
> > convert the CHARMM
> > jz4.str file into GROMACS files using the command
> > python cgenff_charmm2gmx_py2.py JZ4 jz4.mol2 jz4.str charmm36-mar2019.ff
> > The error was:
> > Error in atomgroup.py: read_mol2_coor_only: no. of atoms in mol2 (22) and
> > top (0) are unequal
> > Usually this means the specified residue name does not match between str
> > and mol2 files.
> > I have attached the generated str and mol2 files.
> > Please help me find the error.
> > --
> > Gromacs Users 

Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Nidhi singh
The script is not recognising your .str file. The names must be different in
your mol2 and str files. You just need to rectify that.



On Fri, 11 Oct 2019 at 8:51 PM, Suprim Tha 
wrote:

> I was following the GROMACS tutorial on molecular dynamics simulation of a
> protein-ligand complex. Everything was going well until the step to
> convert the CHARMM
> jz4.str file into GROMACS files using the command
> python cgenff_charmm2gmx_py2.py JZ4 jz4.mol2 jz4.str charmm36-mar2019.ff
> The error was:
> Error in atomgroup.py: read_mol2_coor_only: no. of atoms in mol2 (22) and
> top (0) are unequal
> Usually this means the specified residue name does not match between str
> and mol2 files.
> I have attached the generated str and mol2 files.
> Please help me find the error.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

-- 
Dr. Nidhi
PhD
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs tutorial

2019-10-11 Thread Justin Lemkul




On 10/11/19 8:49 AM, Suprim Tha wrote:

I was following the GROMACS tutorial on molecular dynamics simulation of a
protein-ligand complex. Everything was going well until the step to
convert the CHARMM
jz4.str file into GROMACS files using the command
python cgenff_charmm2gmx_py2.py JZ4 jz4.mol2 jz4.str charmm36-mar2019.ff
The error was:
Error in atomgroup.py: read_mol2_coor_only: no. of atoms in mol2 (22) and
top (0) are unequal
Usually this means the specified residue name does not match between str
and mol2 files.
I have attached the generated str and mol2 files.


The mailing list does not accept attachments. Make sure the residue name 
in the .str file matches the .mol2 file.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS Energy groups

2019-10-04 Thread Justin Lemkul




On 10/3/19 10:27 PM, Adip Jhaveri wrote:

Hello all,
I am simulating a system of two proteins in solution. For the simulation I
have specified (in the .mdp file) energygrps as: Protein W_ION.

Now in the output energy file, these energy groups are separated only for
the Coulombic and LJ interactions, e.g. (Coulombic: Protein-Protein),
(Coulombic: Protein - W_ION). Is it possible to get a similar
decomposition for the bonded energy terms? (Like Bond: Protein, G96Angle:
Protein)


energygrps decompose short-range nonbonded terms, so no. The only way to 
get such terms is to make a .tpr containing only the species of interest 
and a corresponding .xtc/.trr and use mdrun -rerun to get the energies.
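A sketch of that workflow (index group and file names are placeholders, e.g. a
Protein_A group defined in index.ndx):

gmx convert-tpr -s topol.tpr -n index.ndx -o protein_a.tpr
gmx trjconv -s topol.tpr -f traj.xtc -n index.ndx -o protein_a.xtc
gmx mdrun -s protein_a.tpr -rerun protein_a.xtc -e protein_a.edr
gmx energy -f protein_a.edr

selecting the same group at the interactive prompts of the first two commands.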



Also, is it possible during post-processing to get energies for a different
group like for each individual protein (e.g Protein_A) or would I have to
run the simulation again with different energy groups?


Use mdrun -rerun as stated above.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS Domain decomposition

2019-09-24 Thread John Whittaker
> Dear gmx-users,
>
> Would you mind giving me some instruction on this.
>
> I am learning GROMACS tutorial #1 of mdtutorials.com. However, at the 5th
> step: EM,  http://www.mdtutorials.com/gmx/lysozyme/05_EM.html
> I got the error:
> "Domain decomposition does not support simple neighbor searching, use grid
> searching or run with one MPI rank"

Did you use the .mdp file in the tutorial
(http://www.mdtutorials.com/gmx/lysozyme/Files/minim.mdp)? You can see
that the "ns-type" (neighbor search type) parameter is set to "grid" in
that file. Whatever you had in your .mdp file set the ns-type to "simple",
which does not support domain decomposition and thus cannot be run on more
than one thread.
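For reference, the relevant neighbour-searching lines in the tutorial's
minim.mdp look something like this (values as in the tutorial file):

cutoff-scheme   = Verlet
ns_type         = grid      ; Method to determine neighbor list (simple, grid)
nstlist         = 1         ; Frequency to update the neighbor list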

Best,

John

>
> Running on 1 thread of PC by changing from
> gmx mdrun -v -deffnm em
> to
> gmx mdrun -nt 1 -s -v -deffnm em
> gives no problem, but it's too slow (50k steps, 2 seconds/step)
>
> Thank you very much for your kind helps,
> Best regards,
>
> The full log is as follows
> -
> GROMACS:  gmx mdrun, VERSION 5.1.1
> Executable:   /usr/local/gromacs/bin/gmx
> Data prefix:  /usr/local/gromacs
> Command line:
>   gmx mdrun -s -deffnm em
>
> Back Off! I just backed up em.log to ./#em.log.33#
>
> Running on 1 node with total 12 cores, 24 logical cores
> Hardware detected:
>   CPU info:
> Vendor: GenuineIntel
> Brand:  Intel(R) Xeon(R) CPU   X5650  @ 2.67GHz
> SIMD instructions most likely to fit this hardware: SSE4.1
> SIMD instructions selected at GROMACS compile time: SSE4.1
>
> Reading file em.tpr, VERSION 5.1.1 (single precision)
> ---
> Program gmx mdrun, VERSION 5.1.1
> Source code file:
> /home/nguyenduyvy/Downloads/gromacs-5.1.1/src/gromacs/domdec/domdec.cpp,
> line: 6542
>
> Fatal error:
> Domain decomposition does not support simple neighbor searching, use grid
> searching or run with one MPI rank
> For more information and tips for troubleshooting, please check the
> GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
> -
>
> Yours sincerely,
> Nguyen Duy Vy
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
> a mail to gmx-users-requ...@gromacs.org.
>


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-14 Thread yujie Liu
It is great!

But I think it is more effective to enable CUDA (GPU) support.
https://www.dropbox.com/s/jtk5p7bz0ppgcbf/windows_gromacs2019.3%2Bfftw%2BintelC%2B%2B%2Bcuda10.rar?dl=0
Here is a CUDA version of GROMACS 2019.3 built using VS 2017 and Intel C++,
which supports AVX2_256 and is more effective because it can use your GPU.
More compile information can be found on my own website:
https://liuyujie714.com/15.html



YuJie Liu
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-11 Thread Tatsuro MATSUOKA
>>  As in message in the cmake configure process, gcc on 64 bit windows is 
> buggy
>>  for AVX simd instruction.
> 
> Correct. But that is a GCC bug 
> (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54412 ) 
> With ICC, MSVC, and Clang the problem doesn't exist.
I also noticed that.
I tried building with Clang on Cygwin. The build itself passed but the
executable crashed.
But that was my first time using Clang and something might be wrong with my
settings.

>>  2. With AVX I could  Gromacs 2018 or later but binary are broken.
>>  With SSE simd instruction, Gromacs works all versions Usable binary with 
> AVX
>>  can be build on Gromacs 2016 or before.
> Are you referring to AVX with GCC on Win64? Or do you have issues with AVX 
> with 
> any other compiler or OS?

Yes, AVX with GCC on Win64 (Cygwin 64).
With 32-bit GCC on Cygwin with AVX, build and execution have no problems.

As you said, with MSVC, Gromacs with AVX on Win64 works.
BTW, at least as of VC 2017, MSVC supports AVX2 and later, but Gromacs 2019.3
does not support AVX2 or later.


I tried to attach the below

--- a/cmake/gmxSimdFlags.cmake    2019-05-29 16:16:15.0 +0900
+++ b/cmake/gmxSimdFlags.cmake    2019-09-10 15:18:47.782352000 +0900
@@ -245,7 +245,7 @@
 int main(){__m256i 
x=_mm256_set1_epi32(5);x=_mm256_add_epi32(x,x);return _mm256_movemask_epi8(x);}"
 TOOLCHAIN_C_FLAGS TOOLCHAIN_CXX_FLAGS
 SIMD_AVX2_C_FLAGS SIMD_AVX2_CXX_FLAGS
-    "-march=core-avx2" "-mavx2" "/arch:AVX" "-hgnu") # no AVX2-specific 
flag for MSVC yet
+    "-march=core-avx2" "-mavx2" "/arch:AVX2" "-hgnu") # no AVX2-specific 
flag for MSVC yet
 
 if(${SIMD_AVX2_C_FLAGS_RESULT})
 set(${C_FLAGS_VARIABLE} "${TOOLCHAIN_C_FLAGS} ${SIMD_AVX2_C_FLAGS}" 
CACHE INTERNAL "C flags required for AVX2 instructions")

The patch enables AVX2 detection, but the build system compiles the AVX 256
sources and not the AVX2 256 ones.
Further modification would be required.


Tatsuro  



- Original Message -
> From: "Schulz, Roland" 
> To: "gmx-us...@gromacs.org" ; Tatsuro MATSUOKA 
> ; Szilárd Páll 
> Cc: "gromacs.org_gmx-users@maillist.sys.kth.se" 
> 
> Date: 2019/9/12, Thu 04:08
> Subject: RE: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> 
> 
>>  -Original Message-
>>  From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
>>  [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
>>  Tatsuro MATSUOKA
>>  Sent: Tuesday, September 10, 2019 8:26 PM
>>  To: gmx-us...@gromacs.org; Szilárd Páll ;
>>  tmaccha...@yahoo.co.jp
>>  Cc: gromacs.org_gmx-users@maillist.sys.kth.se
>>  Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
>> 
>>  Sorry I wrote the previous mail without revise.
>> 
>> 
>>  Thank you for your comments.
>>  As in message in the cmake configure process, gcc on 64 bit windows is 
> buggy
>>  for AVX simd instruction.
> 
> Correct. But that is a GCC bug 
> (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54412 ) 
> With ICC, MSVC, and Clang the problem doesn't exist.
> 
>>  1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used FFTW on Cygwin
>>  repo.
> Correct. This option isn't intended for anything other than Linux.
> 
>>  2. With AVX I could  Gromacs 2018 or later but binary are broken.
>>  With SSE simd instruction, Gromacs works all versions Usable binary with 
> AVX
>>  can be build on Gromacs 2016 or before.
> Are you referring to AVX with GCC on Win64? Or do you have issues with AVX 
> with 
> any other compiler or OS?
> 
>>  Please also see Notes for Cygwin build
>>  (http://tmacchant3.starfree.jp/gromacs/win/notes_cygwin.html ) on my
> 
> Roland
> 
>>  web site.
>> 
>>  Now I added MSVC binary (http://tmacchant3.starfree.jp/gromacs/win ) and
>>  Notes for MSVC build
>>  (http://tmacchant3.starfree.jp/gromacs/win/notes_MSVC.html ).
>> 
>>  Tatsuro
>> 
>> 
>> 
>>  - Original Message -
>>  > From: Tatsuro MATSUOKA 
>>  > To: Szilárd Páll ; Discussion list for 
> GROMACS
>>  > users 
>>  > Cc: "gromacs.org_gmx-users@maillist.sys.kth.se"
>>  > 
>>  > Date: 2019/9/11, Wed 11:54
>>  > Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
>>  >
>>  >T hank you for your comments.
>>  > As in cmake configure process, gcc on 64 bit windows is buggy for AVX
>>  >simd  instruction.
>>  >
>>  >
>>  > 1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used FFTW on Cygwin
>

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-11 Thread Schulz, Roland


> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
> Tatsuro MATSUOKA
> Sent: Tuesday, September 10, 2019 8:26 PM
> To: gmx-us...@gromacs.org; Szilárd Páll ;
> tmaccha...@yahoo.co.jp
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> Sorry I wrote the previous mail without revise.
> 
> 
> Thank you for your comments.
> As in message in the cmake configure process, gcc on 64 bit windows is buggy
> for AVX simd instruction.

Correct. But that is a GCC bug 
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54412) 
With ICC, MSVC, and Clang the problem doesn't exist.
 
> 1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used FFTW on Cygwin
> repo.
Correct. This option isn't intended for anything other than Linux.

> 2. With AVX I could  Gromacs 2018 or later but binary are broken.
> With SSE simd instruction, Gromacs works all versions Usable binary with AVX
> can be build on Gromacs 2016 or before.
Are you referring to AVX with GCC on Win64? Or do you have issues with AVX with 
any other compiler or OS?

> Please also see Notes for Cygwin build
> (http://tmacchant3.starfree.jp/gromacs/win/notes_cygwin.html) on my

Roland

> web site.
> 
> Now I added MSVC binary (http://tmacchant3.starfree.jp/gromacs/win) and
> Notes for MSVC build
> (http://tmacchant3.starfree.jp/gromacs/win/notes_MSVC.html).
> 
> Tatsuro
> 
> 
> 
> - Original Message -
> > From: Tatsuro MATSUOKA 
> > To: Szilárd Páll ; Discussion list for GROMACS
> > users 
> > Cc: "gromacs.org_gmx-users@maillist.sys.kth.se"
> > 
> > Date: 2019/9/11, Wed 11:54
> > Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> >
> Thank you for your comments.
> > As in cmake configure process, gcc on 64 bit windows is buggy for AVX
> >simd  instruction.
> >
> >
> > 1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used FFTW on Cygwin
> repo.
> > 2. With AVX I could  Gromacs 2018 or later but binary are broken before.
> > Usable binary can be Gromacs 2016 or before.
> >
> > Please also SEE Notes on my web site.
> >
> > Now I adde MSVC binary.
> >
> >
> > Tatsuro
> >
> >
> >
> >
> > - Original Message -
> >>  From: Szilárd Páll 
> >>  To: Discussion list for GROMACS users ;
> > Tatsuro MATSUOKA 
> >>  Cc: "gromacs.org_gmx-users@maillist.sys.kth.se"
> > 
> >>  Date: 2019/9/10, Tue 18:21
> >>  Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> >>
> >>  Dear Tatsuro,
> >>
> >>  Thanks for the contributions!
> >>
> >>  Do the builds work out cleanly on cygwin? Are there any additional
> >> instructions we should consider including in our installation guide?
> >>
> >>  Cheers,
> >>  --
> >>  Szilárd
> >>
> >>  On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA
> > 
> >>  wrote:
> >>>
> >>>   I have prepared gromacs binaries for windows (Cygwin 64) on my own
> >>> web
> >
> >>  site.
> >>>   (For testing purpose.)
> >>>
> >>>   http://tmacchant3.starfree.jp/gromacs/win/
> >>>
> >>>   Tatsuro
> >>>
> >>>   --
> >>>   Gromacs Users mailing list
> >>>
> >>>   * Please search the archive at
> >>  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >>>
> >>>   * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>>
> >>>   * For (un)subscribe requests visit
> >>>   https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> >>> or
> > send
> >>  a mail to gmx-users-requ...@gromacs.org.
> >>
> >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> 
> --
> Gromacs Users mailing list
> 
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
> a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Tatsuro MATSUOKA
Sorry, I wrote the previous mail without revising it.


Thank you for your comments.
As the message in the cmake configure process says, gcc on 64-bit Windows is 
buggy for the AVX SIMD instructions.
 
1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used the FFTW from the Cygwin repo.
2. With AVX I could build Gromacs 2018 or later, but the binaries are broken.
With the SSE SIMD instructions, Gromacs works for all versions.
Usable binaries with AVX can be built for Gromacs 2016 or before.

Please also see Notes for Cygwin build 
(http://tmacchant3.starfree.jp/gromacs/win/notes_cygwin.html) on my web site.
 
Now I added MSVC binary (http://tmacchant3.starfree.jp/gromacs/win) and
Notes for MSVC build 
(http://tmacchant3.starfree.jp/gromacs/win/notes_MSVC.html).

Tatsuro    



- Original Message -
> From: Tatsuro MATSUOKA 
> To: Szilárd Páll ; Discussion list for GROMACS users 
> 
> Cc: "gromacs.org_gmx-users@maillist.sys.kth.se" 
> 
> Date: 2019/9/11, Wed 11:54
> Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> Thank you for your comments.
> As in cmake configure process, gcc on 64 bit windows is buggy for AVX simd 
> instruction.
> 
> 
> 1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used FFTW on Cygwin repo.
> 2. With AVX I could  Gromacs 2018 or later but binary are broken before.
> Usable binary can be Gromacs 2016 or before.
> 
> Please also SEE Notes on my web site.
> 
> Now I adde MSVC binary.
> 
> 
> Tatsuro    
> 
> 
> 
> 
> - Original Message -
>>  From: Szilárd Páll 
>>  To: Discussion list for GROMACS users ; 
> Tatsuro MATSUOKA 
>>  Cc: "gromacs.org_gmx-users@maillist.sys.kth.se" 
> 
>>  Date: 2019/9/10, Tue 18:21
>>  Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
>> 
>>  Dear Tatsuro,
>> 
>>  Thanks for the contributions!
>> 
>>  Do the builds work out cleanly on cygwin? Are there any additional
>>  instructions we should consider including in our installation guide?
>> 
>>  Cheers,
>>  --
>>  Szilárd
>> 
>>  On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA 
>  
>>  wrote:
>>> 
>>>   I have prepared gromacs binaries for windows (Cygwin 64) on my own web 
> 
>>  site.
>>>   (For testing purpose.)
>>> 
>>>   http://tmacchant3.starfree.jp/gromacs/win/ 
>>> 
>>>   Tatsuro
>>> 
>>>   --
>>>   Gromacs Users mailing list
>>> 
>>>   * Please search the archive at 
>>  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>>> 
>>>   * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists 
>>> 
>>>   * For (un)subscribe requests visit
>>>   https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or 
> send 
>>  a mail to gmx-users-requ...@gromacs.org.
>> 
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists 
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.
> 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Tatsuro MATSUOKA
Thank you for your comments.
As in the cmake configure process message, gcc on 64-bit Windows is buggy for 
the AVX SIMD instructions.


1. I cannot use -DGMX_BUILD_OWN_FFTW=ON. I used the FFTW from the Cygwin repo.
2. With AVX I could build Gromacs 2018 or later, but the binaries are broken.
Usable binaries can be built for Gromacs 2016 or before.

Please also see the Notes on my web site.

Now I added MSVC binaries.


Tatsuro    




- Original Message -
> From: Szilárd Páll 
> To: Discussion list for GROMACS users ; Tatsuro 
> MATSUOKA 
> Cc: "gromacs.org_gmx-users@maillist.sys.kth.se" 
> 
> Date: 2019/9/10, Tue 18:21
> Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> Dear Tatsuro,
> 
> Thanks for the contributions!
> 
> Do the builds work out cleanly on cygwin? Are there any additional
> instructions we should consider including in our installation guide?
> 
> Cheers,
> --
> Szilárd
> 
> On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA  
> wrote:
>> 
>>  I have prepared gromacs binaries for windows (Cygwin 64) on my own web 
> site.
>>  (For testing purpose.)
>> 
>>  http://tmacchant3.starfree.jp/gromacs/win/ 
>> 
>>  Tatsuro
>> 
>>  --
>>  Gromacs Users mailing list
>> 
>>  * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>> 
>>  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists 
>> 
>>  * For (un)subscribe requests visit
>>  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send 
> a mail to gmx-users-requ...@gromacs.org.
> 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Tatsuro MATSUOKA
Thanks for the comments.


> I'm curious: is there any advantage of using Cygwin over using WSL  
> (https://docs.microsoft.com/en-us/windows/wsl/install-win10 ) for using 
> GROMACS?
> If you use WSL, then installing GROMACS on Windows is trivial.

First "Notes for Cygwin build"
I mentioned WSL build as :

On Windows 10 64-bit, one of the most efficient methods to build GROMACS is to 
use the WSL (Windows Subsystem for Linux). Information on setting up the WSL 
and on the steps for building GROMACS inside it can be found via search 
engines (e.g. Google). Building GROMACS on Cygwin is nevertheless sometimes useful 
because the resulting binaries are portable.
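
A minimal, hedged sketch of that WSL route (assuming an Ubuntu distribution inside 
WSL; the packaged build is the quickest start, and building from source works as on 
any other Linux system):

sudo apt-get update
sudo apt-get install gromacs   # Ubuntu's packaged GROMACS build
gmx --version                  # confirm the install and the detected SIMD support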

I have now added MSVC-compiled binaries (not using icc). 
At the moment I can only use AVX, but not AVX2 or AVX512.


Tatsuro


- Original Message -
> From: "Schulz, Roland" 
> To: "gmx-us...@gromacs.org" ; Tatsuro MATSUOKA 
> 
> Cc: "gromacs.org_gmx-users@maillist.sys.kth.se" 
> 
> Date: 2019/9/11, Wed 02:49
> Subject: RE: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> I'm curious: is there any advantage of using Cygwin over using WSL  
> (https://docs.microsoft.com/en-us/windows/wsl/install-win10 ) for using 
> GROMACS?
> If you use WSL than installing GROMACS on Windows is trivial.
> 
> MSVC has also AVX512 support:
> https://devblogs.microsoft.com/cppblog/microsoft-visual-studio-2017-supports-intel-avx-512/
>  
> 
> https://github.com/MicrosoftDocs/cpp-docs/issues/1078 
> 
> I haven't tested whether AVX2 or AVX512 works correctly with MSVC.
> 
> Also note that it's possible to produce native Windows binaries (without a 
> dependency on Cygwin/Mingw/WSL) compiled for AVX2 and AVX512 with LLVM and 
> ICC:
> https://github.com/boostorg/hana/wiki/Setting-up-Clang-on-Windows#visual-studio-2015-with-clangllvm-clang-cl
>  
> 
> https://software.intel.com/en-us/system-studio/choose-download 
> 
> Roland
> 
>>  -Original Message-
>>  From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
>>  [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
>>  Szilárd Páll
>>  Sent: Tuesday, September 10, 2019 2:21 AM
>>  To: Discussion list for GROMACS users ; 
> Tatsuro
>>  MATSUOKA 
>>  Cc: gromacs.org_gmx-users@maillist.sys.kth.se
>>  Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
>> 
>>  Dear Tatsuro,
>> 
>>  Thanks for the contributions!
>> 
>>  Do the builds work out cleanly on cygwin? Are there any additional
>>  instructions we should consider including in our installation guide?
>> 
>>  Cheers,
>>  --
>>  Szilárd
>> 
>>  On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA
>>   wrote:
>>  >
>>  > I have prepared gromacs binaries for windows (Cygwin 64) on my own web
>>  site.
>>  > (For testing purpose.)
>>  >
>>  > http://tmacchant3.starfree.jp/gromacs/win/ 
>>  >
>>  > Tatsuro
>>  >
>>  > --
>>  > Gromacs Users mailing list
>>  >
>>  > * Please search the archive at
>>  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>>  posting!
>>  >
>>  > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists 
>>  >
>>  > * For (un)subscribe requests visit
>>  > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>  send a mail to gmx-users-requ...@gromacs.org.
>>  --
>>  Gromacs Users mailing list
>> 
>>  * Please search the archive at
>>  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>>  posting!
>> 
>>  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists 
>> 
>>  * For (un)subscribe requests visit
>>  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
>>  a mail to gmx-users-requ...@gromacs.org.
> 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Schulz, Roland
PS:
When it comes to installing an FFT library to compile a native binary, in my 
experience the two easiest choices are:
- If you use ICC anyhow, use MKL instead of FFTW; this requires no extra steps.
- Otherwise, use the FFTW3 port from vcpkg; just make sure to enable AVX/AVX2.
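
A hedged sketch of those two routes (exact option names and paths can differ between 
GROMACS, MKL and vcpkg versions; the vcpkg location below is illustrative):

# 1) ICC + MKL: let GROMACS use MKL for FFTs, no separate FFTW install needed
cmake .. -DGMX_FFT_LIBRARY=mkl

# 2) FFTW3 from vcpkg for a native MSVC/LLVM build (check the port options so that AVX/AVX2 are enabled)
vcpkg install fftw3:x64-windows
cmake .. -DGMX_FFT_LIBRARY=fftw3 -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake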

Roland

> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
> Schulz, Roland
> Sent: Tuesday, September 10, 2019 10:49 AM
> To: gmx-us...@gromacs.org; Tatsuro MATSUOKA 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> I'm curious: is there any advantage of using Cygwin over using WSL
> (https://docs.microsoft.com/en-us/windows/wsl/install-win10) for using
> GROMACS?
> If you use WSL than installing GROMACS on Windows is trivial.
> 
> MSVC has also AVX512 support:
> https://devblogs.microsoft.com/cppblog/microsoft-visual-studio-2017-
> supports-intel-avx-512/
> https://github.com/MicrosoftDocs/cpp-docs/issues/1078
> 
> I haven't tested whether AVX2 or AVX512 works correctly with MSVC.
> 
> Also note that it's possible to produce native Windows binaries (without a
> dependency on Cygwin/Mingw/WSL) compiled for AVX2 and AVX512 with
> LLVM and ICC:
> https://github.com/boostorg/hana/wiki/Setting-up-Clang-on-
> Windows#visual-studio-2015-with-clangllvm-clang-cl
> https://software.intel.com/en-us/system-studio/choose-download
> 
> Roland
> 
> > -Original Message-
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf
> > Of Szilárd Páll
> > Sent: Tuesday, September 10, 2019 2:21 AM
> > To: Discussion list for GROMACS users ; Tatsuro
> > MATSUOKA 
> > Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> >
> > Dear Tatsuro,
> >
> > Thanks for the contributions!
> >
> > Do the builds work out cleanly on cygwin? Are there any additional
> > instructions we should consider including in our installation guide?
> >
> > Cheers,
> > --
> > Szilárd
> >
> > On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA
> >  wrote:
> > >
> > > I have prepared gromacs binaries for windows (Cygwin 64) on my own
> > > web
> > site.
> > > (For testing purpose.)
> > >
> > > http://tmacchant3.starfree.jp/gromacs/win/
> > >
> > > Tatsuro
> > >
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> > > or
> > send a mail to gmx-users-requ...@gromacs.org.
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> --
> Gromacs Users mailing list
> 
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
> a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Schulz, Roland
I'm curious: is there any advantage of using Cygwin over using WSL  
(https://docs.microsoft.com/en-us/windows/wsl/install-win10) for using GROMACS?
If you use WSL, then installing GROMACS on Windows is trivial.

MSVC also has AVX512 support:
https://devblogs.microsoft.com/cppblog/microsoft-visual-studio-2017-supports-intel-avx-512/
https://github.com/MicrosoftDocs/cpp-docs/issues/1078

I haven't tested whether AVX2 or AVX512 works correctly with MSVC.

Also note that it's possible to produce native Windows binaries (without a 
dependency on Cygwin/Mingw/WSL) compiled for AVX2 and AVX512 with LLVM and ICC:
https://github.com/boostorg/hana/wiki/Setting-up-Clang-on-Windows#visual-studio-2015-with-clangllvm-clang-cl
https://software.intel.com/en-us/system-studio/choose-download

Roland

> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
> Szilárd Páll
> Sent: Tuesday, September 10, 2019 2:21 AM
> To: Discussion list for GROMACS users ; Tatsuro
> MATSUOKA 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] gromacs binaries for windows (Cygwin 64)
> 
> Dear Tatsuro,
> 
> Thanks for the contributions!
> 
> Do the builds work out cleanly on cygwin? Are there any additional
> instructions we should consider including in our installation guide?
> 
> Cheers,
> --
> Szilárd
> 
> On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA
>  wrote:
> >
> > I have prepared gromacs binaries for windows (Cygwin 64) on my own web
> site.
> > (For testing purpose.)
> >
> > http://tmacchant3.starfree.jp/gromacs/win/
> >
> > Tatsuro
> >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
> --
> Gromacs Users mailing list
> 
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
> a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Szilárd Páll
Dear Tatsuro,

Thanks for the contributions!

Do the builds work out cleanly on cygwin? Are there any additional
instructions we should consider including in our installation guide?

Cheers,
--
Szilárd

On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA  wrote:
>
> I have prepared gromacs binaries for windows (Cygwin 64) on my own web site.
> (For testing purpose.)
>
> http://tmacchant3.starfree.jp/gromacs/win/
>
> Tatsuro
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs lincs warning: relative constraint deviation after lincs

2019-09-07 Thread Mark Abraham
It could. A script could contain anything :-) We can't tell unless you want
to e.g. share it on a pastebin service.

Mark
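
For what it is worth, a minimal sketch of a driver loop that keeps going after one run 
aborts and only analyzes runs that actually produced confout.gro (the run_$i directory 
layout and the analyze.sh step are hypothetical):

for i in $(seq 1 1000); do
    if ! ( cd run_$i && gmx mdrun -s topol.tpr ); then
        echo "run $i aborted or failed to start (e.g. LINCS errors), skipping analysis"
        continue
    fi
    [ -f run_$i/confout.gro ] && ./analyze.sh run_$i/confout.gro
done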

On Sat, 7 Sep 2019 at 18:09, rajat punia  wrote:

> I am using the output (confout.gro) to analyze some results using the same
> script. I think non-existence of confout.gro is pausing the script.
> Can this be a possible reason?
>
> On Sat, 7 Sep 2019 at 20:06, Mark Abraham 
> wrote:
>
> > Hi,
> >
> > Such errors do lead to a normal gmx mdrun aborting. So the question is
> more
> > what is in your script that might affect that?
> >
> > Mark
> >
> > On Sat., 7 Sep. 2019, 07:39 rajat punia,  wrote:
> >
> > > Hi, I am trying to run multiple (1000) md simulations using a shell
> > script.
> > > Some of the simulations (say simulation no. 56) shows error " lincs
> > > warning:  relative constraint deviation after lincs" and get paused
> > there.
> > > In that case, i have to manually abort that particular simulation
> (using
> > > Ctrl+C) and then subsequent simulation starts.
> > > What i want is, the simulation that shows this error get aborted
> > > automatically and further simulations continues. What changes can i do
> in
> > > the script file for this?
> > > Your suggestions would be highly appreciated.
> > > Thanks
> > > NOTE: I don't want to resolve this error. I just want the simulation
> > which
> > > shows this error get automatically aborted.
> > >
> > > --
> > > *Regards,*
> > > *Rajat Punia*
> > > *PhD Chemical Engineering*
> > > *IIT Delhi*
> > > *+91-9821210386*
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
>
>
> --
> *Regards,*
> *Rajat Punia*
> *PhD Chemical Engineering*
> *IIT Delhi*
> *+91-9821210386*
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs lincs warning: relative constraint deviation after lincs

2019-09-07 Thread rajat punia
I am using the output (confout.gro) to analyze some results in the same
script. I think the non-existence of confout.gro is what is pausing the script.
Could this be the reason?

On Sat, 7 Sep 2019 at 20:06, Mark Abraham  wrote:

> Hi,
>
> Such errors do lead to a normal gmx mdrun aborting. So the question is more
> what is in your script that might affect that?
>
> Mark
>
> On Sat., 7 Sep. 2019, 07:39 rajat punia,  wrote:
>
> > Hi, I am trying to run multiple (1000) md simulations using a shell
> script.
> > Some of the simulations (say simulation no. 56) shows error " lincs
> > warning:  relative constraint deviation after lincs" and get paused
> there.
> > In that case, i have to manually abort that particular simulation (using
> > Ctrl+C) and then subsequent simulation starts.
> > What i want is, the simulation that shows this error get aborted
> > automatically and further simulations continues. What changes can i do in
> > the script file for this?
> > Your suggestions would be highly appreciated.
> > Thanks
> > NOTE: I don't want to resolve this error. I just want the simulation
> which
> > shows this error get automatically aborted.
> >
> > --
> > *Regards,*
> > *Rajat Punia*
> > *PhD Chemical Engineering*
> > *IIT Delhi*
> > *+91-9821210386*
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


-- 
*Regards,*
*Rajat Punia*
*PhD Chemical Engineering*
*IIT Delhi*
*+91-9821210386*
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs lincs warning: relative constraint deviation after lincs

2019-09-07 Thread Mark Abraham
Hi,

Such errors do lead to a normal gmx mdrun aborting. So the question is more
what is in your script that might affect that?

Mark

On Sat., 7 Sep. 2019, 07:39 rajat punia,  wrote:

> Hi, I am trying to run multiple (1000) MD simulations using a shell script.
> Some of the simulations (say simulation no. 56) show the error "lincs
> warning: relative constraint deviation after lincs" and get paused there.
> In that case, I have to manually abort that particular simulation (using
> Ctrl+C) before the subsequent simulation starts.
> What I want is for the simulation that shows this error to be aborted
> automatically so that the remaining simulations continue. What changes can I make in
> the script file for this?
> Your suggestions would be highly appreciated.
> Thanks
> NOTE: I don't want to resolve this error. I just want the simulation that
> shows this error to be aborted automatically.
>
> --
> *Regards,*
> *Rajat Punia*
> *PhD Chemical Engineering*
> *IIT Delhi*
> *+91-9821210386*
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 2019.3 compilation with GPU support

2019-08-26 Thread Mark Abraham
Hi,

All versions of icc require a standard library from an installation of
gcc. There are various dependencies between them, and your system admins
should have an idea which one is known to work well in your case. If you
need to help the GROMACS build find the right one, do check out the GROMACS
install guide for how to direct a particular gcc to be used with icc. I
would suggest nothing earlier than gcc 5.

Mark
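
A hedged sketch of the kind of configure line this leads to (the gcc path is a 
placeholder for a site-specific installation or module; CUDA_HOST_COMPILER is the 
usual CMake variable for pointing nvcc at a particular host gcc):

cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_FFT_LIBRARY=mkl \
      -DCMAKE_C_COMPILER=mpiicc -DCMAKE_CXX_COMPILER=mpiicpc \
      -DCUDA_HOST_COMPILER=/path/to/newer-gcc/bin/g++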


On Mon, 26 Aug 2019 at 17:51, Prithwish Nandi 
wrote:

> Hi,
> I am trying to compile GROMACS 2019.3 on our HPC cluster. I successfully
> compiled the single- and double-precision versions, but the build produces errors
> for GPU support. (The error message is pasted below.)
>
> I am using Intel/2018 update 4 and CUDA/10.0. The base gcc version is
> 4.8.5. I am using MKL as the FFT library, mpiicc as the C compiler, and mpiicpc
> as the CXX compiler.
>
> The error I am getting is given below.
>
> Do you have any clue for this?
>
> Thanks, //PN
>
> The error message:
>
>   Error generating file
>
> /xxx/xxx/gromacs/intel/2019.3/kay/gromacs-2019.3/build_gpu/src/gromacs/CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/./libgromacs_generated_nbnxn_cuda_kernel_pruneonly.cu.o
>
>
> make[2]: ***
> [src/gromacs/CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda_kernel_pruneonly.cu.o]
> Error 1
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(68): error:
> identifier "_mm_set1_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(70): error:
> identifier "_mm_set1_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(112): error:
> identifier "_mm_set_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(123): error:
> identifier "_mm_set_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(123): error:
> identifier "_mm_and_si128" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(172): error:
> identifier "_mm_set_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(175): error:
> identifier "_mm_or_si128" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(176): error:
> identifier "_mm_sub_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(177): error:
> identifier "_mm_mul_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(178): error:
> identifier "_mm_hadd_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(178): error:
> identifier "_mm_cvtsd_f64" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(185): error:
> identifier "_mm_mul_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(185): error:
> identifier "_mm_add_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(187): error:
> identifier "_mm_storeu_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(68): error:
> identifier "_mm_set1_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(70): error:
> identifier "_mm_set1_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(112): error:
> identifier "_mm_set_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(123): error:
> identifier "_mm_set_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(123): error:
> identifier "_mm_and_si128" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(172): error:
> identifier "_mm_set_epi64x" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(175): error:
> identifier "_mm_or_si128" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(176): error:
> identifier "_mm_sub_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(177): error:
> identifier "_mm_mul_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(178): error:
> identifier "_mm_hadd_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(178): error:
> identifier "_mm_cvtsd_f64" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(185): error:
> identifier "_mm_mul_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(185): error:
> identifier "_mm_add_pd" is undefined
>
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(187): error:
> identifier "_mm_storeu_pd" is undefined
>
> 14 errors detected in the compilation of
> "/localscratch/397189/tmpxft_6280_-6_gpubonded-impl.cpp4.ii".
>
>
>
>
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read 

Re: [gmx-users] gromacs is not recognising opls ff

2019-08-26 Thread Justin Lemkul




On 8/26/19 7:04 AM, Ayesha Fatima wrote:

Dear All,
I have come across another issue.
When I want to use the OPLS itp for cholesterol, it gives me this error: "Fatal
error:
Residue 'OL' not found in residue topology database"
It does not take CHOL as the residue name, as given below.


That suggests your input file has incorrect formatting. If it is a PDB 
file, the column positions are fixed. The error is consistent with the 
"CHOL" residue name having been shifted by two characters/columns.


-Justin


[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass  typeB
chargeB  massB
 1  opls_158   1   CHOL  C1  1   0.2050  12.011
 2  opls_140   1   CHOL  H1  1   0.0600  1.008
 3  opls_154   1   CHOL  O1  1  -0.6830  15.9994

Any suggestions?
Thank you
regards


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs FEP tutorial

2019-08-20 Thread Justin Lemkul




On 8/20/19 1:36 PM, Alex Mathew wrote:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/free_energy/01_theory.html



The protocol in the tutorial is simply the elimination of LJ parameters 
to compute the vdW contribution to free energy of solvation. It is not 
(by design) a complete cycle that computes a full free energy of solvation.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs FEP tutorial

2019-08-20 Thread Alex Mathew
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/free_energy/01_theory.html


>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs FEP tutorial

2019-08-19 Thread Mark Abraham
Hi,

To which tutorial are you referring?

Mark

On Mon., 19 Aug. 2019, 19:09 Alex Mathew,  wrote:

> Can anyone tell me which thermodynamic cycle was used in the FEP tutorial?
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs-GPU help

2019-08-17 Thread Benson Muite

Hi,

This may also be helpful:

http://www.hecbiosim.ac.uk/jade-benchmarks

https://github.com/hpc-uk/archer-benchmarks/blob/master/reports/single_node/index.md#tab16

Regards,

Benson
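
As a hedged illustration of the command-line question quoted below, for a node like 
the one described (2 x 20 cores, 2 x V100); the thread and rank counts are starting 
points to benchmark, not recommendations:

gmx mdrun -deffnm md -ntmpi 1 -ntomp 20 -gpu_id 0    # one GPU, one rank
gmx mdrun -deffnm md -ntmpi 2 -ntomp 20 -gpu_id 01   # two GPUs, one rank per GPU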

On 8/16/19 5:45 AM, Benson Muite wrote:

Hi,

You may wish to search the list archives as indicated at:

Please search the archive 
at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before 
posting!


See also:

https://arxiv.org/abs/1903.05918

Benson

On 8/16/19 5:37 AM, tarzan p wrote:
Hi all, I have started using GROMACS recently on my workstation (2 x 
Intel 6148 (20 cores each) and 2 x Tesla V100). I have compiled it as 
per the instructions in the NVIDIA article "Run GROMACS 3X Faster on NVIDIA GPUs".

I would like to request a proper benchmark for the GPU version and would 
like to know how to run it, i.e. the command line to use one GPU and two GPUs.

With best wishes

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs-GPU help

2019-08-15 Thread Benson Muite

Hi,

You may wish to search the list archives as indicated at:

Please search the archive 
at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

See also:

https://arxiv.org/abs/1903.05918

Benson

On 8/16/19 5:37 AM, tarzan p wrote:

Hi all, I have started using GROMACS recently on my workstation (2 x Intel 6148 
(20 cores each) and 2 x Tesla V100). I have compiled it as per the instructions 
in the NVIDIA article "Run GROMACS 3X Faster on NVIDIA GPUs".

I would like to request a proper benchmark for the GPU version and would like 
to know how to run it, i.e. the command line to use one GPU and two GPUs.
With best wishes

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 4.5.7 compatibility with Titan X GPU

2019-08-08 Thread Mark Abraham
Hi,

Unfortunately that version of GROMACS hasn't been tested or supported in
well over five years, so probably it is simply incompatible with modern
GPUs. You could try explicit solvent in modern GROMACS, which might be
comparably fast with that old version :-) Or AMBER if you really need
implicit solvent.

Mark

On Thu, 8 Aug 2019 at 16:42, Timothy Hurlburt 
wrote:

> Any feedback would be appreciated.
> Thank you.
>
> On Fri, Aug 2, 2019 at 11:29 AM Timothy Hurlburt <
> timothy.hurlb...@uoit.net>
> wrote:
>
> > Hi,
> > I am trying to install GPU accelerated Gromacs 4.5.7 for use with
> implicit
> > solvent.
> > I am using an Nividia GM200 [GeForce GTX TITAN X] GPU.
> >
> > When I tried to install Gromacs with CUDA toolkit 3.1 and OpenMM 2.0 I
> get
> > this error: "SetSim copy to cSim failed invalid device symbol openMM".
> > -Based on this discussion
> >
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2013-January/077622.html
> > I presumed toolkit 3.1 is not compatible with my gpu so I tried toolkit
> > 7.5.
> >
> > I installed Gromacs with CUDA toolkit 7.5 and OpenMM 2.0 without fatal
> > errors. However when I tried mdrun-gpu I got this fatal error: "The
> > requested platform "CUDA" could not be found."
> >
> > I ran these commands
> > export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
> > export CUDA_HOME=/usr/local/cuda
> > mdrun-gpu -s run.tpr -deffnm run -v
> >
> > Then I got this error message
> > Fatal error:
> > The requested platform "CUDA" could not be found.
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> >
> > I am not sure if my GPU is incompatible or whether I am missing some
> flags.
> > Any help would be greatly appreciated.
> >
> > Thanks
> >
> >
> >
> >
> >
> >
> >
> >
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs 4.5.7 compatibility with Titan X GPU

2019-08-08 Thread Timothy Hurlburt
Any feedback would be appreciated.
Thank you.

On Fri, Aug 2, 2019 at 11:29 AM Timothy Hurlburt 
wrote:

> Hi,
> I am trying to install GPU accelerated Gromacs 4.5.7 for use with implicit
> solvent.
> I am using an Nvidia GM200 [GeForce GTX TITAN X] GPU.
>
> When I tried to install Gromacs with CUDA toolkit 3.1 and OpenMM 2.0 I get
> this error: "SetSim copy to cSim failed invalid device symbol openMM".
> -Based on this discussion
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2013-January/077622.html
> I presumed toolkit 3.1 is not compatible with my gpu so I tried toolkit
> 7.5.
>
> I installed Gromacs with CUDA toolkit 7.5 and OpenMM 2.0 without fatal
> errors. However when I tried mdrun-gpu I got this fatal error: "The
> requested platform "CUDA" could not be found."
>
> I ran these commands
> export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
> export CUDA_HOME=/usr/local/cuda
> mdrun-gpu -s run.tpr -deffnm run -v
>
> Then I got this error message
> Fatal error:
> The requested platform "CUDA" could not be found.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
> I am not sure if my GPU is incompatible or whether I am missing some flags.
> Any help would be greatly appreciated.
>
> Thanks
>
>
>
>
>
>
>
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs-5.1.4 with CHARMM36 March 2019 RNA Residue Fatal Error

2019-08-02 Thread Justin Lemkul




On 8/2/19 12:22 AM, Joseph,Newlyn wrote:

Hello,


I'm running into the following error when trying to pdb2gmx my PDB file.


Program gmx pdb2gmx, VERSION 5.1.4
Source code file: 
/gpfs/apps/hpc.rhel7/Packages/Apps/Gromacs/5.1.4/Dist_514/gromacs-5.1.4/src/gromacs/gmxpreprocess/resall.c,
 line: 645

Fatal error:
Residue 'C' not found in residue topology database
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors?


I presume I'm naming my residues incorrectly, but upon closer inspection of the 
merged.rtp file within the forcefield, I see no section for RNA residues. I'm 
attempting to simulate an RNA that has the following as the first couple lines 
in the PDB:


In CHARMM, both DNA and RNA are named ADE, CYT, GUA, THY/URA and are 
generated as RNA. One then patches the RNA residue to become DNA by 
removing the 2'-OH. We can't do this in GROMACS, so there are fixed 
residue names.


RNA: ADE, CYT, GUA, URA
DNA: DA, DC, DG, DT

These are all in merged.rtp.

-Justin



REMARK  GENERATED BY CHARMM-GUI (HTTP://WWW.CHARMM-GUI.ORG) V2.0 ON OCT, 26. 
2018. JOB
REMARK  READ PDB, MANIPULATE STRUCTURE IF NEEDED, AND GENERATE TOPOLOGY FILE
REMARK   DATE:10/27/18  0:52: 0  CREATED BY USER: apache
ATOM  1  H5T   C A   1 -29.997 -20.428   3.250  1.00  0.00   H
ATOM  2  O5'   C A   1 -30.685 -20.928   3.695  1.00  0.00   O
ATOM  3  C5'   C A   1 -30.499 -22.286   3.303  1.00  0.00   C
ATOM  4  H5'   C A   1 -30.674 -22.932   4.190  1.00  0.00   H
ATOM  5 H5''   C A   1 -29.451 -22.408   2.957  1.00  0.00   H
ATOM  6  C4'   C A   1 -31.442 -22.699   2.192  1.00  0.00   C
ATOM  7  H4'   C A   1 -31.693 -23.777   2.284  1.00  0.00   H
ATOM  8  O4'   C A   1 -32.682 -21.940   2.275  1.00  0.00   O
ATOM  9  C1'   C A   1 -33.141 -21.611   0.975  1.00  0.00   C
ATOM 10  H1'   C A   1 -34.185 -21.975   0.869  1.00  0.00   H


Any help or suggestions?


Newlyn Joseph, M.S.
M.D. Candidate, Class of 2023
University of Connecticut School of Medicine
nejos...@uchc.edu | new.josep...@gmail.com
(203) 584-6402
sent from Outlook Web App


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs pullcode

2019-08-01 Thread Justin Lemkul



On 7/31/19 4:20 AM, zhaox wrote:

Hi,
There are two groups in my system. When I use the pull code to pull 
one group along the X axis, setting pull_coord1_geometry = direction-periodic, 
I am confused by the "distance at start" and the "reference at t=0". 
Could anyone tell me how to understand these? And if I use the absolute 
reference, how should I understand them? For example, when I set 
pull_coord1_origin = 0 0 0, how is the "distance at start" calculated?

Thank you in advance.


I answered this exact question some time ago:

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-July/125833.html

-Justin


My pull code is followed:
; Pull code
freezegrps          = bot    top
freezedim           = Y Y Y  n y n
pull                = yes
pull_ncoords        = 2         ; two reaction coordinates x and z
pull_ngroups        = 2         ; one group defining one reaction 
coordinate

pull_group1_name    = top
pull_group2_name    = bot
pull_coord1_type    = umbrella    ;harmonic potential
pull_coord1_geometry= direction-periodic
pull_coord1_vec     = 1 0 0
pull_coord1_origin  = 0 0 0
pull_coord1_groups  = 0 1
pull_coord1_start   = yes
pull_coord1_k       = 4000
pull_coord1_rate    = 0.005

pull_coord2_type    = constant-force
pull_coord2_geometry= distance
pull_coord2_dim     = n n y
pull_coord2_groups  = 1 2
pull_coord2_start   = yes
pull_coord2_k       = 3838550        ;KJ mol^-1 nm^-1

pull_group1_pbcatom  = 5806
;pull_group2_pbcatom  = 1887
pull-pbc-ref-prev-step-com = yes



cos-acceleration    = 0.05
comm-mode            = linear
nstcomm              = 10
comm-grps            =


zhaox
zh...@nuaa.edu.cn

 

签名由 网易邮箱大师  
定制




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs pullcode

2019-07-08 Thread Justin Lemkul



On 7/6/19 9:17 PM, zhaox wrote:

Hi,
There are two groups in my system. When I use the pull code to pull 
one group along the X axis, setting pull_coord1_geometry = direction, 
I am confused by the "distance at start" and the "reference at t=0". 
Could anyone tell me how to understand these? And if I use the 
absolute reference, how should I understand them? For


Reference at t0 refers to the COM position of group1. Distance at start 
is the COM distance between the two groups.


example, when I set pull_coord1_origin = 0 0 0, what does the 
"distance at start" mean?


This should be the same concept, except that rather than having a 
reference that is the COM of a specified group, it is the coordinate origin.
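
A hedged mdp fragment for that absolute-reference case (values purely illustrative; 
group index 0 selects the absolute reference, so that pull_coord1_origin is used):

pull_coord1_geometry = direction
pull_coord1_vec      = 1 0 0
pull_coord1_groups   = 0 1
pull_coord1_origin   = 0 0 0
pull_coord1_start    = yes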



Thank you in advance.
My pull code is followed:
; Pull code
freezegrps          = bot    top
freezedim           = Y Y Y  N N  N
pull                = yes
pull_ncoords        = 1           ; only one reaction coordinate
pull_ngroups        = 2           ; one group defining one reaction 
coordinate

pull_group1_name    = top
pull_group2_name    = bot
pull_coord1_type    = constant-force
pull_coord1_geometry= direction
;pull_coord1_origin  =
pull_coord1_vec     = 1 0 0
pull_coord1_groups  = 1 2
pull_coord1_start   = yes
;pull_coord1_rate    = -0.0001
pull_coord1_dim     = y n n


Note that pull_coord1_dim is not used with "direction" geometry.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS Output Timestep Changes Over a Long, Check-pointed Simulation

2019-06-19 Thread Mark Abraham
Hi,

Yes. You can do your correlation function in step numbers and scale those
when you produce output.

Mark
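
A quick, hedged way to confirm the spacing from the trajectory itself is to let 
gmx check read the file; it lists the frames it finds and warns if the time step 
between frames is not constant (use whichever trajectory file name applies):

gmx check -f traj.xtc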

On Wed., 19 Jun. 2019, 19:53 Eric R Beyerle,  wrote:

> I'm calculating a time correlation function from the trajectory using an
> in-house code and I wanted to make sure the timesteps between each frame
> are the same (i.e. the 0.2 ps write interval specified in the .mdp
> file). Based on your response, it seems as though they are.
>
> Many thanks,
>
> Eric
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS Output Timestep Changes Over a Long, Check-pointed Simulation

2019-06-19 Thread Eric R Beyerle
I'm calculating a time correlation function from the trajectory using an 
in-house code and I wanted to make sure the timesteps between each frame 
are the same (i.e. the 0.2 ps write interval specified in the .mdp 
file). Based on your response, it seems as though they are.


Many thanks,

Eric
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


  1   2   3   4   5   6   7   8   9   >