thanks Szilard for the reply!
below is the output of the cmake and make commands. It doesn't find sphinx, but
then it doesn't seem to look for it either ... anyway, on this second attempt I
get a different error, no idea why, only diff: the first time I first made
gromacs and only then the manual, th
hi,
so I say:
prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc-7 \
        -DCMAKE_CXX_COMPILER=g++-7 -DGMX_GPU=on \
        -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin \
        -DGMX_BUILD_MANUAL=on
prompt> make -j 4
prompt> make install
prompt> make manual
manual cannot be built because Sphinx
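In case it helps, here is a build sketch for making the manual target work, assuming the problem is simply that Sphinx is missing from the build environment at configure time (the package names and install prefix are examples, adjust for your system):

```shell
# Install Sphinx BEFORE running cmake: GMX_BUILD_MANUAL looks for it at
# configure time, so re-run cmake afterwards from a clean build directory.
sudo apt-get install python3-sphinx      # or: pip install --user sphinx
cmake .. -DGMX_BUILD_OWN_FFTW=ON \
         -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7 \
         -DGMX_GPU=on \
         -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin \
         -DGMX_BUILD_MANUAL=on
make -j 4
make manual      # building the PDF manual also requires a LaTeX installation
make install
```

Note the ordering: running `make manual` in a tree that was configured before Sphinx was installed will keep failing until cmake is re-run.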
Hi,
thanks Szilárd for the reply,
below I include what I get to stdout/stderr and the mdp (the log file is
too long). Obviously the timings are not accurate due to buffering,
but to me it looks as if Gromacs reproducibly spends about 2 minutes
running on ONE thread, and only then switches to using 4. This
Hi,
running a gmx job on a newly set-up debian-buster with an intel i7-3820 CPU and
one Nvidia GTX 1060 6GB
architecture, the job dies without further notice, nothing in the log file, nor in
stdout/stderr ... however in /var/syslog I find, exactly at the time when gromacs
stops, this message:
Jan 26 23
hi,
i notice that gromacs, when i start an MD simulation, usually spends up to a few
minutes using only one (out of several possible) threads. After a while it seems
to have figured something out and then starts to run using more threads. This is
particularly conspicuous if a GPU is also used. It is no
Hi,
gromacs started dying on me lately with rather obscure error messages as in the
subject of this mail. Errors seem to be related to the nvidia driver (see below
for more output, and further below for the mdp file) ... i perform a large
number of short (2ns) sims and this happens perhaps one ou
, the 1060 in combination with gmx
can give particularly large spikes or changes in energy consumption so that even
a 700W PSU cannot cope?
cheers,
michael
>On 23.10.2018 15:12, Michael Brunsteiner wrote:
>> the computers are NOT overclocked, cooling works, cpu temperatures are well
>&
Hi,
this might be considered off-topic, but i believe there is some evidence to the
contrary ... what i see is this:
I have a couple of (fairly new) workstations (specs below) and on at least two
of them, on at least three different occasions, i recently saw the following
behaviour: computer is h
figured it out ... hwloc version 2.0.2 was too new ... once i used hwloc-1.11.11
instead, things worked ...
mic
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Supp
Hi,
In the past I compiled gmx 2016.4 on our old cluster and this worked as
expected.
Now I just compiled 2018.3 only to find that the resulting binary was
unexpectedly slow ...
On each node I have a multi-core architecture with 12 logical threads,
but htop shows that gmx uses only one thread.
hi,
about my hardware gmx has to say:
Hardware topology: Basic
Sockets, cores, and logical processors:
Socket 0: [ 0 6] [ 1 7] [ 2 8] [ 3 9] [ 4 10] [ 5 11]
if i want to run two gmx jobs simultaneously on this one node, i usually do
something like:
prompt> gmx md
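For two concurrent runs on a node like the one above, a common pattern (a sketch only; the -deffnm names are hypothetical, and the thread counts match the 6-core/12-thread socket shown) is to give each job half the logical CPUs and pin them to disjoint sets:

```shell
# Each job gets 6 OpenMP threads; -pin on with different -pinoffset values
# keeps the two runs on non-overlapping logical CPUs so they do not migrate
# onto each other. Check the pinning reported in each md.log to confirm the
# offsets really give disjoint sets on your topology.
gmx mdrun -deffnm job1 -ntmpi 1 -ntomp 6 -pin on -pinoffset 0 &
gmx mdrun -deffnm job2 -ntmpi 1 -ntomp 6 -pin on -pinoffset 6 &
wait
```

Without explicit pinning, two mdrun processes tend to drift across the same cores and both slow down.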
Hi,
I ran a few MD runs with identical input files (the SAME tpr file; mdp
included below) on the same computer
with gmx 2018 and observed rather large performance variations (~50%) as in:
grep Performance */mcz1.log
7/mcz1.log:Performance: 98.510 0.244
7d/mcz1.log:Performance: 140
doing some test runs to optimize the mdrun settings for my hardware i noticed a
couple of things i fail to understand (everything below is gmx-2018 on an intel
CPU, 6 cores, 2 threads each, and a GTX 1060)
1) when i start a run as in, e.g.:
prompt> gmx mdrun -v -nt 12 -ntmpi 1 -ntomp 12 -deffnm mc
= 4.5e-5 4.5e-5 4.5e-5 0 0 0
ref-p = 1 1 1 0 0 0
;
annealing = single
annealing-npoints = 2
annealing-time = 0 10
annealing-temp = 511 491
From: Szilárd Páll
To: Michael Brunsteiner
Sent: Thursday, February 22, 2018 4:15 PM
Subject: Re: [gmx-users] 2
hi
just installed gmx-2018 on an x86_64 PC with a Geforce GTX 780 and the
cuda software directly from the nvidia webpage (it didn't work using the debian
nvidia packages)
output of lscpu is included below.
i find that:
1) 2018 is slightly faster (~5%) than 2016.
2) both 2016 and 2018 use the GPU,
but
Szilárd wrote:
> Option A) Get a gcc 5.x (e.g. compile from source)
> Option B) Install CUDA 9.1 (and required driver) which is compatible
> with gcc 6.3
> (http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)
didn't try A (there might be a good reason why stretch does not have gcc
as compilation of gmx-2018 didn't work for me i went back to 2016.4
after some googling i found that the following commands work, and gmx
compiles without any complaints ...
> cmake .. -DGMX_GPU=on -DNVML_FOUND=1
> -DNVML_LIBRARY=/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
> -DNVML_INCLUDE_DIR=
Hi,
I have problems compiling gromacs-2018 (January 10 release) on debian stretch
(vanilla)
my specs: Linux 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3+deb9u1 (2017-12-23) x86_64
GNU/Linux
gcc-6.3.0, Cuda compilation tools, release 8.0, V8.0.44
graphics: Geforce GTX 780.
I do:
> cmake .. -DGMX_GPU=on
w
hi,
I just tried to install gmx (2015.4) on my desktop with vanilla debian
(stable/stretch with the debian nvidia/cuda driver packages installed)
as recommended in INSTALL ...
prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on
-DCMAKE_INSTALL_PREFIX=/home/user/local/gromacs-2016-4-bin
finishes
> Anjali Patel wrote:
> What is the procedure of using gromacs specially for inorganic compound?
> I am beginner for gromacs simulation package, I used it for organic
> compounds. But unable to understand how to deal with inorganic compounds
> and what about the force field and solvent we can
Thanks Peter and Mark!
I'll try running on single cores ...
however, comparing the timings I believe the bottleneck might be the time spent
in I/O (reading/writing to disk), and here running several jobs on a single node
with multiple cores might make things even worse.
also funny: In the log files
Hi,
I have to run a lot (many thousands) of very short MD reruns with gmx. Using
gmx-2016.3 it works without problems, however, what i see is that the overall
performance (in terms of REAL execution time as measured with the unix time
command) which I get on a relatively new computer is poorer than
.00 0.00
ATOM 8 CMP1 8 23.000 30.000 103.000 0.00 0.00
--
=====
Michael Brunsteiner, PhD
Senior Researcher, Area II
Research Center Pharmaceutical Engineering GmbH
A-8010 Graz, Inffeldgasse 13
phone: +43 316 873 30908
mobile: +43 66
ch file or directory
#include
^
compilation terminated.
any ideas how to resolve this?
thanks,
Michael
hi,
I've been trying to perform a normal mode analysis using (after a thorough
energy minimization of my system)
prompt> mdrun_d -v -s nm.tpr -o nm.trr -mtx nm.mtx
[...]
Maximum force: 8.56366e+00
The force is probably not small enough to ensure that you are at a minimum.
Be aware that negative
On 4/21/15 10:41 AM, Michael Brunsteiner wrote:
>>
>> Hi,
>> I just tried to continue a simulation, after having extended the total time
>> using
>>
>> gmx convert-tpr -s original-xxx.tpr -o xxx.tpr -until 40
>> mdrun -nt 4 -deffnm xxx -cpi xxx.c
Hi,
I just tried to continue a simulation, after having extended the total time
using
gmx convert-tpr -s original-xxx.tpr -o xxx.tpr -until 40
mdrun -nt 4 -deffnm xxx -cpi xxx.cpt
xxx.cpt, xxx.log, and xxx.trr are present in the working directory and are
the original output files, i.e.,
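For reference, the continuation recipe described above can be sketched as follows (a sketch of the intended workflow, not a diagnosis of the problem; mdrun appends to the existing output files by default when -cpi is given):

```shell
# Extend the end time stored in the tpr, then restart from the checkpoint.
gmx convert-tpr -s original-xxx.tpr -o xxx.tpr -until 40
# -cpi makes mdrun continue from the checkpoint and append to xxx.log/xxx.trr;
# add -noappend to write new, numbered output files instead of appending.
mdrun -nt 4 -deffnm xxx -cpi xxx.cpt
```

Appending requires that the checkpoint and the existing output files match; if they were moved or renamed since the first part of the run, mdrun will refuse to continue.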
>
> From: Szilárd Páll
>To: Michael Brunsteiner
>Cc: Discussion list for GROMACS users ;
>"gromacs.org_gmx-users@maillist.sys.kth.se"
>
>Sent: Wednesday, September 17, 2014 4:18 PM
>Subject: Re:
Szilárd Páll wrote:
Subject: Re: [gmx-users] GPU waits for CPU, any remedies?
To: "Michael Brunsteiner"
Cc: "Discussion list for GROMACS users" ,
"gromacs.org_gmx-users@maillist.sys.kth.se"
Date: Tuesday, September 16, 2014, 6:52 PM
Well, it looks like you are i)
hi,
testing a new computer we just got, i found that for the system i use,
performance is sub-optimal, as the GPU appears to be about 50% faster than the
CPU (see below for details)
the dynamic load balancing that is performed automatically at the beginning
of each simulation does not seem to impro
Dear all,
i just got new hardware, and ran a couple of tests, comparing performance of
the new machine to results from another, five year old, computer.
I found the outcome (see below) somewhat disappointing, and write to
see if other people got similar results, or if perhaps i overlooked somethin
>
> From: Szilárd Páll
>To: Michael Brunsteiner
>Sent: Thursday, July 17, 2014 2:00 AM
>Subject: Re: [gmx-users] hardware setup for gmx
>
>
>Dear Michael,
>
>I'd appreciate if you kept the further discussion on the gmx-users list.
>
>On
Hi,
I made myself a force field for an organic polymer using a terminology/syntax
similar to amino acids, so that i can make topology files directly from pdb files
using pdb2gmx.
This worked nicely up to gmx-4.6.5, but when trying 5.0 pdb2gmx
complains about not finding particular atoms in a pdb f
Hi,
I am sorry in case I overlooked this in the release-notes,
but I didn't find answers there to:
1) does gmx-5.0 support free energy calculations + GPU ?
2) does gmx-5.0 support double precision + GPU ?
cheers
michael
===
Why be happy when you could be
Hi,
can anybody recommend a hardware setup to perform MD runs (with PME) that has a
good price-performance ratio? ... in particular I'd be interested in learning
which combinations of CPU and GPU can be expected to provide a good
FLOPS-per-dollar ratio with the more recent gmx versions (4.6
Hi Amninder,
DeltaU_mix = <U_mix> - <U_A> * N_A - <U_B> * N_B
<U_mix> ... average total intermolecular energy of the blend
<U_A> ... the average interaction energy of a single molecule of type A in a
pure sample of A
<U_B> ... the average interaction energy of a single molecule of type B in a
pure sample of B
N_A ... the numb
Dear all,
I am interested in H-bond life-times and tried to use g_hbond
with the -ac option
(gmx 4.6.5) - but i am not sure how to interpret
the output ...
1) in the output written to stdout there is a table:
Type Rate (1/ps) Time (ps) DG (kJ/mol) Chi^2
Forward
etc...
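For context, a table like the one above is produced by an invocation along these lines (a sketch; the trajectory and topology file names are hypothetical):

```shell
# -ac writes the H-bond existence autocorrelation function to hbac.xvg;
# the forward/backward rates, lifetime, and DG printed to stdout come from
# fitting that ACF (Luzar-Chandler style analysis).
g_hbond -f traj.xtc -s topol.tpr -ac hbac.xvg
```

The forward rate is the inverse of the H-bond lifetime from the fit, so the stdout table and the ACF in hbac.xvg should be interpreted together.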