Dear Gromacs Users,
We're finally buying some Intel E52650 servers + NVIDIA GTX980 cards.
However, some of the servers come with only PCI-e 3.0 x8 slots and others
with x16 slots.
Do you think this is relevant for GROMACS performance? And if so, how
relevant?
Thanks in advance.
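For a rough sense of scale (a back-of-the-envelope sketch, not a measurement: the ~8 GB/s and ~16 GB/s effective bandwidths and the 100,000-atom system size are illustrative assumptions), one can estimate the per-step host-device transfer time that the link width affects:

```shell
# Estimate per-step PCIe transfer time for N atoms: coordinates up,
# forces back, 3 single-precision floats (4 bytes) each way per atom.
# Bandwidths are rough effective figures for PCIe 3.0 x8 vs x16
# (assumed, not measured on real hardware).
awk 'BEGIN {
    atoms = 100000
    bytes = atoms * 3 * 4 * 2          # x, y, z up + forces down
    for (bw = 8; bw <= 16; bw += 8)    # GB/s: x8 then x16
        printf "x%d: %.1f microseconds/step\n", bw, bytes / (bw * 1e9) * 1e6
}'
```

In practice GROMACS overlaps these transfers with computation, so the real-world difference is usually smaller than the raw numbers suggest.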
Hey Mirco,
Is your 1-3% claim based on the webpage you linked?
Is it reliable to compare GPU performance for GROMACS with that of 3D
video games?
Thanks!
2015-06-11 13:21 GMT+02:00 Mirco Wahab mirco.wa...@chemie.tu-freiberg.de:
On 11.06.2015 13:08, David McGiven wrote:
We're finally
Dear All,
I would like to statically compile GROMACS 5 on an Intel Xeon X3430 machine
with gcc 4.7 (the cluster front node) BUT run it on an Intel Xeon E5-2650V2
machine (the cluster compute node).
Would that be possible? And if so, how should I do it?
I haven't found it on the Installation_Instructions page.
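A minimal sketch of how such a build might be configured (assuming GROMACS 5.0's CMake options; the E5-2650V2 is Ivy Bridge, so AVX_256 SIMD, which the Nehalem-class X3430 front node can compile for but not execute):

```shell
# Configure a static GROMACS build on the front node, targeting the
# compute node's CPU. The resulting mdrun binary will not run on the
# X3430 front node itself, only on the AVX-capable compute nodes.
cmake .. \
  -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
  -DBUILD_SHARED_LIBS=OFF \
  -DGMX_PREFER_STATIC_LIBS=ON \
  -DGMX_SIMD=AVX_256
make && make install
```

Compiling for AVX does not require the build host to support AVX, so building on the front node is fine as long as you only run the binaries on the compute nodes.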
cores in use to see how GROMACS behaves with these processors
(unless someone has done these tests and can confirm that GROMACS has no
issues with 16- or 18-core CPUs).
Harry
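If no such numbers turn up, a simple core-count scan on one node would show the scaling directly; a sketch, assuming a prepared run input named bench.tpr (a placeholder name) and a 4.6/5.0-era mdrun:

```shell
# Benchmark GROMACS at increasing thread counts on a single node.
# -resethway restarts the timers halfway through the run so startup
# and load-balancing cost is excluded from the reported performance.
for n in 4 8 12 16; do
    mdrun -nt "$n" -s bench.tpr -deffnm scale_"$n" -nsteps 5000 -resethway
done
grep -H "Performance:" scale_*.log
```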
On Feb 24, 2015, at 1:32 PM, David McGiven wrote:
Hi Szilard,
Thank you very much for your great advice.
2015-02-20 19:03 GMT+01:00 Szilárd Páll pall.szil...@gmail.com:
On Fri, Feb 20, 2015 at 2:17 PM, David McGiven davidmcgiv...@gmail.com
wrote:
Dear Gromacs users and developers,
We are thinking about buying a new cluster of ten
2015-02-24 15:46 GMT+01:00 Szilárd Páll pall.szil...@gmail.com:
Perhaps he has seen some real results that do not show issues at 16 or
18 cores/socket, in which case they would be advantageous, if one can
afford them. I am only going on what the manager of our cluster mentioned
to me in
Hi Carsten,
Sorry I just saw your message today. Thank you very much for the details.
Cheers
2015-02-02 14:11 GMT+01:00 Carsten Kutzner ckut...@gwdg.de:
Hi David,
On 22 Jan 2015, at 18:01, David McGiven davidmcgiv...@gmail.com wrote:
Hey Karsten,
Just another question. What do you
Dear Gromacs users and developers,
We are thinking about buying a new cluster of ten or twelve 1U/2U machines
with 2 Intel Xeon CPUs of 8-12 cores each, some of the 2600v2 or v3 series.
The details aren't clear yet; we'll see.
I've been told on this list that NVIDIA GTX offer the best
an intelligent cluster purchase.
Thanks again.
Best,
D
2015-01-16 14:46 GMT+01:00 Carsten Kutzner ckut...@gwdg.de:
Hi David,
On 16 Jan 2015, at 12:28, David McGiven davidmcgiv...@gmail.com wrote:
Hi Carsten,
Thanks for your answer.
2015-01-16 11:11 GMT+01:00 Carsten Kutzner ckut
Sorry, where it says "between two gromacs runs" I should have said three
gromacs runs: one for each combination of CPU/GPU.
2015-01-22 18:01 GMT+01:00 David McGiven davidmcgiv...@gmail.com:
Hey Karsten,
Just another question. What do you think will be the performance
difference between two
Thank you very much Karsten.
Hi David
Regards,
D
Best,
Carsten
On 15 Jan 2015, at 17:35, David McGiven davidmcgiv...@gmail.com wrote:
Dear Gromacs Users,
We’ve got some funding to build a new cluster. It’s going to be used mainly
for gromacs simulations (80% of the time). We run molecular dynamics
simulations of transmembrane proteins inside a POPC lipid bilayer. In a
typical system we have ~10 atoms, of which almost 1/3
, at 15:39, David McGiven davidmcgiv...@gmail.com
wrote:
What is even more strange is that I tried with 10 pme nodes (mdrun -ntmpi
48 -v -c TEST_md.gro -npme 16), got a 15.8% performance loss, and ns/day is
very similar: 33 ns/day
D.
2014-09-05 14
Dear Gromacs users,
I just compiled gromacs 5.0 with the same compiler (gcc 4.7.2), same OS
(RHEL 6), same configuration options, and basically everything else the
same as my previous gromacs 4.6.5 compilation, and when running one of our
typical simulations I get worse performance.
4.6.5 does 45 ns/day
5.0
lines you used to invoke mdrun as well as the
log files of the runs you are comparing.
Cheers,
--
Szilárd
On Fri, Sep 5, 2014 at 12:10 PM, David McGiven davidmcgiv...@gmail.com
wrote:
Dear Gromacs users,
I just compiled gromacs 5.0 with the same compiler (gcc 4.7.2), same OS
(RHEL 6
and performance
measurements. The list does not accept attachments, please upload it
somewhere (dropbox, pastebin, etc.) and post a link.
Cheers,
--
Szilárd
On Fri, Sep 5, 2014 at 12:37 PM, David McGiven davidmcgiv...@gmail.com
wrote:
Command line in both cases is:
1st : grompp -f grompp.mdp
setting the -npme flag as 12.
Regards,
Abhishek Acharya
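Rather than picking an -npme value by hand, the tuning tool shipped with GROMACS can scan PME rank counts automatically; a sketch, assuming the 4.6/5.0-era tool name g_tune_pme and a run input topol.tpr (a placeholder name):

```shell
# Launch short benchmark runs with varying numbers of PME-only ranks
# and report which -npme setting gives the best ns/day for 48 ranks.
g_tune_pme -np 48 -s topol.tpr -steps 2000
```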
On Fri, Sep 5, 2014 at 4:43 PM, David McGiven davidmcgiv...@gmail.com
wrote:
Thanks Szilard, here it goes:
4.6.5 : http://pastebin.com/nqBn3FKs
5.0 : http://pastebin.com/kR4ntHtK
2014-09-05 12:47 GMT+02:00 Szilárd Páll
What is even more strange is that I tried with 10 pme nodes (mdrun -ntmpi
48 -v -c TEST_md.gro -npme 16), got a 15.8% performance loss, and ns/day is
very similar: 33 ns/day
D.
2014-09-05 14:54 GMT+02:00 David McGiven davidmcgiv...@gmail.com:
Hi Abhi,
Yes I noticed that imbalance but I