Hi,
We will have AVX2 acceleration ready for general usage before the end of July
(together with some other goodies), and it will be markedly faster, but until
it's ready and tested to give correct results we can't say anything about the
final performance.
However, in general AVX2 will have th
nt to ask!
Cheers,
Erik
On Mar 20, 2013, at 5:14 PM, Erik Lindahl wrote:
> Hi,
>
> On April 4 (9-10am, US pacific time), Nvidia is helping us organize a web
> seminar about Gromacs-4.6 and all the new GPU capabilities. While I would
> expect their representative to spend a cou
Hi,
On April 4 (9-10am, US pacific time), Nvidia is helping us organize a web
seminar about Gromacs-4.6 and all the new GPU capabilities. While I would
expect their representative to spend a couple of minutes talking about
hardware, they will invite interested users to free remote testdrives of
the MX record to a new domain and we cannot count on the old machine to
forward things.
Cheers,
Erik
--
Erik Lindahl
Professor of Theoretical & Computational Biophysics, KTH
Professor of Computational Structural Biology, Stockholm University
Tel (KTH): +46 8 55378029 Tel (SU): +46 8 164675
Cel
Computational Physics, University Stuttgart
* Emmanuel Birru, Monash University, Parkville Campus
Big congratulations to both winners from the GROMACS & NVIDIA teams, and let me
convey a *very* big and special thank-you to Mark Berger for making this
possible!
All the best,
Erik
--
Erik Lin
elp!
Erik
On Mar 27, 2012, at 5:45 AM, Erik Lindahl wrote:
> Hi!
>
> Big thanks to those of you who already participated in this survey - we got
> about 100 *VERY* useful answers this far, but considering the size of the
> community I think we should be able to improve that five
, so
it is very important for us (and you) to get as representative results as
possible!
If you haven't already, it would be great if you could take a couple of minutes
and participate at
https://www.surveymonkey.com/s/YD9ZMJK
Cheers,
Erik
On Mar 3, 2012, at 1:39 AM, Erik Lindahl
This could be a great way to kickstart your GPU simulation usage!
Nvidia sweepstake rules:
http://www.nvidia.com/object/sweepstakes-official-rules.html
All the best,
Erik Lindahl & The GROMACS Development team
--
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman
rcomputing Center, Oak Ridge National Labs, and RIKEN in Kobe
where we have access to some of the world’s fastest supercomputers.
Submit a PDF application with a CV, description of interests, and a list of
publications for postdoctoral positions to Erik Lindahl or Berk
Hess ; we are also happy t
Hi,
We're in the process of transferring the domain name to a more convenient
registrar, and this will also require us to change DNS servers. Thus, you might
experience short interruptions in name resolutions for *.gromacs.org next week!
Cheers,
Erik
--
that).
The upside of all this is that the file organization will gradually get easier
to understand, and the code itself should also get much more modular and
readable (which we hope will lead to a faster release schedule and more
features :-)
Happy new year!
Erik
--
Erik Lindahl
Profess
Hi,
We'll be moving git.gromacs.org to a new server room in a different building
tomorrow, so it might be unavailable for a short while tomorrow afternoon
European time.
Sorry for the inconvenience!
Cheers,
Erik
Sent from my iPhone
//www.nvidia.com/MD_Test_Drive, and Roy has also said you can contact
him directly at r...@nvidia.com
Cheers,
Erik
------
Erik Lindahl
Professor, Computational Structural Biology
Center for Biomembrane Research & Swedish e-Science Research
ing!
> Please don't post (un)subscribe requests to the list. Use the www interface or
> send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
>
--
Erik Lindahl
Professor, Com
r errors in functional forms.
In fact, he did find one minor program error related to ordering of improper
torsions in TPR during his work, but that turned out to be in Amber ;-) Suffice
to say, I trust his work a _lot_.
Cheers,
Erik
------
grammer/staff), and what your plans look like
time-wise! I'll be happy to tell you more about the work.
Cheers,
Erik
------
Erik Lindahl
Professor, Computational Structural Biology
Center for Biomembrane Research & Swedis
--
Erik Lindahl
Professor, Computational Structural Biology
Center for Biomembrane Research & Swedish e-Science Research Cente
rik
On Dec 6, 2009, at 8:10 PM, Erik Lindahl wrote:
> Hi,
>
> We've just put a maintenance release on the site (find it on www.gromacs.org
> or directly on ftp.gromacs.org).
>
> Basically, this includes a number of minor fixes we've incorporated from
> Bugzilla th
Hi,
We've just put a maintenance release on the site (find it on www.gromacs.org or
directly on ftp.gromacs.org).
Basically, this includes a number of minor fixes we've incorporated from
Bugzilla the last few months. In particular, the append issues with files >2GB
have been fixed, and 32/64 b
Hi,
On Sep 3, 2009, at 4:52 AM, Daniel Adriano Silva M wrote:
Dear Gromacs users, (all related to GROMACS ver 4.0.x)
I am facing a very strange problem on a recently acquired supermicro 8
XEON-cores nodes (2.5GHz quad-core/node, 4G/RAM with the four memory
channels activated, XEON E5420, 20Gbs
4.9%
Any ideas why I am seeing this?
Here is the initial mdrun printed input info:
:-) G R O M A C S (-:
Groningen Machine for Chemical Simulation
:-) VERSION 4.0.5 (-:
Written by David van der Spoel, Erik Lindahl,
Hi,
There have been some reports on newer ia64 processors being quite fast
with the Fortran kernels instead (even faster than asm!), so I would
try that.
This has to do with the brain-dead architecture on ia64. The asm
kernels were written for original itanium2 timings, but with the
reg
Hi,
This is just a maintenance release - with fixes for a bunch of minor
things that have been reported on bugzilla. Find it in the usual place:
ftp://ftp.gromacs.org:/pub/gromacs/gromacs-4.0.4.tar.gz
Cheers,
Erik
PS: We haven't forgotten those of you that like binaries, but we're
rework
Hi everybody,
We've just released a maintenance version 4.0.3 with a bunch of
bugfixes and minor enhancements - all issues in bugzilla that had been
settled to be bugs have been fixed.
You can find it in the usual place:
ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.0.3.tar.gz
Some of the f
Hi,
I think we've fixed all minor issues with Gromacs-4.0 now; please
download the new release from
ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.0.2.tar.gz
Now, a friend of order might ask what happened to 4.0.1 that appeared
on the ftp site for a couple of hours on friday?
Unfortunately
Hi,
As expected there was one or two minor issues with 4.0; I'm planning
to create a 4.0.1 release over the weekend or early next week, after
which we'll also do binary packages.
So, you have one or two days left to report 4.0 hickups to bugzilla
and have them fixed rapidly!
Cheers,
Er
Hi,
On Oct 10, 2008, at 12:46 PM, Justin A. Lemkul wrote:
I want to run MD over a part of my molecule, for a few residues only
(not the whole molecule).
Can I do it using GROMACS ?
I searched the online documentation and mailing list, but was
unable to find the appropriate information.
If somebo
Hi,
In Gromacs 3.3 and earlier that depended on whether you were writing
velocities & coordinates to the full precision trajectory - we can
only restart from a frame where we have that.
Starting with Gromacs 4, runs are automatically checkpointed every 15
minutes (by default). Then you ca
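For the archive: a restart from one of these automatic checkpoints would look roughly like this (the -cpi flag is from GROMACS 4-era mdrun; treat this as a sketch and check mdrun -h for your version):

```shell
# Continue an interrupted run from the most recent checkpoint file.
# topol.tpr and state.cpt are the usual default names; adjust as needed.
mdrun -s topol.tpr -cpi state.cpt
```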
And if you hold on for 3-4 days we will have a binary disk image
package for Gromacs 4.0 available for macs (there is already one for
Gromacs 3.3.x)
Cheers,
Erik
On Oct 10, 2008, at 12:42 PM, Justin A. Lemkul wrote:
Kwee Hong wrote:
Hi.
I'm very new to iMAC as I've just started to use
So,
First the bad news: from now on we're probably not going to care too
much about bugs in gromacs-3.3.
The good news is that Gromacs 4.0 is finally & officially _released_.
You can download the source code package at
ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.0.tar.gz
We will put docum
Hi,
Not 4.0, but it's in the pipeline for 4.1.
The issue is not doing it, but doing it clearly faster than explicit
water MD, since water interactions are extremely optimized in Gromacs!
Cheers,
Erik
On Oct 9, 2008, at 2:38 PM, Arthur Roberts wrote:
Hi, all,
I know this question might s
n PhD students.
To apply, please submit a CV, possibly list of publications, and a
brief letter (1 page) describing your background and interest to Erik
Lindahl ([EMAIL PROTECTED]). Evaluation of applications will begin
november 1 and continue until the position has been filled. The
tentative sta
Hi,
Great. I'll just revert to make static libraries the default for 4.0
too.
Cheers,
Erik
On Oct 9, 2008, at 7:55 AM, Justin A. Lemkul wrote:
Erik Lindahl wrote:
On Oct 9, 2008, at 5:24 AM, Justin A. Lemkul wrote:
To follow up just a bit more - it appears that this probl
On Oct 9, 2008, at 5:24 AM, Justin A. Lemkul wrote:
To follow up just a bit more - it appears that this problem is
isolated to the PowerPC architecture. I can compile and install RC4
on my own laptop (a recent Intel Mac) with no problems.
I think it's the shared libraries in combination
Hi Justin,
This might be due to shared libraries, which I thought always worked
on OS X :-)
Could you try with the option --disable-shared?
I guess the world still might not be ready for default shared
libraries... in that case I'll disable it.
Cheers,
Erik
On Oct 9, 2008, at 4:36 AM,
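The workaround being suggested here is the standard autoconf switch; as a sketch:

```shell
# Build static libraries only, sidestepping the shared-library problem on OS X
./configure --disable-shared
make
```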
Hi,
I think we have fixed all issues with the release candidates, and I
have just put the rc4 version at
ftp://ftp.gromacs.org/pub/beta/gromacs-4.0_rc4.tar.gz
Please test this to check whether there are any showstopper bugs :-)
Although I'm sure there are still some minor issues, our curre
Hi,
On Oct 6, 2008, at 6:46 AM, Himanshu Khandelia wrote:
We are buying a new cluster with 8-core nodes and infiniband, and have a
choice between 10 Gbit/s and 20 Gbit/s transfer rates between nodes. I do
not immediately see the need for 20 Gbit/s between nodes, but thought it
might be wor
Yes.
Cheers,
Erik
On Sep 29, 2008, at 3:18 PM, himanshu khandelia wrote:
Is there an implementation in gromacs for using a uniform
neutralizing plasma with PME, to avoid use of counterions?
Thank you
-Himanshu
Hi,
Please search the list - all this stuff will be available in Gromacs
once we have neighborlists and PME working, but that's non-trivial
work ;-)
Cheers,
Erik
On Sep 30, 2008, at 10:17 AM, Jose Duarte wrote:
Hi Tiago
I think this is precisely what the [EMAIL PROTECTED] people are do
Hi,
We've corrected some minor things, and one major: there was a pointer
not always being updated in the PME loop (new optimization stuff we
introduced very late, so CVS versions have been fine until 20080918)
which could lead to force errors.
So, please point your browsers/ftp-clients t
ou in advance.
--- On Sun, 9/21/08, Erik Lindahl <[EMAIL PROTECTED]> wrote:
From: Erik Lindahl <[EMAIL PROTECTED]>
Subject: [gmx-users] Announcing: Gromacs 4.0, release candidate 1
To: "Discussion list for GROMACS users"
Date: Sunday, September 21, 2008, 11:46 PM
Stockholm, Se
ort bugs to us. Don't complain
if your trajectories are eaten after you've moved them in the middle
of a simulation, though ;-)
Cheers,
Erik
On Sep 22, 2008, at 8:46 AM, Erik Lindahl wrote:
Stockholm, September 22 2008
In a bold move today, the Gromacs developers finally decided not
Stockholm, September 22 2008
In a bold move today, the Gromacs developers finally decided not to
wait for Duke Nukem Forever before releasing Gromacs 4.0, and just put
out release candidate 1 together with a new manual at
ftp://ftp.gromacs.org:/pub/beta/
"We realize it could be a big disa
Hi Roland,
We have it working with pdb2gmx, but not CMAP yet, but we're working on
integrating that. I'll see what I can do about pushing things into CVS when
I'm back from vacation in two weeks!
Cheers,
Erik
On Thu, Jul 17, 2008 at 6:57 PM, Roland Schulz <[EMAIL PROTECTED]> wrote:
> Hi all,
>
Hi,
Skip the final slash (my bad):
http://wiki.gromacs.org/index.php/Stanford_2008_Workhop
Or just click your way from the main page.
Cheers,
Erik
On Apr 25, 2008, at 11:54 AM, Mu Yuguang (Dr) wrote:
Sorry Eric,
I cannot see anything.
Regards
Yuguang
Hi,
I'm still waiting for a couple of speakers, but I've finally put up
PDF copies of the slides from workshop talks. Have a look at
http://wiki.gromacs.org/index.php/Stanford_2008_Workhop/
Cheers,
Erik
Erik Lindahl <[EMA
Hi,
One option would simply be to remove make_edi from Makefile.am.
Unfortunately this tool has proven to be a bit buggy in the compile
stage, and you won't need it for any normal simulations.
The other fix would be to create separate static variables to use in
the list for reading option
-term staff scientist (5
years) or even assistant professorship appointments.
Feel free to drop me a line if you're a finishing PhD student or
postdoc who might be interested, and I'd be happy if you want to
spread the word to your local students!
Cheers
e" to simulation data
would be a factor ~3 higher in the latter case.
Cheers,
Erik
Erik Lindahl <[EMAIL PROTECTED]> Backup: <[EMAIL PROTECTED]>
Assistant Professor, Computational Structural Biology
Center for Biomembrane Research, Dept. Biochemistry & Bioph
Hi,
Oops - I happened to only post this to the gmx-developers list first:
I have put up a somewhat primitive information & registration page
about the Stanford workshop at
http://www.gromacs.org/stanford2008/
As I mentioned on friday, the space is somewhat limited (30-35
participants),
there is any, about the workshop that
will be held in Göttingen? I am outside of the countries you
mentioned :)
-Original Message-
From: Erik Lindahl <[EMAIL PROTECTED]>
To: gmx-users@gromacs.org
Date: Fri, 15 Feb 2008 20:13:19 +0100
Subject: [gmx-users] Stanford workshop april 7
Göttingen soon :-)
From wednesday any possible remaining slots will be filled on a first-
come, first-served, basis.
Cheers,
Erik
--------
Erik Lindahl <[EMAIL PROTECTED]> Backup: <[EMAIL PROTECTED]>
Assistant Professor, Computational Structural Biology
Center for
Hi Warner,
I think this is a bug in the assembler/linker shipped with Leopard
since the instructions are perfectly valid.
In any case, I've worked around it in the release branch of CVS.
Cheers,
Erik
On Feb 7, 2008, at 8:52 PM, Warner Yuen wrote:
My apologies. I should have known better, b
Hi,
We've thought of doing an advanced simulation/modeling workshop at
Stanford for a while, and finally settled on a date, although with a
bit short notice :-)
In any case, we will do a semi-advanced workshop at Stanford
University in the US on April 7 & 8 with room for 30-35 people. The
Hi David,
On Dec 9, 2007, at 3:50 AM, David Mobley wrote:
All,
This may be more of an OPLS question than a GROMACS question, but I am
trying to determine where the OPLS parameters for alkynes (opls_925
through opls_932 in ffoplsaa.atp of the gromacs distribution) come
from, and especially the
On Nov 13, 2007, at 11:56 AM, Henry O Ify wrote:
please stop mailing me, i am tired of all this mail
First, it was you and not we who subscribed you to the list :-)
Second, you're more than welcome to unsubscribe. However, you would
probably have had better luck if you had read the footer
Hi,
On Nov 7, 2007, at 5:40 PM, maria goranovic wrote:
Thanks for the help, David. Actually, I just realized I was trying to
decide based on mailing list archives which, in some cases, are 5
years old. My mistake. I will use the 43a2 field for my protein.
Is there any standard procedure to com
Hi,
Yes, you should. As I think we write somewhere in the manual or mdp
file documentation, for Berendsen coupling the time constant refers
to the exponential relaxation time, while for N-H or P-R they
correspond to the period of the fluctuations between the system and
reservoir. Typicall
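As an mdp-level illustration of that distinction (the tau_t values here are purely illustrative, not recommendations):

```
; Berendsen: tau_t is an exponential relaxation time
tcoupl = berendsen
tau_t  = 0.1        ; ps

; Nose-Hoover: tau_t is the period of the temperature fluctuations,
; so it is normally chosen correspondingly larger
; tcoupl = nose-hoover
; tau_t  = 0.5      ; ps
```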
MPICC is the environment variable (upper case is important). It should
be set to the name of your MPI-enabled C compiler.
Cheers,
Erik
On Oct 18, 2007, at 4:40 AM, liu xin wrote:
Hi Erick
you mean export MPICC=mpcc? ok, I will try that
On 10/18/07, Erik Lindahl <[EMAIL PROTECTED]>
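Spelled out, the suggested configure sequence is roughly as follows (the install prefix is simply the one used elsewhere in this thread):

```shell
export MPICC=mpicc           # the upper-case variable name is what configure reads
./configure --enable-mpi --prefix=/hpc/gromacsmpi
make && make install
```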
Hi,
On Oct 17, 2007, at 7:13 PM, liu xin wrote:
Thanks for your quick comment David
but if I tried
./configure --enable-mpi --prefix=/hpc/gromacsmpi
it will complain about cant find MPI compiler, but I've already
export mpcc=mpicc
Try setting MPICC instead :-)
Cheers,
Erik
the opportunity to invite everybody else to
post your gromacs- ,or at least MD-, related job ads on the list.
Most of us have been in the job-hunting stage even if it was a while
ago!
Cheers,
Erik Lindahl
] on behalf of Erik Lindahl
Sent: Mon 10/15/2007 19:08
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] errores during compilation of GROMACS 3.3.2
Hi,
Could you try without optimization flags? Go to src/gmxlib/nonbonded/
nb_kernel_ia64_double and type
make
blems with the intel one as well.
Best,
Itamar
-Original Message-
From: [EMAIL PROTECTED] on behalf of Erik Lindahl
Sent: Mon 10/15/2007 17:02
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] errores during compilation of GROMACS 3.3.2
Hi Itamar,
Not immediately. Did this
Hi Itamar,
Not immediately. Did this work fine with 3.3.1? As far as I know we
haven't changed anything in the ia64 assembly routines.
Cheers,
Erik
On Oct 15, 2007, at 2:16 AM, Itamar Kass wrote:
Dear all,
I am trying to compile GROMACS 3.3.2 on Altix 3700 BX2 (Itanium2
1.6GHz). I am d
Hi,
On Oct 13, 2007, at 6:12 PM, Yang Ye wrote:
Program like VMD can draw extra bond. You need to get its manual
and code a line or two in TCL.
Regards,
Yang Ye
On 10/14/2007 11:38 PM, van Bemmelen wrote:
Hi Ozge,
I'm not sure what you mean by "visualization". But what about
using VMD
f
Hi,
On Oct 13, 2007, at 6:04 PM, Sagittarius wrote:
I already have some kind of cygwin.
Could you please provide me with more details on
"install cygwin with some kind of X server"
Thank you in advance
Actually, if you don't absolutely need ngmx you can just disable the
X11-dependent parts
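With the autoconf build of that era, skipping the X11-dependent parts (i.e. ngmx) would presumably be done with the standard switch; this is an assumption, so verify against ./configure --help:

```shell
./configure --without-x
```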
Hi,
On Oct 12, 2007, at 7:54 PM, Nickle Fan wrote:
Dear gmx-users:
I am working on incorporating a customized non-bonded interaction
into my simulation. I am trying to implementing it through the
tabulated interaction potential.
The manual says that the potential should be tabulated up t
Hi,
On Oct 9, 2007, at 6:21 PM, Triguero, Luciano O wrote:
Hi,
I am trying to compile version 3.3.2 in the parallel mode. I use
the following options to configure:
./configure --prefix=dir --disable-float -enable-mpi
and receive the following error message:
checking size of int... configu
Hi,
On Oct 5, 2007, at 10:04 PM, David van der Spoel wrote:
vijaya subramanian wrote:
Hi
I have been running GROMACS jobs on a Cray XT3 MPP machine. I
get limited time, 12 hr, for any given run and find that the
.edr file and .log files do not get written to continuously. This
is ok if
Hi Jones,
On Sep 17, 2007, at 6:25 PM, Jones de Andrade wrote:
Still under NDA? Haven't it been lift last sunday?
Well, I guess it's because this was pre-release hardware, so it
didn't have any expiry date :-)
Anyway: I don't know exactly the terms of the NDA, so if you can't
answer me
Hi,
Yes, the SPEC numbers aren't representative for typical gromacs
performance.
Technically I'm still on NDA for the benchmarks I ran on Barcelona so
I can't share the exact numbers with you, but on average the
performance will be pretty much the same _per_clock_ as Intel
Clovertown. A
Hi,
Well, I have a much simpler solution - there is no reason whatsoever
to use fortran on ia32 or x86-64 since Gromacs will always use the
much faster SSE assembly loops.
Cheers,
Erik
On Sep 13, 2007, at 8:44 AM, Diego Enry wrote:
This works for me:
% vim /etc/ld.conf
/opt/intel/mkl
Hi,
On Aug 31, 2007, at 11:02 PM, Hwankyu Lee wrote:
Thank you very much for your suggestion in the gromacs forum, but I
may not understand your explanation completely. In the small
bilayer, there is no fluctuation, so area/lipid can be calculated
based on the cell dimensions. But, in
Hi,
In principle you can calculate this from equations e.g. in Safran's
book "Statistical Thermodynamics of Surfaces, Interfaces, and
Membranes".
However, when we worked with this a few years ago we ended up in the
conclusion that for the properties we were interested in, the
"effective
, Ansgar Esztermann wrote:
On Fri, Aug 24, 2007 at 09:56:32AM +0200, Erik Lindahl wrote:
Hi,
Quad-core is the way to go. I recently benchmarked the scaling of the
CVS version, and with 8 independent jobs we get 85-97% throughput
scaling, depending on the type of simulation. And you get
Hi,
Quad-core is the way to go. I recently benchmarked the scaling of the
CVS version, and with 8 independent jobs we get 85-97% throughput
scaling, depending on the type of simulation. And you get essentially
the same performance if you run two jobs using 4 cores each.
Even for a single
Hi,
We've supported x86-64 since 2002 or so. There are already handcoded
x86-64 kernels in the corresponding x86_64_sse directory.
This is just a matter of your system not being recognized correctly
as x86-64, but ia32 for some reason. First make sure that you are
really setting -m64 in C
Hi,
On Aug 21, 2007, at 8:08 AM, Nicolas Schmidt wrote:
Would't this mean, that molecules could suddenly appear and lead to
a discontinuity in the LJ-potential?
Or in what way should I modify the r_vdw according to what rlist? I
wanna end with a cut-off of the LJ-potentail around 1.75nm.
And the reason for this is that nobody has really used Lagrangian-interpolation PME since smooth PME appeared in 1995 :-)
Cheers,
Erik
On Aug 21, 2007, at 4:47 AM, Mark Abraham wrote:
Jones de Andrade wrote:
Hi all.
Well, I have some kind of a "didatic" doubt: Ok, from what I can
underst
Hi Nicholas,
nstlist=1 means you recalculate the neighborlist every single step,
which will be quite expensive.
Depending on the type of system and temperature you are simulating
you probably want to start somewhere around nstlist=10.
Cheers,
Erik
On Aug 20, 2007, at 10:26 PM, Nicolas
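In mdp terms the suggestion above is a one-line change (illustrative):

```
nstlist = 10    ; update the neighbor list every 10 steps instead of every step
```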
Hi,
Please post questions like this to the mailing list, so the answers
make it to the archive.
For QM/MM, have a look at
http://www.mpibpc.mpg.de/groups/grubmueller/start/people/ggroenh/
qmmm.html
Unfortunately we are not even allowed to distribute a "diff" file
with modifications to G
Hi Jeroen,
Just set the charges to zero - then the program will automatically
detect it and use LJ-only loops everywhere.
Cheers,
Erik
On Aug 15, 2007, at 8:12 PM, van Bemmelen wrote:
Hi all,
As a side note: although I also have some trouble figuring out what
Nicolas Schmidt is exactly t
Hi,
On Aug 10, 2007, at 4:37 PM, Florian Haberl wrote:
option if you're doing really short (minutes ~ hours)
simulations.
The configuration:
* Two 3.0GHz Quad-Core Intel Xeon
* 8GB (4 x 2GB)
* 500GB 7200-rpm Serial ATA 3Gb/s
RAM size shouldn't be an issue, except if you're simulating huge
Hi,
As long as you have the exact date you checked it out from CVS it is
possible to obtain the same version again.
However, bear in mind that CVS _does_ break periodically, and we
don't check it like the released version. Before I do anything
production-related with CVS I always check th
t??? So I could continue the job in the new machine???
Best Regards,
Anthony
On Tuesday 07 August 2007 5:25 pm, Erik Lindahl wrote:
Hi Anthony,
As long as the version you're continuing with is the same or more
recent than the one you started with it should work fine; all gromacs
output files ar
Hi Anthony,
As long as the version you're continuing with is the same or more
recent than the one you started with it should work fine; all gromacs
output files are stored in portable formats and are can be read by
newer versions.
You are not guaranteed _binary_ identical results, though
Hi,
In principle Gromacs should never just crash with a segmentation
fault, but at least give you a (perhaps cryptic) error message and
exit somewhat gracefully.
As far as I know there is only one exception to this: If you are
using tabulated interactions the table can only be of finite s
Consult the documentation for your MPI library and queue system. This
is not a Gromacs error message - mpirun is complaining that there
aren't enough processors to start 4 jobs.
Cheers,
Erik
On Aug 6, 2007, at 1:14 PM, Anupam Nath Jha wrote:
i have tried that also but still same error i
The -np flag should be specified to the mpirun command (gromacs
determines it automatically).
It could also be a problem with your MPI library or queue system - I
have no idea why it is trying to start 3 processes...
Cheers,
Erik
On Aug 6, 2007, at 12:46 PM, Anupam Nath Jha wrote:
Dear
Hi,
I agree with you that the GbE sharing between 4 cores degrades the
performance. Fortunately, every cluster node has two GbE ports. I
want to know, can I configure lamd in such a manner that every
processor
on every node (with two cores) uses one of these ports for its
communication purpo
Hi,
I read at Gmx site that the DPPC system
composed of 121,856 atoms. I saw the gmx topology files, it
seems that Gmx makes data decomposition on input data to run in
parallel
(in our simulation case using "-np 12" for execution
on 3 nodes, the data space for every process is about 10156 ato
Hi,
On Jul 22, 2007, at 6:08 PM, Kazem Jahanbakhsh wrote:
mpirun -np 8 mdrun_d -v -deffnm grompp
First, when you run in double precision you will communicate exactly
twice as much data. Since gigabit ethernet is usually both latency
and bandwidth-limiting, you might get better scaling (a
I put a recent copy (friday) at
http://lindahl.sbc.su.se/outgoing/aatto/
Cheers,
Erik
And yes... we should get a nightly-snapshot system running... :-)
On Jul 22, 2007, at 4:08 PM, Kumar V wrote:
Hai all,
Our campus network is behind firewall because of which I
cant use CVS pse
Hi Jian,
The cutoff is based on the center of geometry of the "charge group"
definitions in your topology. Since we typically use PME nowadays they
don't have to be neutral, so "neighborsearching group" would probably
be a better name.
If each atom is a separate charge group the cutoff is
Hi,
On Jul 19, 2007, at 10:15 AM, Edgar Luttmann wrote:
I can't find any usage of the GB parameters as read from the mdp
file - neither in the latest release nor the HEAD revision in the
CVS. Is there another branch in the CVS you got that code in?
No, at least not publicly available atm.
Hi,
On Jul 19, 2007, at 12:52 AM, Ibrahim M. Moustafa wrote:
Hi Jhon,
I have the same issue with GROMACS on Mac PPC 10.4.
The program executes only with the full path specified. I posted a
similar question
before, you can check the archive, but got no solution for the
problem.
What I di
Although I realize it is the holiday season, at least in Europe, I
would like to once more recommend the development wiki for
documenting this kind of discussion, and for publishing
specifications for algorithms, like (totally unrelated, but came up
earlier today):
http://wiki.gromacs.
Hi Jim,
That system should scale quite well with a fast network (infiniband)
since you don't seem to be using PME, and the CVS 3.99 version is
even better.
However, as David already mentioned, the gigabit network is likely
limiting you. If you test the CVS code there are timing reports th