git checkout --track -b release-4-0-patches origin/release-4-0-patches
Carsten
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
Hi Vijaya,
what version of Gromacs is this and how big do the trr files
have to be so that the segv shows up?
Carsten
On Apr 22, 2010, at 6:56 PM, vijaya subramanian wrote:
Hi,
when I run make_edi with a small eigenvec.trr file it works, but it gives
me a segmentation fault when I input a larger one.
On Apr 16, 2010, at 1:40 AM, Shuangxing Dai wrote:
I am not running in parallel. Right now I just changed the lincs order from
12 to 4. It is still slow. When I change to shift instead of Ewald, it
finished 1 steps in 10 mins. The paper is:
J Comput Chem. 2005 Dec;26(16):1701-18.
GROMACS: Fast, Flexible, and Free.

               (Mnbf/s)   (GFlops)   (ns/day)   (hour/ns)
Performance:    398.725     22.539     22.158       1.083
Finished mdrun on node 0 Mon Feb 15 22:54:31 2010
On Mon, Feb 15, 2010 at 5:36 PM, Carsten Kutzner ckut...@gwdg.de wrote:
Hi,
18 seconds real time is a bit short for such a test. You should run
for at least several minutes. The performance you can expect depends
a lot on the interconnect you are using. You will definitely need a
really low-latency interconnect if you have fewer than 1000 atoms
per core.
Carsten
Hi Jochen,
it should work by putting it in the LDFLAGS. Either you then get an
executable that prints something like
Electric Fence 2.2.0
at the very start of execution, or the build fails to link when
the library is not found.
Carsten
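For reference, a minimal sketch of such a build (this assumes Electric Fence
installs a libefence the linker can find; adjust paths otherwise):

export LDFLAGS="-lefence"
./configure
make mdrun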
On Jan 27, 2010, at 5:32 PM, Jochen Hub wrote:
On Jan 26, 2010, at 11:48 AM, Carla Jamous wrote:
Hi everyone,
I'm having a problem with mdrun. If I type:
mdrun -v -s test.tpr -o test.trr -c test.pdb -x test.xtc -e test.edr -g test.log
I never get the .xtc file. Can anyone tell me why, and what I can do to
get an .xtc file?
Hi,
you should set one pull group, not 700. The number of atoms in your
pull group is 700. Freezing the pull group in the x and y directions probably
does what you want. Please also consider upgrading to 4.0.7,
which is the most recent stable version.
Best,
Carsten
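For illustration, a sketch of the corresponding .mdp lines (GROMACS 4.0 pull
syntax; all group names and numbers here are made-up examples, not taken from
the original poster's system):

pull            = umbrella
pull_geometry   = distance
pull_ngroups    = 1            ; one pull group, not 700
pull_group0     = Protein      ; reference group (example name)
pull_group1     = LIG          ; the pull group (example name)
pull_rate1      = 0.01         ; nm/ps, example value
pull_k1         = 1000         ; kJ mol^-1 nm^-2, example value
; freezing the pull group in x and y, as suggested above:
freezegrps      = LIG
freezedim       = Y Y N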
On Jan 10, 2010, at 4:24 PM, Chao Zhang wrote:
Dear GMX-Users,
I'm testing my fully hydrated 256-lipid system on Blue Gene. The purpose is
to find out the right number for -npme, as mdrun cannot estimate it
successfully by itself.
You might find g_tune_pme useful, which is available in the git repository.
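A hedged usage sketch (options as in the GROMACS 4.x tool; the core count and
file name are examples): g_tune_pme launches a series of short test runs with
different numbers of PME-only nodes and reports the fastest setting.

g_tune_pme -np 512 -s membrane.tpr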
Hi Chris,
On Dec 23, 2009, at 9:06 PM, chris.ne...@utoronto.ca wrote:
Hello,
I am having trouble getting make_edi -linfix to work with multiple
eigenvectors.
This works for a single EV:
$ echo 3 | make_edi -s ../../SETUP/makeTPR/edi.tpr -f
../../SETUP/makeEDI/eigenvec.trr -o
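For comparison, a hedged sketch of a multi-eigenvector call (-linfix takes an
eigenvector index string such as 1-3; the output file name is made up):

$ echo 3 | make_edi -s ../../SETUP/makeTPR/edi.tpr -f
../../SETUP/makeEDI/eigenvec.trr -linfix 1-3 -o sam.edi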
On Dec 21, 2009, at 5:26 PM, david.lisgar...@canterbury.ac.uk wrote:
Dear Users,
Re: Introductory tutorial.
I am trying to run the following:
editconf -f out.gro -o fws_ctr.gro -center x/2 y/2 z/2
You have to give real coordinates (in nm) for the center, not expressions like x/2.
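For example, with real numbers (here half of a hypothetical 5 x 5 x 5 nm box):

editconf -f out.gro -o fws_ctr.gro -center 2.5 2.5 2.5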
Thank you in advance.
Flor
Dra. M. Florencia Martini
Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
Cátedra de Química General e Inorgánica
Facultad de Farmacia y Bioquímica
Universidad de Buenos Aires
Junín 956 2º (1113)
Tel: +54 011 4964-8249, ext. 24
--- On Fri, Sep 25, 2009, Carsten Kutzner
On Sep 23, 2009, at 2:12 PM, Enamul Haque wrote:
Hi gromacs experts,
I am trying to install gromacs version 4.0 on my laptop running
Ubuntu, but I can't. It shows an error like this:
./configure
checking build system type... i686-pc-linux-gnulibc1
checking host system type...
On Aug 11, 2009, at 11:52 AM, BSV Ramesh wrote:
Dear All,
I am getting the following error:
Killed by signal 2
Killed by signal 2
Killed by signal 2
.
.
.
Killed by signal 2
On Jul 7, 2009, at 9:28 AM, Chih-Ying Lin wrote:
Hi
I have installed GROMACS and I want to see the source code.
In which directory can I find it?
gromacs-4.0.5/src
gromacs-4.0.5/include
Carsten
On Jul 7, 2009, at 9:35 AM, Chih-Ying Lin wrote:
Hi
inside template.c:
#include "statutil.h"
#include "typedefs.h"
#include "smalloc.h"
#include "vec.h"
#include "copyrite.h"
#include "statutil.h"
#include "tpxio.h"
What are they?
And where are they?
Go to the gromacs-4.0.5 directory and find out, for example with:
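# one way to locate the headers (the original command was truncated):
cd gromacs-4.0.5
find . -name "statutil.h"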
On Jun 28, 2009, at 10:46 AM, sharada wrote:
Hi,
I waited for it to finish for almost 5 days and nothing happened
except the creation of those files. I had posted a similar mail some
time back. Is there no solution for this? Is it something to do
with the speed of the system? I ran the program
On Jun 29, 2009, at 11:58 AM, sharada wrote:
hello,
I have downloaded the tar file from the github and extracted the
contents. However, I am unable to understand the README file, as it is
in a language other than English. Could you kindly provide
instructions on how to go about using it?
On Jun 22, 2009, at 4:43 PM, akalabya bissoyi wrote:
hello everybody,
I am running gromacs on a PC, and my simulations take a lot of time.
Can anybody help me with parallel computing on my PC so that
my simulations will run faster?
Any materials/protocol so that I can
Regards,
Carsten
Hi,
it's written at the beginning of the .c file:
* You can compile this tool using the Gromacs Makefile from the
* share/gromacs/template directory, just replace 'template' by 'g_tune_pme'
* where needed. To enable shell completions for g_tune_pme, just
* copy the provided completion.*
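A condensed sketch of that procedure (the install prefix and file locations
are assumptions; the Makefile may carry an architecture suffix in your
installation):

cd /usr/local/gromacs/share/gromacs/template
cp /path/to/g_tune_pme.c .
sed 's/template/g_tune_pme/g' Makefile > Makefile.g_tune_pme
make -f Makefile.g_tune_pme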
On Apr 21, 2009, at 5:04 PM, sheerychen wrote:
Hello everybody. I have a question about the parallel running of
mdrun_mpi. I suspect that sometimes the parallel running of mdrun_mpi
cannot utilize the domain decomposition.
This is the case when I use the batch system on the computer cluster
Hi,
On Apr 21, 2009, at 5:53 PM, sheerychen wrote:
yes, both versions are compiled as MPI versions. However, the MPI start-up
messages are different. With MPICH, it shows a 1D domain
decomposition like 3*1*1, and only 1 output file is produced. With
MPICH2, no such information
= 6 then.
Carsten
On Jan 13, 2009, at 9:15 PM, ha salem wrote:
Dear Carsten and gromacs specialists,
I enabled flow control on the HP ProCurve. Now I want to know how I
can configure low latency on the network. Is it required?
thank you
There are quite a few parameters that affect Ethernet performance. They
can be important for the performance of a parallel simulation.
Carsten
to Enable with the space bar. More information will be available in the
manual, see e.g.
http://www.hp.com/rnd/support/manuals/2800.htm
Carsten
--- On Fri, 1/2/09, Carsten Kutzner ckut...@gwdg.de wrote:
From: Carsten Kutzner ckut...@gwdg.de
Subject: Re: [gmx-users] dear KUTZNER (flow control
Dear Ha Salem,
in all the switches we tested at that time, flow control was disabled
by default. You can connect to the switch (e.g. via telnet) and
activate flow control. For the ProCurve switches, flow control can be
enabled and disabled for each of the ports individually.
Carsten
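A heavily hedged sketch of such a session (ProCurve-style CLI; exact commands
differ between models and firmware revisions, and the host name is made up):

telnet procurve-switch
configure
interface 1-24 flow-control
write memory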
On Dec 18, 2008, at 10:25 AM, Venkat Reddy wrote:
How do I save the coordinates of atoms at regular intervals during
mdrun? Is it automatic, or do we need to modify the .mdp file?
Hi,
the 'nstxout' and 'nstxtcout' entries in the mdp file allow you to set
how often the coordinates are written to the .trr and .xtc trajectories.
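For example (the values are illustrative, not a recommendation):

; in the .mdp file:
nstxout   = 5000   ; full-precision coordinates to the .trr every 5000 steps
nstxtcout = 500    ; compressed coordinates to the .xtc every 500 steps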
Hi,
most likely the Ethernet is the problem here. I compiled some numbers
for the DPPC benchmark in the paper "Speeding up parallel GROMACS on
high-latency networks",
http://www3.interscience.wiley.com/journal/114205207/abstract?CRETRY=1&SRETRY=0
These are for version 3.3, but PME will behave similarly.
Hi Justin,
I have written a small gmx tool that systematically tries various
PME/PP balances for a given number of nodes and afterwards suggests
the fastest combination. Although I plan to extend it with more
functionality, it's already working, and I can send it to you
Which gromacs version are you using?
And have a look at the messages by Carsten Kutzner on this list; he
wrote a lot on gromacs scaling.
Jochen
Best regards,
Tiago Marques
connected via infiniband.
With Thanks,
Vivek
2008/9/26 Carsten Kutzner [EMAIL PROTECTED]:
Hi Tiago,
if you switch off PME and your system suddenly scales, then the
problems likely result from bad MPI_Alltoall performance. Maybe
this is worth a check
is helpful in figuring out the problem.
Please advise.
With Thanks,
Vivek
2008/9/11 Carsten Kutzner [EMAIL PROTECTED]:
vivek sharma wrote:
Hi There,
I am running the gromacs parallel version on a cluster, with different
-np options.
Hi,
these are described in the following papers:
- Speeding up parallel GROMACS on high-latency networks, 2007, JCC, Vol. 28, No. 12
- GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable
  Molecular Simulation, 2008, JCTC 4 (3)
Hope that helps,
Carsten
With Thanks,
Vivek
2008/9/12 Carsten Kutzner [EMAIL PROTECTED]:
Hi Rebeca,
lines 69/70 of the 3.3 include/types/simple.h read:
/* Max number of nodes */
#define MAXNODES 256
This obviously needs to be set to a higher value.
There is also a MAXNODES parameter in CVS src/gmxlib/tpxio.c,
which I guess also needs to be set to the same value if you
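The change itself is a one-line edit; a sketch (1024 is only an example
value, pick one at least as large as your intended node count):

/* Max number of nodes */
#define MAXNODES 1024   /* raised from 256; mirror this in src/gmxlib/tpxio.c */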
Lee Soin wrote:
Does GROMACS have a multi-thread implementation, instead of using MPI?
No, at least not yet.
Carsten
Lee Soin wrote:
But I see that mdrun has an option -nt, "number of threads to start
on each node". Does this mean multi-threading?
Yes, -nt means the number of threads. It's already there for a future version.
Carsten
steps, might be longer).
Carsten
Best wishes!
Ji Xu
seen, I can say that you cannot expect any
speedup if your computers are only connected with 100 Mbps. You will
need at least 1000 Mbps, or better, InfiniBand/Myrinet.
Carsten
thank you
--- On Sun, 6/15/08, Carsten Kutzner [EMAIL PROTECTED] wrote:
From: Carsten Kutzner [EMAIL PROTECTED]
On 15.06.2008, at 20:19, ha salem wrote:
dear users,
I have encountered a problem with mpirun. I have 2 PCs (each with one
Intel quad-core CPU). When I run mdrun on 1 machine with the -np 4
option, the calculation runs on 4 cores and goes faster; the system
monitor shows all 4 cores
Hi Nicolas,
it is no problem to read 'older' tpr files with a newer version of
gromacs. The other way round will probably not work - but gromacs will
then give you an error message displaying the version differences.
Carsten
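A hedged illustration (the file name is made up; gmxdump simply prints the
file's contents, so a successful dump shows that the file is readable):

gmxdump -s old_3.3_run.tpr | head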
Nicolas Martinez wrote:
Hello gromacs users
I am using
On 24.03.2008 at 10:17, maria goranovic wrote:
Hi Folks,
My simulation is running too slowly. It took 10 wall-clock hours (40
CPU hours) for a short 50 ps simulation of a ~23000-atom DPPC
bilayer. The hardware is a 4-CPU node. The installation is gromacs
3.3.1. I have run much larger
Anna Marabotti wrote:
Dear GMX developers (in particular dear Carsten Kutzner),
to overcome problems with parallel runs of GROMACS on a Linux cluster
with Gigabit Ethernet interconnect, I downloaded the gmx_all-to-all
package to speed up the processes. In the instructions, however
http://folding.bmc.uu.se/remd/index.php
It's in the news section of www.gromacs.org.
Carsten
OZGE ENGIN wrote:
Hi all,
Somebody sent a mail about the address of a server that calculates the
temperatures for a REMD simulation. However, I cannot find this mail.
Could
Hi Andreas,
try
./configure --enable-mpi --without-x
Carsten
On 18.01.2008 at 09:45, Andreas Kukol wrote:
On SuseLinux 10.3 the command ./configure --enable-mpi works fine,
but make terminates at this point. Using the option --disable-shared
did not change anything.
I would be
Put the lib / include directories in the LDFLAGS / CPPFLAGS variables.
This should do the trick.
Carsten
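A sketch of what that looks like with the autoconf build (the LAM paths are
examples; substitute your MPI installation's directories):

export CPPFLAGS="-I/opt/lam/include"
export LDFLAGS="-L/opt/lam/lib"
./configure --enable-mpi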
On 03.01.2008 at 17:56, mahdi fathi wrote:
Dear Dr. Carsten Kutzner,
I want to install gromacs on 4 machines at my small lab,
but I couldn't find clear instructions for a parallel installation on
gromacs.org
Hi Servaas,
I often had similar problems when running on mpich-1.2.x. In my case
they all vanished when I was using any other MPI implementation, like
LAM, OpenMPI, or mpich-2.x.
Carsten
servaas michielssens wrote:
Hadas Leonov wrote:
After installing with openmpi I ran some benchmarks for 4 processors
on a Mac Pro:
d.villin:
  Leopard performance: 13714 ps/day
  old OS performance:  41143 ps/day
  gmx-benchmark:       48000 ps/day
d.poly-ch2:
  Leopard performance: 8640 ps/day
  old OS
Hadas Leonov wrote:
Great, thanks!
It does solve the ia32 compilation problems, but not the lam-mpi
compilation problems - I still get undefined symbols for a few lam
variables.
So, still using open-mpi, Gromacs now works a little better; for the
d.villin benchmark, the performance is:
1
Hi Hadas,
I think the problem is that the make command for some reason thinks it
should build the ia32 inner loops:
ld: in ../gmxlib/.libs/libgmx_mpi.a(nb_kernel204_ia32_sse.o), in
though on your machine the x86_64 kernels should be built. Probably one
can set the correct architecture at configure time.
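A hedged guess at such a configure line (the flags are illustrative and not
from the original mail; GROMACS 3.x's assembly-loop options may differ):

./configure --enable-mpi CFLAGS="-m64" --host=x86_64-apple-darwin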
himanshu khandelia wrote:
Hi Carsten,
The benchmarks were made with 1 NIC/node, and yet the scaling is bad.
Does that mean that there is indeed network congestion? We will try
using back-to-back connections soon,
Hi Himanshu,
In my opinion the most probable scenario is that the bandwidth of
the benchmark for 8 CPUs. See if you get a different value.
Regards,
Carsten
Thank you,
-Himanshu
On 10/25/07, Carsten Kutzner [EMAIL PROTECTED] wrote:
Hi Himanshu,
maybe your problem is not even flow control, but the limited network
bandwidth which is shared among 4 CPUs in your
mdrun -s x.tpr
Hope that helps,
Regards,
Carsten
level on dedicated nodes, it will not affect the performance.
Hope that helps, regards,
Carsten
Hi Linda,
you can use these commands to download the code from CVS:
cvs -z3 -d :pserver:[EMAIL PROTECTED]:/home/gmx/cvs login
(then hit RETURN at the password prompt)
cvs -z3 -d :pserver:[EMAIL PROTECTED]:/home/gmx/cvs co gmx
Carsten
Zhaoyang Fu wrote:
Dear gmx users and developers,
Would you
Mark Abraham wrote:
Gurpreet Singh wrote:
I get the following errors while using the parallel version of mdrun
compiled with openmpi:
mpirun -np 4 mdrun_d_mpi -np 4 -v -deffnm EQUI1
Program mdrun_d_mpi, VERSION 3.3.99_development_20070720
Source code file: gmx_parallel_3dfft.c, line: 90
Hi Abu,
with these commands you can download the latest CVS version:
cvs -z3 -d :pserver:[EMAIL PROTECTED]:/home/gmx/cvs login
(then hit RETURN at the password prompt)
cvs -z3 -d :pserver:[EMAIL PROTECTED]:/home/gmx/cvs co gmx
Carsten
Naser, Md Abu wrote:
I implemented an xy only fitting option for trjconv in the
pim schravendijk wrote:
Hi people,
The Gromacs website is seriously broken. I wanted to copy-paste the
line for cvs checkout as I usually do, but following the link to the
CVS page via either the download or the developer menu gives me a page
saying I don't have access and need to log in.
MPI_Alltoall just turned out to be the No. 1 candidate.
Carsten
Hi Chris,
the patch can be downloaded from the gromacs website:
Download -> User Contributions -> Contributed Software -> gmx_alltoall
Carsten
[EMAIL PROTECTED] wrote:
This is a fantastic development. I had wondered why my scaling was so
much better for openmpi than for lam. Has the patch been
computer nodes.
Carsten
-----Original Message-----
From: Carsten Kutzner [EMAIL PROTECTED]
To: Discussion list for GROMACS users gmx-users@gromacs.org
Sent: Mon, 19 Jun 2006 10:28:51 +0200
Subject: Re: [gmx-users] MPICH or LAM/MPI
Hello Hector,
since it does not take long to install LAM