On Fri, 2009-06-05 at 14:36 +0200, DEVEL Michel wrote:
> Terry Frankcombe a écrit :
> > Is there any compelling reason you're not using the wrappers
> > mpif77/mpif90?
> >
> >
> In fact, this is for the same reason that I also try to use static linking:
> I have been using two middle-size clusters
Hi,
Would you please tell me, in a little more detail, how you did the
experiment of calling MPI_Test?
Thanks!
From: Lars Andersson
To: us...@open-mpi.org
Sent: Tuesday, June 9, 2009 6:11:11 AM
Subject: Re: [OMPI users] "Re: Best way to overlap computation
On Mon, Jun 8, 2009 at 11:07 PM, Lars Andersson wrote:
> I'd say that your own workaround here is to intersperse MPI_Test calls
> periodically. This will trigger OMPI's pipelined protocol for large
> messages, and should allow partial bursts of progress while you're
> presumably off doing useful work. I
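In case it helps to see what this looks like in practice, here is a minimal
sketch of the idea (the message size, the do_useful_work() helper and the
two-rank setup are placeholders for illustration, not the actual test code):

/* Minimal sketch (not the actual test code): rank 0 posts one large
 * MPI_Isend and calls MPI_Test between chunks of computation so that
 * Open MPI's pipelined protocol can make progress; rank 1 receives. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define BIGSIZE (100 * 1024 * 1024)   /* placeholder "large" message size */

static void do_useful_work(void) {
    /* stands in for one chunk of the real computation */
}

int main(int argc, char *argv[]) {
    int rank;
    char *buf = malloc(BIGSIZE);
    memset(buf, 0, BIGSIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        int done = 0;
        MPI_Isend(buf, BIGSIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
        while (!done) {
            do_useful_work();                          /* compute a chunk */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* nudge progress  */
        }
    } else if (rank == 1) {
        MPI_Recv(buf, BIGSIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    free(buf);
    return 0;
}

Each MPI_Test call gives the library a chance to push the next chunk of the
large-message transfer, so the send keeps progressing even though no blocking
MPI call is outstanding.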
There is a whole page on the valgrind web site about this topic. Please
read http://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
for more information.
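A Memcheck suppression for these writev warnings would look roughly like the
entry below (the name is arbitrary, and the fun:/obj: patterns are only a
guess; take the real frames from the errors valgrind prints). If I remember
correctly, Open MPI also installs a ready-made suppressions file,
share/openmpi/openmpi-valgrind.supp, in the install tree.

{
   ompi-writev-uninitialised
   Memcheck:Param
   writev(vector[...])
   fun:*writev*
   obj:*libmpi*
}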
george.
On Jun 8, 2009, at 15:24 , Ralph Castain wrote:
We deliberately choose not to initialize our message buffers, as this takes
considerable time. Instead, we fill in only the portion required by a given
message, and then send only that much of the buffer. Thus, the uninitialized
portion is ignored.
I don't know of a way to tell valgrind to ignore it, I
Hi all,
I've configured a source build of OpenMPI 1.3.2 with valgrind enabled
[1], and I'm seeing a lot of errors with writev() when I run this under
valgrind. For example, with the following `hello, world' program:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    printf("hello, world\n");
    MPI_Finalize();
    return 0;
}
Dear all,
I have finally succeeded in building a static Open MPI library for my
single-machine setup, and in using it to link and execute a code with
either gfortran or ifort.
I used
./configure --prefix=/usr/local --with-sge --enable-static
--without-openib --without-portals --without-udapl --without-elan
-
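(As a side note, for anyone who wants to do the link by hand rather than
through mpif77/mpif90: the wrapper compilers can print the flags they would
use without being used for the actual link, e.g.

mpif90 --showme:compile
mpif90 --showme:link

and that output can be copied onto a plain gfortran or ifort command line.)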
Hi,
I'm encountering some issues when running a multithreaded program with
Open MPI (trunk rev. 21380, configured with --enable-mpi-threads).
My program (included in the tar.bz2) uses several pthreads that perform
ping-pongs concurrently (thread #1 uses tag #1, thread #2 uses tag #2, etc.).
This prog
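For reference, the structure of the test is roughly the sketch below (the
thread count, message size and iteration count are placeholder values, not
the exact ones from the attached program; it assumes two ranks and
MPI_THREAD_MULTIPLE):

/* Sketch: N pthreads, each doing a blocking ping-pong on its own tag. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 4
#define NITER    100
#define MSGSIZE  (64 * 1024)

static int rank;

static void *pingpong(void *arg) {
    int tag  = (int)(long)arg;            /* thread #i uses tag #i */
    int peer = (rank == 0) ? 1 : 0;
    char *buf = malloc(MSGSIZE);
    for (int i = 0; i < NITER; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSGSIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD);
            MPI_Recv(buf, MSGSIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSGSIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSGSIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD);
        }
    }
    free(buf);
    return NULL;
}

int main(int argc, char *argv[]) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, pingpong, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    MPI_Finalize();
    return 0;
}

Since each thread uses its own tag, with MPI_THREAD_MULTIPLE all the
ping-pongs should be able to run concurrently.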
Hi,
Yes, we have 2 NICs on the same bus and the other 2 are embedded.
We ran the netperf experiment on our cluster and we could not get
full bandwidth using 4 pairs of copies on two nodes.
The bandwidth increases when the number of NICs goes to 2,
but there is no big increase when it becomes 3 or 4.
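Concretely, by "4 pairs of copies" I mean running four netperf streams in
parallel between the two nodes, something along these lines (the
192.168.x.1 addresses are just placeholders for the peer's four interfaces,
and 30 s is an arbitrary duration):

# on the second node: start the netperf server
netserver

# on the first node: one concurrent stream per NIC of the peer
netperf -H 192.168.1.1 -l 30 &
netperf -H 192.168.2.1 -l 30 &
netperf -H 192.168.3.1 -l 30 &
netperf -H 192.168.4.1 -l 30 &
wait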