On Sat, Aug 16, 2008 at 08:05:14AM -0400, Jeff Squyres wrote:
> On Aug 13, 2008, at 7:06 PM, Yvan Fournier wrote:
>
>> I seem to have encountered a bug in MPI-IO, in which
>> MPI_File_get_position_shared hangs when called by multiple processes in
>> a communicator. It can be illustrated by the […]
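For context, a minimal sketch of the pattern being described (hypothetical code, not Yvan's actual reproducer, which is truncated above): every rank in the communicator opens the same file and then queries the shared file pointer position.

  ! Hypothetical sketch: all ranks open one file and query the shared
  ! file pointer position (the call reported to hang).
  program get_pos_shared
    implicit none
    include 'mpif.h'
    integer :: ierr, rank, fh
    integer(kind=MPI_OFFSET_KIND) :: offset

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_File_open(MPI_COMM_WORLD, 'test.dat', &
                       MPI_MODE_CREATE + MPI_MODE_RDWR, MPI_INFO_NULL, fh, ierr)

    ! The call below is the one reported to hang when several processes
    ! in the communicator reach it.
    call MPI_File_get_position_shared(fh, offset, ierr)
    print *, 'rank', rank, 'shared position', offset

    call MPI_File_close(fh, ierr)
    call MPI_Finalize(ierr)
  end program get_pos_shared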
I am sending you my simulator's Makefile.common, which points to Open MPI; please
take a look at it. Thanks a lot.
--- On Mon, 9/15/08, Jeff Squyres wrote:
From: Jeff Squyres
Subject: Re: [OMPI users] errors returned from openmpi-1.2.7 source code
To: "Open MPI Users"
Excellent!
We developers have talked about creating an FAQ entry for running at
large scale for a long time, but have never gotten a round tuit. I
finally filed a ticket to do this
(https://svn.open-mpi.org/trac/ompi/ticket/1503) -- these pending
documentation tickets will likely be handled […]
Hi,
I am happy to state that I believe I have finally found the fix for the
"No route to host" error.
The solution was to increase the ARP cache on the head node and also add a
few static ARP entries. The cache was running out at some point during the
program execution, leading to connection disruption […]
On Sep 15, 2008, at 2:59 PM, Enrico Barausse wrote:
that was indeed the problem, I'm an idiot (sorry...). I thought there
was an explicit interface somewhere in the libraries that would signal
a missing argument as a syntax error, so I did not check as carefully
as I should have...
Unfortunately […]
Hi Jeff,
> But
> a missing ierr can be a common cause for a segv in Fortran -- we
> typically don't assign to ierr until after the MPI_Send completes, so
> it *could* explain the behavior you're seeing...?
that was indeed the problem, I'm an idiot (sorry...). I thought there
was an explicit interface somewhere in the libraries that would signal
a missing argument as a syntax error, so I did not check as carefully
as I should have...
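For reference, a minimal Fortran sketch of the situation Jeff describes (hypothetical code, not Enrico's program): with mpif.h there is no explicit interface, so a call that omits the trailing ierr argument still compiles, but the library then writes the error code through a bogus address, which can show up as a segv.

  ! Hypothetical sketch: the correct call passes ierr as the last argument.
  program ierr_demo
    implicit none
    include 'mpif.h'
    integer :: ierr, rank, buf

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    buf = rank
    if (rank == 0) then
       ! Correct: trailing ierr is present.
       call MPI_Send(buf, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, ierr)
       ! Buggy variant (compiles with mpif.h, may segfault at run time):
       !   call MPI_Send(buf, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD)
    else if (rank == 1) then
       call MPI_Recv(buf, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &
                     MPI_STATUS_IGNORE, ierr)
    end if
    call MPI_Finalize(ierr)
  end program ierr_demo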
On Sep 14, 2008, at 1:24 PM, Shafagh Jafer wrote:
I installed openmpi-1.2.7 and tested the hello_c and ring_c examples
on single and multiple nodes, and they worked fine. However, when I use
Open MPI with my simulator (by replacing the old MPICH path with the
new Open MPI path) I get many errors reported […]
On Sep 15, 2008, at 11:22 AM, Paul Kapinos wrote:
But the setting of the environment variable OPAL_PREFIX to an
appropriate value (assuming PATH and LD_LIBRARY_PATH are set too)
is not enough to let Open MPI rock & roll from the new location.
Hmm. It should be.
Because of the fact that […]
Simply to keep track of what's going on:
I checked the build environment for Open MPI and the system's settings;
they were built using gcc 3.4.4 with -Os, which is reputed to be unstable and
problematic with this compiler version. I've asked Prasanna to rebuild
using -O2, but this could be a bit lengthy […]
>>>> […], that we have to support a complete petting zoo of
>>>> Open MPI installations. Sometimes we may need to move things around.
>>>>
>>>> If Open MPI is being configured, the install path may be provided using
>>>> the --prefix keyword, like so: […]
>>> […] tmp1 an installation of Open MPI may be
>>> found.
>>>
>>> Then, say, we need to *move* this version to another path, say
>>> /my/love/path/for/openmpi/blupp
>>>
>>> Of course we have to set $PATH and $LD_LIBRARY_PATH accordingly (we
>>> can do that ;) […]
Aurélien Bouteiller wrote:
You can't assume that MPI_Send does buffering.
Yes, but I think this is what Eric meant by misinterpreting Enrico's
problem. The communication pattern is to send a message, which is
received remotely. There is remote computation, and then data is sent
back. No[…]
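For reference, a sketch of that communication pattern (illustrative names only, not the code from the thread): rank 0 sends, rank 1 receives, computes, and sends the result back, so every blocking Send already has a matching Recv able to make progress on the other side.

  ! Illustrative sketch of the described pattern: send, remote
  ! computation, send back; no reliance on MPI_Send buffering.
  program ping_pong
    implicit none
    include 'mpif.h'
    integer :: ierr, rank
    integer :: status(MPI_STATUS_SIZE)
    double precision :: work, result

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    if (rank == 0) then
       work = 42.0d0
       call MPI_Send(work, 1, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
       call MPI_Recv(result, 1, MPI_DOUBLE_PRECISION, 1, 1, MPI_COMM_WORLD, &
                     status, ierr)
    else if (rank == 1) then
       call MPI_Recv(work, 1, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, &
                     status, ierr)
       result = 2.0d0 * work        ! stand-in for the remote computation
       call MPI_Send(result, 1, MPI_DOUBLE_PRECISION, 0, 1, MPI_COMM_WORLD, ierr)
    end if
    call MPI_Finalize(ierr)
  end program ping_pong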
Hi Jeff, hi all!
Jeff Squyres wrote:
Short answer: yes, we do compile the prefix path into OMPI. Check
out this FAQ entry; I think it'll solve your problem:
http://www.open-mpi.org/faq/?category=building#installdirs
Yes, reading man pages helps!
Thank you for providing useful help.
Bu[…]
Hello,
I don't have a common file system for all cluster nodes.
I've tried to run the application again with VT_UNIFY=no and to call
vtunify manually. It works well. I managed to get the .otf file.
Thank you.
Thomas Ropars
Andreas Knüpfer wrote:
Hello Thomas,
Sorry for the delay. My first […]
You can't assume that MPI_Send does buffering. Without buffering, you
are in a possible deadlock situation. This pathological case is the
exact motivation for the existence of MPI_Sendrecv. You can also
consider Isend/Recv/Wait; then the Send will never block, even if the
destination is not […]
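A sketch of the two alternatives mentioned above, assuming a simple pairwise exchange (the subroutine and variable names are illustrative, not from the thread):

  ! Illustrative pairwise exchange that does not rely on MPI_Send buffering.
  subroutine exchange(partner, sendval, recvval)
    implicit none
    include 'mpif.h'
    integer, intent(in) :: partner
    double precision, intent(in)  :: sendval
    double precision, intent(out) :: recvval
    integer :: ierr, req
    integer :: status(MPI_STATUS_SIZE)

    ! Option 1: MPI_Sendrecv pairs the send and the receive, so the
    ! library can progress both sides without buffering the message.
    call MPI_Sendrecv(sendval, 1, MPI_DOUBLE_PRECISION, partner, 0, &
                      recvval, 1, MPI_DOUBLE_PRECISION, partner, 0, &
                      MPI_COMM_WORLD, status, ierr)

    ! Option 2: Isend / Recv / Wait -- the send never blocks, even if the
    ! destination has not posted its receive yet.
    ! call MPI_Isend(sendval, 1, MPI_DOUBLE_PRECISION, partner, 1, &
    !                MPI_COMM_WORLD, req, ierr)
    ! call MPI_Recv(recvval, 1, MPI_DOUBLE_PRECISION, partner, 1, &
    !               MPI_COMM_WORLD, status, ierr)
    ! call MPI_Wait(req, status, ierr)
  end subroutine exchange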
Sorry about that, I had misinterpreted your original post as being the
send-receive pair. The example you give below does indeed seem correct,
which means you might have to show us the code that doesn't
work. Note that I am in no way a Fortran expert; I'm more versed in C.
The only hint I'd […]