Hi Leonardo,
I guess that your program uses POSIX threads and needs the MPI thread
support level MPI_THREAD_MULTIPLE, right?
Unfortunately, the version of VT integrated into OMPI supports neither
Pthreads nor any MPI thread level.
The latest stand-alone version of VT (5.6.3) supports at least
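For reference, a Pthreads-based program would request that support level
at startup roughly like this (a minimal sketch; the error handling is
illustrative, not prescribed):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multi-threaded support; MPI may grant less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "need MPI_THREAD_MULTIPLE, got level %d\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... create Pthreads that may each call MPI concurrently ... */

    MPI_Finalize();
    return 0;
}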
On Mar 31, 2009, at 11:00 AM, Jeff Squyres wrote:
On Mar 31, 2009, at 3:45 AM, Sylvain Jeaugey wrote:
Sorry to continue off-topic, but going to System V shm would feel to me
like a step back into the past.
System V shared memory used to be the main way to do shared memory on
MPICH and from my (little) experience, this was truly painful:
Inevitably, when you're testing in your own private environment,
everything works great. You test, test, test and are finally convinced
that it's all perfect. Seconds after you merge it into the main SVN
trunk, you find a dozen little mistakes. Sigh. :-\
After a bunch of SVN commits, I t
Ah -- good catch. Thanks.
Should the same fixes be applied to type_create_keyval_f.c and
win_create_keyval_f.c?
On Apr 1, 2009, at 3:31 PM, wrote:
Author: igb
Date: 2009-04-01 15:31:46 EDT (Wed, 01 Apr 2009)
New Revision: 20926
URL: https://svn.open-mpi.org/trac/ompi/changeset/20926
Log
On Apr 1, 2009, at 4:29 PM, Jeff Squyres wrote:
Should the same fixes be applied to type_create_keyval_f.c and
win_create_keyval_f.c?
Good question. I'll have a look at them.
Iain
On Tue, 2009-03-31 at 11:00 -0400, Jeff Squyres wrote:
> On Mar 31, 2009, at 3:45 AM, Sylvain Jeaugey wrote:
> > System V shared memory used to be the main way to do shared memory on
> > MPICH and from my (little) experience, this was truly painful:
> > - Cleanup issues: does shmctl(IPC_RMID) s
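To illustrate the cleanup hazard being referred to: a System V segment
must be explicitly marked for removal with IPC_RMID, and a process that
dies before doing so leaks the segment until someone removes it by hand
(ipcs/ipcrm). A minimal sketch, with illustrative size and flags:

#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Create and attach a private segment. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    void *addr = shmat(shmid, NULL, 0);

    /* Mark for removal right away: the segment stays usable until the
     * last detach, but now disappears even if this process crashes
     * later. Skipping this step is what leaves orphaned segments. */
    shmctl(shmid, IPC_RMID, NULL);

    /* ... use addr ... */

    shmdt(addr);
    return 0;
}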
The Open MPI Team, representing a consortium of bailed-out banks, car
manufacturers, and insurance companies, is pleased to announce the
release of the "unbreakable" / bug-free version Open MPI 2009
(expected to be available by mid-2011). This release is essentially a
complete rewrite of Open MP
So everyone hates SYSV. Ok. :-)
Given that part of the problems we've been having with mmap have been
due to filesystem issues, should we just unlink() the file once all
processes have mapped it? I believe we didn't do that originally for
two reasons:
- leave it around for debugging pu
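The idea would look roughly like the sketch below (a hypothetical
backing-file path and a plain barrier for illustration; this is not the
actual btl sm code):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <mpi.h>

#define SEG_SIZE 4096
#define SEG_PATH "/tmp/sm_backing"    /* hypothetical path for illustration */

int main(int argc, char **argv)
{
    int rank, fd;
    void *seg;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                  /* one process creates and sizes it */
        fd = open(SEG_PATH, O_RDWR | O_CREAT, 0600);
        ftruncate(fd, SEG_SIZE);
        close(fd);
    }
    MPI_Barrier(MPI_COMM_WORLD);      /* file exists before anyone maps it */

    fd = open(SEG_PATH, O_RDWR);
    seg = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);

    MPI_Barrier(MPI_COMM_WORLD);      /* every process has mapped it */
    if (rank == 0)
        unlink(SEG_PATH);             /* mappings remain valid after unlink */

    /* ... shared-memory traffic through seg ... */

    munmap(seg, SEG_SIZE);
    MPI_Finalize();
    return 0;
}

Note the tradeoff with the first reason above: once the file is
unlinked, there is nothing left on disk to inspect for debugging.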
IIRC, we certainly used to unlink the file after init. Are you sure
somebody changed that?
On Apr 1, 2009, at 4:29 PM, Jeff Squyres wrote:
So everyone hates SYSV. Ok. :-)
Given that part of the problems we've been having with mmap have
been due to filesystem issues, should we just unlin
Bravo!! This is beautiful.
By far my favorite part is "Cobol (so say we all!)".
However, I question why ARM6 was targeted as opposed to ARM7 ;-)
-Paul
George Bosilca wrote:
The Open MPI Team, representing a consortium of bailed-out banks, car
manufacturers, and insurance companies, is pleased t
My wife thought it was frackin' brilliant. :)
-jms
Sent from my PDA. No type good.
----- Original Message -----
From: devel-boun...@open-mpi.org
To: Open MPI Developers
Sent: Wed Apr 01 18:58:55 2009
Subject: Re: [OMPI devel] Open MPI 2009 released
Bravo!! This is beautiful.
By far my favori
On Apr 1, 2009, at 6:58 PM, Ralph Castain wrote:
IIRC, we certainly used to unlink the file after init. Are you sure
somebody changed that?
It looks like we unlink() it during btl sm component close
(effectively during MPI_FINALIZE), not before.
--
Jeff Squyres
Cisco Systems
In osu_bw, process 0 pumps lots of Isends to process 1, and process 1
in turn sets up lots of matching Irecvs. Many messages are in flight.
The question is what happens when resources are exhausted and OMPI
cannot handle so much in-flight traffic. Let's specifically consider
the case of lon
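Stripped down, the traffic pattern in question looks like this (a
simplified sketch of an osu_bw-style window; WINDOW and MSG_SIZE are
assumed values, not the benchmark's actual source):

#include <mpi.h>
#include <stdlib.h>

#define WINDOW   64            /* messages in flight at once */
#define MSG_SIZE (1 << 20)     /* "long" 1 MB messages */

int main(int argc, char **argv)
{
    int rank, i;
    char *buf;
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(MSG_SIZE);

    if (rank == 0) {           /* pump lots of Isends */
        for (i = 0; i < WINDOW; i++)
            MPI_Isend(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req[i]);
        MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {    /* pre-post matching Irecvs */
        for (i = 0; i < WINDOW; i++)
            MPI_Irecv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req[i]);
        MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

The question above is what OMPI does once the WINDOW outstanding long
messages exceed the resources it can devote to in-flight traffic.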