Hi,
I'm implementing a new MTL component that uses message queues to keep
track of posted and unexpected messages. I intended to do this by creating
two global queues, one for posted and one for unexpected, until I found
that the Portals MTL uses a different approach in its queue
implementation
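The two-queue scheme described above is the classic MPI matching structure: an arriving message that finds no matching posted receive is parked on the unexpected queue, and a newly posted receive drains the unexpected queue before parking itself on the posted queue. A minimal sketch of that logic, with invented names (real MTLs also match on source rank and communicator, not just tag):

```c
#include <stddef.h>

typedef struct msg {
    int tag;                 /* match key; illustrative only */
    struct msg *next;
} msg_t;

typedef struct { msg_t *head, *tail; } queue_t;

static void enqueue(queue_t *q, msg_t *m) {
    m->next = NULL;
    if (q->tail) q->tail->next = m; else q->head = m;
    q->tail = m;
}

/* Remove and return the first message matching tag, or NULL. */
static msg_t *dequeue_match(queue_t *q, int tag) {
    msg_t **pp = &q->head;
    while (*pp) {
        if ((*pp)->tag == tag) {
            msg_t *m = *pp;
            *pp = m->next;
            if (q->tail == m) {       /* removed the tail: recompute it */
                msg_t *t = q->head;
                while (t && t->next) t = t->next;
                q->tail = t;
            }
            return m;
        }
        pp = &(*pp)->next;
    }
    return NULL;
}

static queue_t posted = {0}, unexpected = {0};

/* Arrival path: match a posted receive, else park as unexpected. */
msg_t *on_arrival(msg_t *m) {
    msg_t *recv = dequeue_match(&posted, m->tag);
    if (recv) return recv;            /* matched a posted receive */
    enqueue(&unexpected, m);
    return NULL;
}

/* Receive path: check unexpected messages first, else post. */
msg_t *on_post_recv(msg_t *r) {
    msg_t *m = dequeue_match(&unexpected, r->tag);
    if (m) return m;                  /* message had already arrived */
    enqueue(&posted, r);
    return NULL;
}
```

Whether these queues are global (as proposed above) or per-endpoint (as in the Portals MTL) changes only where the `queue_t` instances live, not this matching logic.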
Hi Jeff, hi everybody,
Quite a few things to comment on; I hope I catch them all:
- Yes, we might move vampirtrace to ./ompi/contrib/vampirtrace/, why not. But
since we agreed on the current location ./tracing/vampirtrace/, we should not
rush it just because another software package is coming to Open MPI.
Yeah, the non-blocking interface has some fault tolerance benefits, as
Brian mentioned. We are not quite far enough along to use it yet. I
think that we might need to extend it a bit, but I haven't looked at
it in enough detail to say how exactly at the moment.
So I would say for the moment l
I don't think so (in fact the bookkeeping overhead of the non-blocking
receive is a slight detriment).
Right now modex information is only exchanged during init and during
spawn/dynamics operations (and I do not see that changing at any point
soon). So I think the only use of the non-blocking rec
On Oct 8, 2007, at 11:55 AM, Andrew Friedley wrote:
Tim Prins wrote:
Hi,
I am working on implementing the RSL. Part of this is changing the
modex
to use the process attribute system in the RSL. I had designed this
system to include a non-blocking interface.
However, I have looked again
Tim Prins wrote:
Hi,
I am working on implementing the RSL. Part of this is changing the modex
to use the process attribute system in the RSL. I had designed this
system to include a non-blocking interface.
However, I have looked again and noticed that nobody is using the
non-blocking mod
Hi,
I am working on implementing the RSL. Part of this is changing the modex
to use the process attribute system in the RSL. I had designed this
system to include a non-blocking interface.
However, I have looked again and noticed that nobody is using the
non-blocking modex receive. Becaus
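The non-blocking modex receive under discussion can be pictured as callback registration: a component asks for a peer's attribute and returns immediately, and the runtime fires the callback when the data arrives. The sketch below is purely illustrative; `modex_recv_nb` and friends are invented names, not the actual RSL or Open MPI API, and a real implementation would keep a list of pending requests rather than a single slot.

```c
#include <stddef.h>
#include <string.h>

/* Callback fired when the requested attribute's data arrives. */
typedef void (*modex_cb_t)(const char *key, const void *data,
                           size_t len, void *cbdata);

/* One pending non-blocking receive (single slot, for brevity). */
static struct {
    const char *key;
    modex_cb_t  cb;
    void       *cbdata;
} pending;

/* Register interest in a peer's modex key; returns immediately. */
void modex_recv_nb(const char *key, modex_cb_t cb, void *cbdata) {
    pending.key = key;
    pending.cb = cb;
    pending.cbdata = cbdata;
}

/* Called by the runtime when data for some key is delivered. */
void modex_deliver(const char *key, const void *data, size_t len) {
    if (pending.cb && strcmp(pending.key, key) == 0) {
        modex_cb_t cb = pending.cb;
        pending.cb = NULL;            /* one-shot registration */
        cb(key, data, len, pending.cbdata);
    }
}

/* Demo callback: records the delivered payload length. */
static size_t demo_len = 0;
static void demo_cb(const char *key, const void *data,
                    size_t len, void *cbdata) {
    (void)key; (void)data; (void)cbdata;
    demo_len = len;
}
```

The thread's point stands either way: if modex data only moves at init and spawn time, a blocking receive is simpler and this bookkeeping is pure overhead.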
WHAT: Remove the opal message buffer code
WHY: It is not used
WHERE: Remove references from opal/mca/base/Makefile.am and
opal/mca/base/base.h
svn rm opal/mca/base/mca_base_msgbuf*
WHEN: After timeout
TIMEOUT: COB, Wednesday October 10, 2007
I ran into this code accidentally
For message logging purposes, we need to interface with wait_any,
wait_some, test, test_any, test_some, and test_all. It is not possible to
use PMPI for this purpose. During the face-to-face meeting in Paris
(5-12 October 2007) we discussed this issue and came to the
conclusion that the best way
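The kind of interposition message logging needs can be sketched as follows: at every completion point the logger must observe *which* request in a set finished, information a PMPI wrapper around the top-level call cannot recover once the call returns. The `request_t` type and logging hook below are invented for illustration; they only mimic the shape of a test-any completion path, not Open MPI's actual request machinery.

```c
#include <stddef.h>

typedef struct {
    int complete;           /* nonzero once the operation has finished */
    int peer;               /* source/destination rank, for the event log */
} request_t;

/* Hypothetical event-log hook: the logger must see every completion,
 * including which of several outstanding requests finished. */
static int logged_index = -1;
static void log_completion(int index, const request_t *req) {
    (void)req;
    logged_index = index;
}

/* Interposed test-any: scan once, log and return the index of a
 * completed request, or -1 if none has completed yet. */
int logged_test_any(request_t **reqs, int count) {
    for (int i = 0; i < count; i++) {
        if (reqs[i] && reqs[i]->complete) {
            log_completion(i, reqs[i]);
            reqs[i] = NULL;          /* consumed, like MPI_REQUEST_NULL */
            return i;
        }
    }
    return -1;
}
```

Hooking this inside the library, rather than wrapping MPI_Testany via PMPI, is what lets the logger record completions in the order the implementation actually retires them.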
In last night's MTT, I got a bunch of errors in COMM_SPAWN. I know
we're expecting it to fail (possibly/probably due to IOF errors), but
this didn't appear to be what we expected. For simplicity, I
compiled the IBM test suite manually and ran the spawn test:
[0:30] svbu-mpi:~/svn/ompi-tes
For anyone who is interested, Toon Knapen took some of the ideas
previously discussed on an MPI ABI portable layer and wrapped them up
into the "MorphMPI" concept. Check out what he has done:
http://www.clustermonkey.net//content/view/213/32/
Enjoy!
--
Jeff Squyres
Cisco Systems
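The core trick behind a MorphMPI-style portable ABI layer can be sketched simply: the layer exposes fixed-size integer handles to applications and translates them to whatever the native MPI implementation uses through a lookup table, so a binary built against the portable ABI can run over different MPIs. Everything below is an illustrative sketch under that assumption, not MorphMPI's actual code or names.

```c
#include <stddef.h>

/* The portable ABI handle: always a plain int, regardless of whether
 * the native MPI uses ints (MPICH-style) or pointers (Open MPI-style). */
typedef int morph_comm_t;

/* Stand-in for the native implementation's opaque communicator. */
typedef struct native_comm { int rank_count; } native_comm_t;

#define MORPH_MAX_COMMS 16
static native_comm_t *comm_table[MORPH_MAX_COMMS];

/* Intern a native handle, returning the portable integer handle. */
morph_comm_t morph_comm_register(native_comm_t *nc) {
    for (int i = 0; i < MORPH_MAX_COMMS; i++) {
        if (comm_table[i] == NULL) {
            comm_table[i] = nc;
            return i;
        }
    }
    return -1;  /* table full */
}

/* Translate back to the native handle inside each ABI-layer call. */
static native_comm_t *morph_lookup(morph_comm_t c) {
    return (c >= 0 && c < MORPH_MAX_COMMS) ? comm_table[c] : NULL;
}

/* Example ABI-layer call forwarding to the native implementation. */
int morph_comm_size(morph_comm_t c, int *size) {
    native_comm_t *nc = morph_lookup(c);
    if (!nc) return -1;
    *size = nc->rank_count;
    return 0;
}
```

The cost is one table lookup per call; the benefit is that the handle type, and hence the application ABI, is fixed forever.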
I had a look at the VT integration branch today. I was surprised by
a few things; I think we can do a little better on separation of VT
from the rest of OMPI. But this also touches on the larger general
concept of how we want to bundle other software packages in Open
MPI. Here's a few re