Laurent (I'm assuming Laurent is on devel; I don't have an e-mail
address for him):
In r20352, you added the BTL tree in the onet directory. I think
there are a few problems with this commit:
- you didn't svn copy or svn move from the btl tree, so all history
will be lost, and merging in
On Jan 26, 2009, at 4:33 PM, Nifty Tom Mitchell wrote:
I suspect the most common transport would be TCP/IP and that would
introduce gateway and routing issues between quick fabrics and other
quick fabrics that would be intolerable for most HPC applications (but
not all). It may be that IPoI
On Mon, Jan 26, 2009 at 11:31:43AM -0800, Paul H. Hargrove wrote:
> Jeff Squyres wrote:
>> The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
>> to bring a question to the Open MPI user and developer communities: is
>> anyone interested in having a single MPI job span HCAs or
On Jan 26, 2009, at 15:31 , Brice Goglin wrote:
George Bosilca wrote:
Yes, the only thing we need is a unique identifier per cluster. We
use the last 6 digits from the mapper MAC address.
Ok, thanks for the details. We are going to implement all this in
Open-MX now.
Then, I guess mx__regc
George Bosilca wrote:
> Yes, the only thing we need is a unique identifier per cluster. We
> use the last 6 digits from the mapper MAC address.
Ok, thanks for the details. We are going to implement all this in
Open-MX now.
>> Then, I guess mx__regcache_clean is called when the OMPI free hook
>>
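For reference, the scheme described above (a per-cluster identifier taken
from the last 6 hex digits of the mapper MAC address) can be sketched
roughly as follows; the function and names are illustrative only, not
actual MX or Open MPI code:

    /* Illustrative sketch only: derive a per-cluster identifier from
     * the last 6 hex digits, i.e. the low 3 bytes, of the mapper's
     * 48-bit MAC address. */
    #include <stdint.h>

    static uint32_t cluster_id_from_mapper_mac(const uint8_t mac[6])
    {
        return ((uint32_t)mac[3] << 16) |
               ((uint32_t)mac[4] << 8)  |
                (uint32_t)mac[5];
    }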
Jeff Squyres wrote:
The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
to bring a question to the Open MPI user and developer communities: is
anyone interested in having a single MPI job span HCAs or RNICs from
multiple vendors? (pardon the cross-posting, but I did want to as
The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
to bring a question to the Open MPI user and developer communities: is
anyone interested in having a single MPI job span HCAs or RNICs from
multiple vendors? (pardon the cross-posting, but I did want to ask
each group sep
On Jan 26, 2009, at 9:09 AM, George Bosilca wrote:
By the way, is there a way to get more details from OMPI when it fails
to load a component because of missing symbols like this?
LD_DEBUG=verbose isn't very convenient :)
mca_component_show_load_errors is what you need there. Set it to
som
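The parameter mentioned above can be passed on the mpirun command line
(mpirun --mca mca_component_show_load_errors 1 ...) or through the
environment. The snippet below is a minimal sketch, assuming Open MPI's
OMPI_MCA_ environment-variable prefix; it is not taken from the thread:

    /* Minimal sketch: set the MCA parameter through the environment
     * before MPI_Init so Open MPI reports why a component failed to
     * load instead of skipping it silently. */
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        setenv("OMPI_MCA_mca_component_show_load_errors", "1", 1);
        MPI_Init(&argc, &argv);
        /* ... */
        MPI_Finalize();
        return 0;
    }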
FYI: milliways = www.open-mpi.org. All email, web, and Mercurial
services will be down during the time mentioned below.
svn.open-mpi.org is a different machine, so SVN and Trac services will
remain up.
Begin forwarded message:
From: DongInn Kim <>
Date: January 26, 2009 1:48:54 PM EST
S
A few more details:
- only the MPI objects corresponding to MPI predefined handle
constants (e.g., MPI_COMM_WORLD) have padding; user-defined
communicators (etc.) do not have padding
- the HG repository for this work changed; it's now here:
http://www.open-mpi.org/hg/hgwebdir.cgi/tdd/pdc-pad/
-
Another update for this RFC. It turns out that using pointers instead
of structures as initializers would prevent someone from initializing a
global to one of the predefined handles. So instead, we decided to go
the route of padding the structures to provide us with the ability to
not overrun
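To make the padding idea above concrete, here is a rough sketch of the
pattern; the names and the pad size are illustrative, not the actual
Open MPI declarations:

    /* Only the predefined objects get the padded wrapper; user-created
     * communicators use the plain internal struct. */
    struct ompi_communicator_t {          /* internal struct; may grow */
        int c_index;
        /* ... */
    };

    typedef struct ompi_communicator_t *MPI_Comm;

    #define PREDEFINED_PAD 512            /* assumed fixed size */

    struct ompi_predefined_communicator_t {
        struct ompi_communicator_t comm;
        char padding[PREDEFINED_PAD - sizeof(struct ompi_communicator_t)];
    };

    extern struct ompi_predefined_communicator_t ompi_mpi_comm_world;
    #define MPI_COMM_WORLD ((MPI_Comm) &ompi_mpi_comm_world)

    /* Because MPI_COMM_WORLD is an address constant, a user can still
     * write "MPI_Comm my_comm = MPI_COMM_WORLD;" at file scope, and the
     * exported object can grow into its padding without breaking the
     * ABI. */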
I can start you out; let's do an hg branch for this stuff. Want to
set aside some time at the Feb Forum meeting to discuss?
On Jan 23, 2009, at 2:23 PM, Josh Hursey wrote:
I would like to start implementing the Open MPI Extensions
infrastructure on a branch, and eventually bring it in to t
There are several reasons these calls are there. Please read further.
On Jan 26, 2009, at 02:19 , Brice Goglin wrote:
Hello,
I am testing OpenMPI 1.3 over Open-MX. OpenMPI 1.2 works well but 1.3
does not load. This is caused by OMPI MX components now using some MX
internal symbols (mx_open_boa
Hi List,
the bug report in my mail from december
(http://www.open-mpi.org/community/lists/users/2008/12/7611.php)
seems to have been overlooked (it's definitely a bad idea to send mail
between Christmas and New Year's Eve ;)).
I have not tried to reproduce the error with the 1.3 version but since
the trun
Hello,
I am testing OpenMPI 1.3 over Open-MX. OpenMPI 1.2 works well but 1.3
does not load. This is caused by OMPI MX components now using some MX
internal symbols (mx_open_board, mx__get_mapper_state and
mx__regcache_clean). This looks like an ugly hack to me :) Why don't you
talk to Myricom abou