There was discussion about this issue on the call today. Several
vendors -- Sun, IBM, and Mellanox -- expressed willingness to "fix"
the problem and make OMPI interoperate across different HCAs and
RNICs in a single job run.
So the question is -- what exactly do you want to do to fix this?
Assumedly th
On Jan 26, 2009, at 4:46 PM, Jeff Squyres wrote:
Note that I did not say that. I specifically stated that OMPI
failed, and that it did so because we are customizing for the
individual hardware devices. To be clear: this is an OMPI issue.
I'm asking (at the request of the IWG) if anyon
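
A hedged aside on where that per-device customization tends to show up
in practice: the openib BTL applies vendor- and device-specific defaults
(receive queue layouts, for example), and mismatched settings between
peers are a common way mixed-vendor jobs fall over. One rough workaround
sketch -- assuming an openib build, with illustrative queue values and a
hypothetical ./my_app -- is to pin a single receive-queue spec on every
node:

    # force one common openib receive-queue spec everywhere
    # (the P,... values and ./my_app are placeholders, not recommendations)
    mpirun --mca btl openib,self,sm \
           --mca btl_openib_receive_queues P,65536,256,192,128 \
           -np 16 ./my_app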
On Jan 26, 2009, at 4:33 PM, Nifty Tom Mitchell wrote:
I suspect the most common transport would be TCP/IP, and that would
introduce gateway and routing issues between one fast fabric and
another that would be intolerable for most HPC applications (but not
all). It may be that IPoIB
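
As a concrete sketch of that lowest-common-denominator path (TCP/IP,
possibly carried over IPoIB): Open MPI can already be forced onto its
TCP BTL, which runs over any IP interface at a corresponding
performance cost. The interface name and ./my_app below are assumptions
for illustration only:

    # run the job over TCP only, restricted to an IPoIB interface
    # (ib0 and ./my_app are hypothetical names)
    mpirun --mca btl tcp,self,sm \
           --mca btl_tcp_if_include ib0 \
           -np 16 ./my_app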
On Mon, Jan 26, 2009 at 11:31:43AM -0800, Paul H. Hargrove wrote:
> Jeff Squyres wrote:
>> The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
>> to bring a question to the Open MPI user and developer communities: is
>> anyone interested in having a single MPI job span HCAs or
Jeff Squyres wrote:
The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
to bring a question to the Open MPI user and developer communities: is
anyone interested in having a single MPI job span HCAs or RNICs from
multiple vendors? (pardon the cross-posting, but I did want to ask
each group separately.)