Caitlin Bestler wrote:
devel-boun...@open-mpi.org wrote:
There are two new issues so far:
1) This has uncovered a connection migration issue in the Chelsio
driver/firmware. We are developing and testing a fix for this now.
Should be ready tomorrow hopefully.
I have a fix for the above issue and I can continue with OMPI testing.
Steve --
Can you file a trac bug about this?
On May 10, 2007, at 6:15 PM, Steve Wise wrote:
There are two new issues so far:
1) This has uncovered a connection migration issue in the Chelsio
driver/firmware. We are developing and testing a fix for this now.
Should be ready tomorrow hopefully.
devel-boun...@open-mpi.org wrote:
>> There are two new issues so far:
>>
>> 1) This has uncovered a connection migration issue in the Chelsio
>> driver/firmware. We are developing and testing a fix for this now.
>> Should be ready tomorrow hopefully.
>>
>
> I have a fix for the above issue and I can continue with OMPI testing.
>
> There are two new issues so far:
>
> 1) This has uncovered a connection migration issue in the Chelsio
> driver/firmware. We are developing and testing a fix for this now.
> Should be ready tomorrow hopefully.
>
I have a fix for the above issue and I can continue with OMPI testing.
To wo
On Thu, May 10, 2007 at 05:56:13PM +0300, Michael S. Tsirkin wrote:
> > Quoting Jeff Squyres:
> > Subject: Re: [ewg] Re: [OMPI devel] Re: OMPI over ofed udapl - bugs opened
> >
> > On May 10, 2007, at 10:28 AM, Michael S. Tsirkin wrote:
> >
> > >>What is the advantage of this approach?
> > >
> >
On May 10, 2007, at 10:28 AM, Michael S. Tsirkin wrote:
What is the advantage of this approach?
Current IPoIB CM uses this approach to simplify the implementation.
Overhead seems insignificant.
I think MPI's requirements are a bit different than IPoIB. See
Gleb's response. It is not uncom
On Thu, May 10, 2007 at 04:30:27PM +0300, Or Gerlitz wrote:
> Jeff Squyres wrote:
> >On May 10, 2007, at 9:02 AM, Or Gerlitz wrote:
>
> >> To start with, my hope here is at least to be able to play defensive
> >> here, that is, to convince you that the disadvantages are minor, where
> >> only if this fail
On May 10, 2007, at 9:02 AM, Or Gerlitz wrote:
A different approach which you might want to consider is to have,
at the btl level, --two-- connections per pair of ranks: if A
wants to send to B it does so through the A --> B connection, and if
B wants to send to A it does so through the B --> A connection.
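To make the two-connection idea concrete, here is a minimal C sketch of the
bookkeeping it implies. This is not Open MPI's actual openib BTL code; the
peer_conns_t type and send_to_peer() helper are hypothetical names used only
for illustration.

/* Hypothetical sketch only -- not OMPI code.  Each rank keeps two
 * unidirectional connections per peer: one it initiated (used only for
 * sending) and one the peer initiated (used only for receiving). */
#include <stdio.h>

typedef struct {
    int peer_rank;
    int send_conn;  /* the A --> B connection: set up locally, used to send */
    int recv_conn;  /* the B --> A connection: set up by the peer, used to receive */
} peer_conns_t;

/* Sends always go over the locally initiated connection, so the two sides
 * never contend for a single shared bidirectional connection. */
static void send_to_peer(const peer_conns_t *p, const char *msg)
{
    printf("send to rank %d on connection %d: %s\n",
           p->peer_rank, p->send_conn, msg);
}

int main(void)
{
    peer_conns_t peer_b = { .peer_rank = 1, .send_conn = 10, .recv_conn = 20 };
    send_to_peer(&peer_b, "eager fragment");
    return 0;
}

The obvious cost is the extra per-peer connection state; that appears to be
the kind of disadvantage Or is offering to argue is minor.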
On Thu, May 10, 2007 at 08:22:41AM -0400, Jeff Squyres wrote:
> (FWIW: the internal Mellanox code name for ConnectX is Hermon,
> another mountain in Israel, just like Sinai, Arbel, etc.).
>
Yes, but Hermon is the highest one, so theoretically Mellanox can only
go downhill from there :)
--
On May 10, 2007, at 8:23 AM, Or Gerlitz wrote:
A different approach which you might want to consider is to have,
at the btl level, --two-- connections per pair of ranks: if A
wants to send to B it does so through the A --> B connection, and if B
wants to send to A it does so through the B --> A connection.
On May 10, 2007, at 8:08 AM, Peter Kjellstrom wrote:
I recently tried ompi on early ConnectX hardware/software.
The good news: it works =)
We've seen some really great 1-switch latency using the early access
ConnectX hardware. I have a pair of ConnectX's in my MPI development
cluster at C
I recently tried ompi on early ConnectX hardware/software.
The good news: it works =)
However, ompi needs a chunk of options set to recognize the
card, so I made a small patch (setting it up like old Arbel-style
hardware).
Patch against openmpi-1.3a1r14635
/Peter
---
--- ompi/mca/btl/openib/mca
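Peter's patch itself is cut off above. Purely as a hedged illustration of the
kind of change involved (assuming the openib BTL's INI-style HCA parameters
file is where such options live; the section name, IDs, and values below are
placeholders, not the contents of his patch), a new card is typically
recognized by adding a section keyed on its PCI vendor and part IDs:

# Hypothetical illustration only -- not Peter's actual patch.
# The openib BTL matches an HCA by its PCI vendor/part ID and applies the
# parameters from the matching section, so an otherwise unrecognized card
# can be given old Arbel-style defaults.
# (vendor_id and vendor_part_id below are placeholders.)
[Mellanox ConnectX]
vendor_id = 0x2c9
vendor_part_id = 25408
use_eager_rdma = 1
mtu = 2048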