Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness

2008-08-25 Thread Brian W. Barrett
In the entire 1.2 series, RDMA is only allowed if the architectures of the
two processes match.  The 1.3 series added the ability to choose based on
datatype.
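
(As a quick way to tell which of those behaviours a build will get, here is a
minimal sketch that emits a compile-time warning on the 1.2 series.  It
assumes the OMPI_MAJOR_VERSION / OMPI_MINOR_VERSION / OMPI_RELEASE_VERSION
macros that Open MPI's mpi.h defines; if your mpi.h lacks them, the check
simply drops out.)

    /* Warn at build time if this is the 1.2 series, where the openib BTL
     * falls back to send/recv whenever the peer architecture differs. */
    #include <mpi.h>
    #include <stdio.h>

    #if defined(OMPI_MAJOR_VERSION) && defined(OMPI_MINOR_VERSION)
    #  if OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION <= 2
    /* #warning is a gcc/xlc extension; delete this line if unsupported. */
    #    warning "Open MPI 1.2.x: no RDMA between mismatched architectures"
    #  endif
    #endif

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
    #ifdef OMPI_MAJOR_VERSION
        printf("Built against Open MPI %d.%d.%d\n", OMPI_MAJOR_VERSION,
               OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
    #endif
        MPI_Finalize();
        return 0;
    }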


Brian

On Mon, 25 Aug 2008, Mi Yan wrote:



Brian,

I'm using OpenMPI 1.2.6 (r17946). Could you please check which version
works? Thanks a lot,
Mi
"Brian W. Barrett" <brbar...@open-mpi.org>
Sent by: users-boun...@open-mpi.org
08/25/2008 01:44 PM
To: Open MPI Users <us...@open-mpi.org>
cc: Greg Rodgers/Poughkeepsie/IBM@IBMUS, Brad Benton/Austin/IBM@IBMUS
Subject: Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness

On Mon, 25 Aug 2008, Mi Yan wrote:

> Does OpenMPI always use SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4 and 6 respectively to allow RDMA,
> but the bandwidth between the two heterogeneous nodes is slow, the same as
> the bandwidth when btl_openib_flags is 1. It seems to me that SEND/RECV is
> always used no matter what btl_openib_flags is. Can I force OpenMPI to use
> RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need the
> support for endianness.

Which version of Open MPI are you using?  In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and rdma was moved from being solely based on the
architecture of the remote process to being based on the architecture and
datatype.  It's possible this has been broken again, but there definitely
was some window (possibly only on the development trunk) when that worked
correctly.

Brian




Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness

2008-08-25 Thread Jeff Squyres
I believe that this is something Brad at IBM worked on, wasn't it?  I  
*think* you may just need the development trunk (i.e., upcoming v1.3),  
but I won't swear to that.


Regardless, you need to have OMPI compiled for heterogeneous support  
because control headers still need to be adjusted for endian-ness,  
etc. (even if you're only sending MPI_BYTE).
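
For what it's worth, the kind of measurement being discussed can be done
with a plain ping-pong that only moves MPI_BYTE, so the payload itself never
needs conversion; only Open MPI's internal control headers do, which is why
the heterogeneous build (I believe the configure switch is
--enable-heterogeneous) is still required.  A minimal sketch, not the
original poster's benchmark; the message size and iteration count are
arbitrary:

    /* Ping-pong bandwidth sketch between rank 0 and rank 1 using MPI_BYTE,
     * so no datatype conversion is involved even between x86 and PPC. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int len   = 1 << 20;   /* 1 MiB per message */
        const int iters = 100;
        char *buf;
        int rank, i;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = malloc(len);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("%.1f MB/s\n", 2.0 * iters * len / (t1 - t0) / 1.0e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Running it under different settings (e.g. "mpirun --mca btl_openib_flags 6
...") is how one would see whether the RDMA path is actually being taken.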



On Aug 25, 2008, at 1:57 PM, Mi Yan wrote:


Brian,

I'm using OpenMPI 1.2.6 (r17946). Could you please check which
version works? Thanks a lot,

Mi
"Brian W. Barrett" <brbar...@open-mpi.org>


"Brian W. Barrett" <brbar...@open-mpi.org>
Sent by: users-boun...@open-mpi.org
08/25/2008 01:44 PM
Please respond to
Open MPI Users <us...@open-mpi.org>

To

Open MPI Users <us...@open-mpi.org>

cc

Greg Rodgers/Poughkeepsie/IBM@IBMUS, Brad Benton/Austin/IBM@IBMUS

Subject

Re: [OMPI users] RDMA over IB between heterogenous processors with  
different endianness




On Mon, 25 Aug 2008, Mi Yan wrote:

> Does OpenMPI always use SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4 and 6 respectively to allow RDMA,
> but the bandwidth between the two heterogeneous nodes is slow, the same as
> the bandwidth when btl_openib_flags is 1. It seems to me that SEND/RECV is
> always used no matter what btl_openib_flags is. Can I force OpenMPI to use
> RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need the
> support for endianness.

Which version of Open MPI are you using?  In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and rdma was moved from being solely based on the
architecture of the remote process to being based on the architecture and
datatype.  It's possible this has been broken again, but there definitely
was some window (possibly only on the development trunk) when that worked
correctly.

Brian



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness

2008-08-25 Thread Mi Yan

Brian,

  I'm using OpenMPI 1.2.6 (r17946).  Could you please check which
version works?  Thanks a lot,
Mi


   
 "Brian W. 
 Barrett"  
 <brbarret@open-mp  To
 i.org>Open MPI Users <us...@open-mpi.org>
 Sent by:   cc
 users-bounces@ope Greg
 n-mpi.org Rodgers/Poughkeepsie/IBM@IBMUS, 
   Brad Benton/Austin/IBM@IBMUS
   Subject
 08/25/2008 01:44      Re: [OMPI users] RDMA over IB   
 PM        between heterogenous processors 
       with different endianness   
   
 Please respond to 
  Open MPI Users   
 <users@open-mpi.o 
rg>
   
   




On Mon, 25 Aug 2008, Mi Yan wrote:

> Does OpenMPI always use SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4 and 6 respectively to allow RDMA,
> but the bandwidth between the two heterogeneous nodes is slow, the same as
> the bandwidth when btl_openib_flags is 1. It seems to me that SEND/RECV is
> always used no matter what btl_openib_flags is. Can I force OpenMPI to use
> RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need the
> support for endianness.

Which version of Open MPI are you using?  In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and rdma was moved from being solely based on the
architecture of the remote process to being based on the architecture and
datatype.  It's possible this has been broken again, but there definitely
was some window (possibly only on the development trunk) when that worked
correctly.

Brian


Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness

2008-08-25 Thread Brian W. Barrett

On Mon, 25 Aug 2008, Mi Yan wrote:


> Does OpenMPI always use SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4 and 6 respectively to allow RDMA,
> but the bandwidth between the two heterogeneous nodes is slow, the same as
> the bandwidth when btl_openib_flags is 1. It seems to me that SEND/RECV is
> always used no matter what btl_openib_flags is. Can I force OpenMPI to use
> RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need the
> support for endianness.


Which version of Open MPI are you using?  In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and rdma was moved from being solely based on the
architecture of the remote process to being based on the architecture and
datatype.  It's possible this has been broken again, but there definitely
was some window (possibly only on the development trunk) when that worked
correctly.


Brian
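
For anyone following the btl_openib_flags experiments above, my
understanding of the flag bits (mirroring the MCA_BTL_FLAGS_* constants in
ompi/mca/btl/btl.h; please verify against your own source tree) is sketched
below.  Note that a BTL advertising PUT/GET only makes RDMA possible; on the
1.2 series the architecture check described earlier still forces send/recv
between x86 and PPC.

    /* Assumed meaning of the btl_openib_flags bits (verify in btl.h). */
    #include <stdio.h>

    enum {
        BTL_FLAG_SEND = 0x1,  /* send/recv (copy in/out) protocol */
        BTL_FLAG_PUT  = 0x2,  /* RDMA write                       */
        BTL_FLAG_GET  = 0x4   /* RDMA read                        */
    };

    int main(void)
    {
        int flags = 6;  /* the value tried in the original post */
        printf("send/recv=%d put=%d get=%d\n",
               !!(flags & BTL_FLAG_SEND), !!(flags & BTL_FLAG_PUT),
               !!(flags & BTL_FLAG_GET));
        return 0;
    }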