Does OpenMPI always use the SEND/RECV protocol between heterogeneous
processors with different endianness?

I tried setting btl_openib_flags to 2, 4, and 6 respectively to allow RDMA,
but the bandwidth between the two heterogeneous nodes is slow, the same as
the bandwidth when btl_openib_flags is 1. It seems to me that SEND/RECV is
always used no matter what btl_openib_flags is. Can I force OpenMPI to use
RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need
endianness support.
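
For reference, a minimal sketch of the kind of transfer I am timing (the
buffer size, ranks, and tag are illustrative, not my actual benchmark); it
only moves raw MPI_BYTE data, so no datatype conversion should be needed:

#include <mpi.h>
#include <stdlib.h>

/* Minimal one-way transfer of raw bytes between rank 0 (x86) and
 * rank 1 (PPC). Only MPI_BYTE is used, so no endianness conversion
 * is required. Size and tag are illustrative. */
int main(int argc, char **argv)
{
    int rank;
    const int n = 1 << 20;              /* 1 MiB, illustrative size */
    char *buf = malloc(n);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        MPI_Send(buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    MPI_Finalize();
    free(buf);
    return 0;
}

I launch it with something like
"mpirun --mca btl_openib_flags 6 -np 2 ..." (the flag value being one of
the settings I tried above).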

thanks,
Mi Yan
