Guys,

I actually managed to make it work!
I had to change the Mellanox configuration from Ethernet to Infiniband
and set up IPoIB.
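For the record, this is roughly what that looked like on my side (the
mst device path and the IP address are from my setup, so treat them as
placeholders):

  mst start
  mlxconfig -d /dev/mst/mt4119_pciconf0 set LINK_TYPE_P1=1   # 1 = IB, 2 = ETH
  # reboot (or mlxfwreset) so the new link type takes effect
  modprobe ib_ipoib
  ip addr add 192.168.100.1/24 dev ib0
  ip link set ib0 up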
That was in fact a good experience, but the issue is that not all of my
Mellanox cards can be configured for Infiniband.
My ultimate goal is to make it work over RoCE (Ethernet), without
Mellanox OFED.
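If anyone has already done that, my current understanding (not yet
verified on my side) is that the inbox rdma-core stack plus UCX should
be enough, along these lines, where mlx5_0:1 and ./my_mpi_app are
placeholders for the actual device/port and application binary:

  ucx_info -d | grep Transport   # check which transports UCX detects
  mpirun --mca pml ucx \
         -x UCX_NET_DEVICES=mlx5_0:1 \
         -x UCX_TLS=rc,sm,self \
         ./my_mpi_app

Any corrections to that are welcome.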

Thank you again, guys!
Harutyun

On Mon, Sep 5, 2022 at 11:36 AM John Hearns via users <
users@lists.open-mpi.org> wrote:

> Stupid reply from me. You do know that Infiniband adapters operate without
> an IP address?
> Yes, configuring IPoIB is a good idea - however, Infiniband adapters are
> more than 'super ethernet adapters'.
> I would run the following utilities to investigate your Infiniband fabric
>
> sminfo
> ibhosts
> ibdiagnet
>
> Then on one of the compute nodes
>
> ofed_info
>
> ompi_info
>
> On Sat, 3 Sept 2022 at 19:32, Harutyun Umrshatyan via users <
> users@lists.open-mpi.org> wrote:
>
>> Hi everyone
>>
>> Could someone please share any experience using MPI with RoCE?
>> I am trying to set up Infiniband adapters (Mellanox cards, for example)
>> and run MPI applications over RoCE (instead of TCP).
>> As I understand it, there may be environment requirements or
>> restrictions such as kernel version, installed drivers, etc.
>> I have tried many versions of the MPI libraries and could not succeed. I
>> would highly appreciate any hints or shared experience.
>>
>> Best regards,
>> Harutyun Umrshatyan
>>
