Hi Howard

Thank you very much for your suggestions. All the installation locations in
my case are the defaults, so that is likely not the issue.

What I find a bit confusing is this:

As I mentioned, my cluster has both QLogic InfiniBand and Chelsio iWARP
cards (both are exposed to Open MPI natively as well as through an IP
interface).

With this configuration, if I build libfabric with the configure options
--enable-psm=auto --enable-verbs=auto, then, as I mentioned earlier, only the
PSM provider shows up in the fi_info listing, and Open MPI programs using the
ofi MTL *do* work. However, I do not know whether the traffic is going through
the QLogic card or the Chelsio card; it is likely the former.
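One way I could try to confirm which card is actually carrying the traffic is
to watch the per-device transmit counters in sysfs before and after a run.
This is only a sketch: the device name qib0 and the mpirun arguments are
assumptions, to be adjusted to whatever `ls /sys/class/infiniband` reports on
the node.

```shell
# List the RDMA devices the kernel knows about (e.g. qib0 for the QLogic
# card, a cxgb device for the Chelsio card).
ls /sys/class/infiniband

# Snapshot the transmit-data counter of the suspected device...
before=$(cat /sys/class/infiniband/qib0/ports/1/counters/port_xmit_data)

# ...run the MPI job (placeholder command line)...
mpirun -np 2 --mca pml cm --mca mtl ofi ./a.out

# ...and see whether that device's counter moved.
after=$(cat /sys/class/infiniband/qib0/ports/1/counters/port_xmit_data)
echo "port_xmit_data delta: $((after - before))"
```

Setting FI_LOG_LEVEL=info in the environment should also make libfabric log
which provider it initializes, which may answer the question more directly.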

I am going to ask this on the libfabric list, but perhaps the following
question is also relevant on the Open MPI list:

My understanding of the OFI MTL is the following; please correct me
where I am wrong:
As I understand it, ANY type of transport that exposes a verbs interface
(iWARP, RoCE, InfiniBand from any manufacturer) can be driven by the libfabric
verbs provider (when libfabric is compiled with the --enable-verbs option) and
thus support the OFI MTL (and hence the cm PML?)

Is the above true?
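For what it is worth, here is how I have been checking whether the verbs
provider made it into a libfabric build at all (a sketch; it assumes this
build of fi_info supports the -p/--provider flag, and that config.log is in
the libfabric build tree):

```shell
# Ask fi_info specifically for the verbs provider; if it was not built,
# this prints nothing (or an error) instead of a provider entry.
fi_info -p verbs

# The configure run also records its decision; look for the verbs lines
# in config.log to see why the provider was skipped.
grep -i 'verbs' config.log | head
```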

Best regards
Durga

We learn from history that we never learn from history.

On Mon, Apr 4, 2016 at 7:29 AM, Howard Pritchard <hpprit...@gmail.com>
wrote:

> Hi Durga,
>
> I'd suggest reposting this to the libfabric-users mail list.
> You can join that list at
> http://lists.openfabrics.org/mailman/listinfo/libfabric-users
>
> I'd suggest including the output of config.log.  If you installed
> ofed in a non-canonical location, you may need to give an explicit
> path as an argument to the --enable-verbs configury option.
>
> Note if you're trying to use libfabric with the Open MPI ofi
> mtl, you will need to get literally the freshest version of
> libfabric, either at github or the 1.3rc2 tarball at
>
> http://www.openfabrics.org/downloads/ofi/
>
> Good luck,
>
> Howard
>
>
> 2016-04-02 13:41 GMT-06:00 dpchoudh . <dpcho...@gmail.com>:
>
>> Hello all
>>
>> My machine has 3 network cards:
>>
>> 1. Broadcom GbE (vanilla type, with some offload capability)
>> 2. Chelsio S310 10Gb iWARP
>> 3. QLogic DDR 4X InfiniBand.
>>
>> With this setup, I built libfabric like this:
>>
>> ./configure --enable-udp=auto --enable-gni=auto --enable-mxm=auto
>> --enable-usnic=auto --enable-verbs=auto --enable-sockets=auto
>> --enable-psm2=auto --enable-psm=auto && make && sudo make install
>>
>> However, in the built libfabric, I do not see a verbs provider, which I'd
>> expect for the iWARP card, at least.
>>
>> [durga@smallMPI libfabric]$ fi_info
>> psm: psm
>>     version: 0.9
>>     type: FI_EP_RDM
>>     protocol: FI_PROTO_PSMX
>> UDP: UDP-IP
>>     version: 1.0
>>     type: FI_EP_DGRAM
>>     protocol: FI_PROTO_UDP
>> sockets: IP
>>     version: 1.0
>>     type: FI_EP_MSG
>>     protocol: FI_PROTO_SOCK_TCP
>> sockets: IP
>>     version: 1.0
>>     type: FI_EP_DGRAM
>>     protocol: FI_PROTO_SOCK_TCP
>> sockets: IP
>>     version: 1.0
>>     type: FI_EP_RDM
>>     protocol: FI_PROTO_SOCK_TCP
>>
>>
>> Am I doing something wrong or misunderstanding how libfabric works?
>>
>> Thanks in advance
>> Durga
>>
>> We learn from history that we never learn from history.
>>
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/04/28870.php
>>
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/04/28884.php
>
