On Apr 11, 2016, at 2:38 PM, dpchoudh . wrote:
>
> If the vendor of a new type of fabric wants to include support for OpenMPI,
> then, as long as they can implement a libfabric provider, they can use the
> OFI MTL without adding any code to the OpenMPI source tree itself.
If you implement the
Hi Howard and all
Thank you very much for the information. I have a follow-up question:
If the vendor of a new type of fabric wants to include support for OpenMPI,
then, as long as they can implement a libfabric provider, they can use the
OFI MTL without adding any code to the OpenMPI source tree itself?
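In practice the provider choice surfaces only at run time: Open MPI selects
the OFI MTL and a libfabric provider through MCA parameters, so a new
provider needs no Open MPI rebuild. A minimal sketch of that selection;
mtl_ofi_provider_include is the parameter name in recent Open MPI releases,
and hello_mpi stands in for any MPI binary:

    # force the cm PML and the OFI MTL, pinning one libfabric provider
    mpirun --mca pml cm --mca mtl ofi \
           --mca mtl_ofi_provider_include psm -np 2 ./hello_mpi

    # confirm the parameter name on your build
    ompi_info --param mtl ofi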
For the sake of completeness, let me put the answer here. I posted the
question on the libfabric mailing list and, with their input, installed
librdmacm-devel. After that and a rebuild, the issue went away.
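For anyone who hits the same error: libfabric's verbs provider depends on
librdmacm as well as libibverbs, and configure quietly disables providers
whose dependencies are missing. A sketch of the fix, assuming an RPM-based
distro (on Debian/Ubuntu the package is librdmacm-dev instead):

    sudo yum install librdmacm-devel
    # reconfigure and rebuild so configure detects the new headers
    cd libfabric
    ./configure && make clean && make && sudo make install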
Thanks
Durga
We learn from history that we never learn from history.
On Mon, Apr 4, 2016 at 3:
Hi Howard
Thank you very much for your suggestions. All the installation locations in
my case are the default ones, so that is likely not the issue.
What I find a bit confusing is this:
As I mentioned, my cluster has both QLogic InfiniBand and Chelsio iWARP
(which are exposed to OpenMPI natively
Hi Durga,
I'd suggest reposting this to the libfabric-users mail list.
You can join that list at
http://lists.openfabrics.org/mailman/listinfo/libfabric-users
I'd suggest including the output of config.log. If you installed
ofed in a non-canonical location, you may need to give an explicit
path as
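One generic autoconf way to pass such a path, assuming a hypothetical
install prefix of /opt/ofed (adjust to wherever ofed actually lives):

    ./configure CPPFLAGS="-I/opt/ofed/include" \
                LDFLAGS="-L/opt/ofed/lib"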
Hello all
My machine has 3 network cards:
1. Broadcom GbE (vanilla type, with some offload capability)
2. Chelsio S310 10Gb iWARP
3. QLogic DDR 4X InfiniBand.
With this setup, I built libfabric like this:
./configure --enable-udp=auto --enable-gni=auto --enable-mxm=auto
--enable-usnic=auto --e
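Whatever the remaining flags, the fi_info utility that ships with libfabric
is a quick way to confirm which providers actually made it into the build:

    # run after make && make install; each usable endpoint reports its provider
    fi_info | grep provider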