I successfully compiled and installed openmpi 1.2.2 (SVN r14613)
on SLES 10 (2.6.16 Linux kernel) with gcc 4.1.0 (x86_64).
I can run the Intel MPI benchmarks OK at np=2, but at np=4 they hang.
If I change /usr/share/openmpi/mca-btl-openib-hca-params.ini to set

    [QLogic InfiniPath]
    use_eager_rdma = 0

then the np=4 run completes.
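
The same thing can be tried per-run with the matching MCA parameter
instead of editing the system-wide file; a sketch, assuming the stock
IMB binary name:

    # Disable eager RDMA for this run only (mirrors the ini setting above).
    mpirun -np 4 --mca btl openib,self \
           --mca btl_openib_use_eager_rdma 0 ./IMB-MPI1
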
I'm using openmpi 1.2.5 with a QLogic HCA and the openib btl (not PSM).
osu_latency and osu_bw work OK, but when I run osu_bibw with a message
size of 2MB (1<<21), it hangs in btl_openib_component_progress()
waiting for something.
I tried adding printfs at each point where ibv_post_send() is called.
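
Just to illustrate what I mean, the instrumentation is a thin logging
shim around ibv_post_send(); this is only a sketch, and the "where"
caller tag plus rebuilding the BTL to route calls through it are
assumptions, not existing code:

    /* Sketch: log every send work request as it is posted.  "where" is
     * a hypothetical tag identifying the call site in the BTL. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <infiniband/verbs.h>

    static inline int dbg_ibv_post_send(struct ibv_qp *qp,
                                        struct ibv_send_wr *wr,
                                        struct ibv_send_wr **bad_wr,
                                        const char *where)
    {
        int rc = ibv_post_send(qp, wr, bad_wr);
        fprintf(stderr, "[%s] ibv_post_send wr_id=%" PRIu64
                " opcode=%d rc=%d\n", where, wr->wr_id, wr->opcode, rc);
        return rc;
    }
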
Here is a suggested patch for adding the QLogic QLE7240 and QLE7280
DDR HCA cards to the openib params file.
I would like the MTU to default to 4K for these HCAs, but I don't see
any code using the ibv_port_attr.active_mtu field to limit the MTU to
the active MTU. If you like, I can try to make a patch for that.
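
A minimal sketch of what such a check might look like (clamp_mtu is a
hypothetical helper, not existing openib BTL code):

    /* Hypothetical helper: clamp a requested MTU to the port's active
     * MTU as reported by ibv_query_port().  The enum ibv_mtu values are
     * ordered, so a plain comparison works. */
    #include <infiniband/verbs.h>

    static enum ibv_mtu clamp_mtu(struct ibv_context *ctx, uint8_t port,
                                  enum ibv_mtu requested)
    {
        struct ibv_port_attr attr;

        if (ibv_query_port(ctx, port, &attr) != 0)
            return requested;   /* query failed; leave the request alone */
        return requested > attr.active_mtu ? attr.active_mtu : requested;
    }
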
Roland noticed that the QLogic HCA driver was using the PCIe
vendor ID for the ibv_query_device so the IEEE OUI value is
now used. This means the config file should recognize the
vendor ID value 0x1175 too.
Signed-off-by: Ralph Campbell
--- ompi/mca/btl/openib/mca-btl-openib-hca-params.ini.old
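
(The diff body is cut off here. As a sketch only, the shape of the
entry being added is something like the following; the section name and
vendor_part_id are placeholders, not the values from the patch, and the
mtu value is the 4K default mentioned above.)

    # Sketch only: section name and vendor_part_id are placeholders.
    # 0x1175 is the IEEE OUI; the existing PCIe IDs stay listed as well.
    [QLogic InfiniPath DDR]
    vendor_id = 0x1175
    vendor_part_id = 0xNNNN
    mtu = 4096
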
> [...]" in the v1.3 series INI file?
>
> https://svn.open-mpi.org/trac/ompi/browser/trunk/ompi/mca/btl/openib/mca-btl-openib-device-params.ini
>
>
> On Aug 12, 2008, at 5:29 PM, Ralph Campbell wrote:
>
> > Roland noticed that the QLogic HCA driver was using the PCIe
> > vendor ID for the ibv_query_device so the IEEE OUI value is
> > now used. [...]
I have been looking closely at the source code for openmpi-2.0.2, and I
see what looks like support for GPU Direct RDMA, but when testing it
with a small GPU-Direct-aware MPI program from NVIDIA, I don't see
ibv_reg_mr() ever being called with the GPU UVM address.
Looking more closely, it appears that [...]
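
For comparison, what I expected to see at the verbs level is a
registration made directly on the device pointer, roughly like this
(a sketch assuming the nv_peer_mem kernel module is loaded; pd and len
come from elsewhere):

    /* Sketch: registering a GPU buffer directly with the HCA, which is
     * what GPU Direct RDMA amounts to at the verbs level.  Assumes the
     * nv_peer_mem module is loaded and pd is an existing ibv_pd. */
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>

    struct ibv_mr *reg_gpu_buf(struct ibv_pd *pd, size_t len)
    {
        void *gpu_buf = NULL;

        if (cudaMalloc(&gpu_buf, len) != cudaSuccess)
            return NULL;
        return ibv_reg_mr(pd, gpu_buf, len,
                          IBV_ACCESS_LOCAL_WRITE |
                          IBV_ACCESS_REMOTE_READ |
                          IBV_ACCESS_REMOTE_WRITE);
    }
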
Thanks! That is what I was missing.
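
For the archives, this is roughly how I pass it; the benchmark and its
"D D" (device-to-device buffer) arguments are just an example from a
CUDA-enabled OSU build:

    # Enable GPU Direct RDMA in the openib BTL for this run.
    mpirun -np 2 --mca btl_openib_want_cuda_gdr 1 ./osu_latency D D
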
On Mon, Mar 13, 2017 at 7:03 PM, Akshay Venkatesh wrote:
> Hi, Ralph.
>
> Have you already tried passing the parameter
>
> --mca btl_openib_want_cuda_gdr 1
>
> This may help your case.
>
>
> On Mon, Mar 13, 2017 at 5:52 PM, Ralph Campbell wrote: