I successfully compiled and installed Open MPI 1.2.2 (SVN r14613)
on SLES 10 (2.6.16 Linux kernel) with gcc 4.1.0 (x86_64).

I can run the Intel MPI Benchmarks successfully at np=2, but at
np=4 they hang.

If I set
use_eager_rdma = 0
in the [QLogic InfiniPath] section of
/usr/share/openmpi/mca-btl-openib-hca-params.ini, the benchmark gets
much farther before hanging on 2 MB+ messages. If I then create
~/.openmpi/mca-params.conf containing
btl_openib_min_rdma_size = 2147483648
(large enough to effectively disable the RDMA pipeline protocol),
the benchmark completes reliably.
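
For reference, the same two settings can also be passed on the mpirun
command line, which is easier to experiment with (this assumes the
1.2-era openib BTL parameter names and the IMB-MPI1 binary from the
Intel MPI Benchmarks):

mpirun -np 4 \
    --mca btl_openib_use_eager_rdma 0 \
    --mca btl_openib_min_rdma_size 2147483648 \
    ./IMB-MPI1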

When the hang happens, the ipath driver believes that all posted
work requests have been executed and all completion entries have
been generated, while Open MPI appears to believe that some of them
have not yet completed.
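
In case it helps anyone reproduce this, here is a minimal sketch of
how a verbs consumer drains a completion queue (illustrative only,
not Open MPI's actual progress loop; it assumes a cq obtained from
ibv_create_cq()). The discrepancy above would mean completions the
driver claims to have generated never show up in a loop like this:

#include <stdio.h>
#include <infiniband/verbs.h>

static int drain_cq(struct ibv_cq *cq)
{
    struct ibv_wc wc;
    int n;

    /* ibv_poll_cq() returns the number of completions reaped,
     * 0 if the CQ is currently empty, or <0 on error. */
    while ((n = ibv_poll_cq(cq, 1, &wc)) > 0) {
        if (wc.status != IBV_WC_SUCCESS) {
            fprintf(stderr, "WR %llu failed: %s\n",
                    (unsigned long long)wc.wr_id,
                    ibv_wc_status_str(wc.status));
            return -1;
        }
        /* Note: RDMA writes complete on the initiator side only;
         * the target CQ sees nothing for a plain RDMA write. */
        if (wc.opcode == IBV_WC_RDMA_WRITE)
            printf("RDMA write WR %llu completed\n",
                   (unsigned long long)wc.wr_id);
    }
    return n; /* 0 = drained, <0 = poll error */
}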

Can someone point me to the code where an RDMA write is polled for
on the destination node?
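
To frame the question: since a plain RDMA write generates no
completion entry on the target, with eager RDMA the receiver
typically detects arrival by polling a flag in the destination
buffer that the sender writes last. A rough sketch of that general
pattern (not Open MPI's actual code; the struct and names here are
made up for illustration):

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical fragment layout: the sender RDMA-writes the payload
 * followed by a footer whose 'done' byte is written last, so seeing
 * it set implies the payload has already landed. */
struct frag_footer {
    uint32_t length;        /* payload length filled in by sender */
    volatile uint8_t done;  /* sender sets to 1 after the payload */
};

/* Spin-check the footer of the next expected eager-RDMA slot. */
static bool frag_arrived(const struct frag_footer *ftr)
{
    if (ftr->done == 0)
        return false;       /* nothing yet; keep making progress */
    /* On x86 this plain read suffices; weaker memory models would
     * need a read barrier here before touching the payload. */
    return true;
}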
