So what else would you suggest for further testing? Is that pulling
the xrc branch of your OFA-hosted librdmacm/libibverbs/libmlx4 trees
and running librdmacm's rdma_{xclient,xserver} example? I was a bit confused
since I see this example in both the master and the xrc branch.
Hefty, Sean sean.he...@intel.com wrote:
The rdma_xclient / rdma_xserver tests are a place for those. They support
RC and XRC, so they're in the master branch for the RC support. I have
extensions for them in a private branch which aren't quite ready yet.
Again, and just to make sure I got it - for basic XRC testing which goes
through neither MPI nor the OFED compatibility APIs, what env/test would you
recommend - is that the xrc branch of the three libraries plus
rdma_x{client,server}?
Yes - please make sure you have pulled those branches
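For reference, the setup described above might be sketched as below. The repository paths are placeholders (the thread only says "OFA-hosted trees"), and the `-c x` flag for selecting XRC in the examples is an assumption based on later librdmacm releases - at the time of this thread that support lived in the xrc / private branches.

```shell
# Pull the xrc branch of each of the three libraries
# (<ofa-tree> is a placeholder for the actual OFA-hosted git URL):
for lib in libibverbs libmlx4 librdmacm; do
    git clone -b xrc "<ofa-tree>/$lib.git"
    (cd "$lib" && ./autogen.sh && ./configure && make && sudo make install)
done

# Run the XRC example from librdmacm:
# on the server host:
rdma_xserver -p 7471 -c x            # '-c x' = XRC (assumed flag)
# on the client host:
rdma_xclient -s <server-ip> -p 7471 -c x
```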
On 10/14/2011 10:22 PM, Hefty, Sean wrote:
Just an update: the issues that I was seeing were caused by missing
patches in my libraries (one in libmlx4 and the other in the OFED
compatibility layer). The XRC patches in for-next are testing out fine
for me, though it would be good if someone
I pulled the xrc patches in your for-next branch and ran some simple
tests against it. Between the last time I tested XRC and now, I'm
seeing mvapich2 hang during MPI finalize, which I'm debugging.
We need to add an entry into the uverbs and device command
tables to allow user space to actually call ib_open_qp.
Signed-off-by: Sean Hefty sean.he...@intel.com
---
If possible, this should just be merged with the last patch in the XRC
series.
In my previous tests, this was not getting called.
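For context, the uverbs command-table entry above is what lets user space reach the kernel's ib_open_qp via the libibverbs ibv_open_qp() call. A minimal sketch of that call is below; the ctx, xrcd, and QP-number parameters are assumed to have been obtained elsewhere (device open, XRC domain allocation, and out-of-band QP-number exchange), and error handling is elided.

```c
/* Sketch: opening an existing XRC receive QP by number with ibv_open_qp().
 * This is the verbs call that the new uverbs command-table entry enables;
 * ctx, xrcd, and qpn are assumed to come from earlier setup. */
#include <infiniband/verbs.h>

struct ibv_qp *open_xrc_recv_qp(struct ibv_context *ctx,
                                struct ibv_xrcd *xrcd,
                                uint32_t qpn)
{
	struct ibv_qp_open_attr attr = {
		.comp_mask = IBV_QP_OPEN_ATTR_NUM |
		             IBV_QP_OPEN_ATTR_XRCD |
		             IBV_QP_OPEN_ATTR_TYPE,
		.qp_num    = qpn,              /* QP number shared out of band */
		.xrcd      = xrcd,             /* XRC domain the QP belongs to */
		.qp_type   = IBV_QPT_XRC_RECV,
	};

	/* Returns a reference to the already-created QP, or NULL on error. */
	return ibv_open_qp(ctx, &attr);
}
```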