>> Actually I think it is really not such a good idea to manage the
>> reference counter across OOB communication.
>
> But this is exactly what the current API *requires* that users of XRC
> do!!! And I agree, it's not a good idea. :)
We do have unregister on finalization. But this code doesn't introduce any
synchronization across processes on the same node, since the kernel manages
the receive QP. If the reference counter is moved to application
responsibility, it will force the application to manage the reference
counter at the application level; in other words, it will require some
process to be responsible for the QP. In the context of MPI-2 dynamics,
such an approach would make life much more complicated for the MPI
community.

>> IMHO, I don't see a good reason to redefine the existing API.
>> I'm afraid that such an API change will encourage MPI developers to
>> abandon XRC support.
>
> The only reason reference counting was added at all was to support a
> questionable usage model of sharing an XRC domain across jobs.

Well, actually that is the primary reason why XRC was introduced in the
first place. XRC helps reduce the number of all-to-all connections from
NP^2 (N - number of nodes, P - number of processes per node) to NP. You
may find more details here:
http://www.open-mpi.org/papers/euro-pvmmpi-2008-xrc/euro-pvmmpi-2008-xrc.pdf
and here:
http://nowlab.cse.ohio-state.edu/publications/conf-papers/2008/koop-cluster08.pdf

> I'm suggesting that *only those apps* that want that usage model can
> implement it over the provided APIs. They can continue to use OOB
> connections, pass XRC numbers, replace ibv_reg_recv_xrv() with
> atomic_inc(), and let the persistent server destroy the XRC recv QP when
> those processes are done. For everyone else (and it sounds like this is
> really everyone at this point), there's

Who will provide this persistent server? If verbs or some other OFED
library will provide such a service, plus an API to add/remove/register
QPs on that server, then I have no problem.

Regards,
Pasha.