Yes, of course; I meant the QP. Regarding the total number of outstanding RDMA work requests: I can keep a separate cap on that, so that when relatively few peers are active I push the maximum number of RDMAs at each of them, but when many peers are active the number of active RDMAs per peer is reduced.
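To make the capping scheme concrete, here is a minimal sketch of how the node-wide budget could be divided among active peers. The function name and the node-wide cap of 256 are illustrative assumptions, not actual Lustre values; only the per-peer default of 8 comes from this thread.

```c
#include <assert.h>

#define MAX_RDMA_PER_PEER 8     /* per-peer default quoted in this thread   */
#define MAX_RDMA_PER_NODE 256   /* hypothetical node-wide cap (assumption)  */

/* Evenly share the node-wide budget, clamped to the per-peer maximum,
 * and never starve a peer completely. */
static int rdma_credits_per_peer(int active_peers)
{
        int share;

        if (active_peers <= 0)
                return MAX_RDMA_PER_PEER;

        share = MAX_RDMA_PER_NODE / active_peers;
        if (share > MAX_RDMA_PER_PEER)
                share = MAX_RDMA_PER_PEER;
        if (share < 1)
                share = 1;
        return share;
}
```

With 4 active peers each still gets the full 8 concurrent RDMAs; with 128 active peers each is held to 2.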
However, I guess this still means that CQ resources sufficient for the maximum number of RDMAs I _could_ queue have to be allocated...

> -----Original Message-----
> From: Sean Hefty [mailto:[EMAIL PROTECTED]
> Sent: Thursday, November 10, 2005 7:12 PM
> To: Eric Barton
> Cc: openib-general@openib.org
> Subject: Re: [openib-general] Lustre over OpenIB Gen2
>
> Eric Barton wrote:
> > 5. Should I pre-map all physical memory and do RDMA in page-sized fragments?
> >    This avoids any mapping overhead at the expense of having much larger
> >    numbers of queued RDMAs. Since I try to keep up to 8 (by default) 1MByte
> >    RDMAs active concurrently to any individual peer, with 4k pages I can have
> >    up to 2048 RDMA work items queued at a time per peer.
> >    This is 20 million outstanding RDMA work requests per node.
> >
> >    And if I pre-map, can I be guaranteed that if I put the CQ into the error
> >    state, all remote access to my memory is revoked (e.g. could a CQ I create
> >    after I destroy the one I just shut down somehow alias with it such that a
> >    pathologically delayed RDMA could write my memory)?
>
> I think that you mean QP into the error state. If the QP is in the error state,
> then further access from a remote system should be impossible.
>
> - Sean

_______________________________________________
openib-general mailing list
openib-general@openib.org
http://openib.org/mailman/listinfo/openib-general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
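For reference, the worst-case numbers in the thread fall out of simple arithmetic: 8 concurrent 1 MByte RDMAs per peer, split into 4k-page fragments, is 2048 work requests per peer, and that per-peer figure times the peer count is the ceiling a CQ would have to be sized for. A sketch of that back-of-envelope calculation (the function name is hypothetical; the result is what one would pass as the `cqe` argument to `ibv_create_cq()`):

```c
#include <assert.h>

#define RDMA_CONCURRENCY 8               /* concurrent RDMAs per peer */
#define RDMA_BYTES       (1024 * 1024)   /* 1 MByte per RDMA          */
#define PAGE_BYTES       4096            /* 4k pages                  */

/* Worst-case completion-queue entries if every peer has its full
 * complement of page-fragment RDMA work requests in flight. */
static long cq_entries_needed(int npeers)
{
        long per_peer = (long)RDMA_CONCURRENCY * (RDMA_BYTES / PAGE_BYTES);

        return per_peer * npeers;
}
```

That gives 2048 entries for a single peer, and roughly the 20 million quoted above at around 10,000 peers.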