On Tuesday, 13 November 2012 at 23:36 -0500, Christopher Mitchell wrote:
> Hi,
>
> I am working on building an InfiniBand application with a server that
> can handle many simultaneous clients. The server exposes a chunk of
> memory that each of the clients can read via RDMA. I was previously
> creating a new MR on the server for each client (in that connection's
> PD, of course). However, under stress testing, I realized that
> ibv_reg_mr() started failing after I had simultaneously registered the
> same area enough times to cover 20.0 GB. I presume the problem is that
> some pinning limit is being reached, although ulimit reports
> "unlimited" for all relevant settings.
There's a limit on the number of registered memory regions per HCA. See
ibv_query_device(): struct ibv_device_attr, field max_mr.

> I tried creating a single global PD and a single MR to be shared among
> the multiple connections, but rdma_create_qp() fails with an invalid
> argument when I try to do that. I therefore deduce that the PD
> specified in rdma_create_qp() must correspond to an active connection,
> not simply be created by opening a device.
>
> Long question short: is there any way I can share the same MR among
> multiple clients, so that my shared memory region is limited to N
> bytes instead of N/C (clients) bytes?

If each rdma_cm_id descriptor is tied to the same ibv_context, you
should be able to share one memory pool registered with ibv_reg_mr().
This is the usual case, since you are likely to have only one HCA in
your system. Get a first context after binding your listening
rdma_cm_id.

Regards

--
Yann Droneaud
OPTEYA

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
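To make the suggestion above concrete, here is a minimal sketch of the
shared-PD/shared-MR pattern. It assumes a single HCA, so every incoming
rdma_cm_id shares the ibv_context of the bound listening id; the names
(setup_shared_region, on_connect_request, region, etc.) are illustrative,
not from the original thread, and error handling is abbreviated:

```c
/* Sketch: register one MR in one PD and reuse it for every client,
 * instead of one MR per connection. Assumes a single HCA. */
#include <stdio.h>
#include <string.h>
#include <rdma/rdma_cma.h>

static struct ibv_pd *shared_pd;
static struct ibv_mr *shared_mr;
static char region[1024 * 1024];        /* memory exposed to clients */

/* After rdma_bind_addr(), listen_id->verbs holds the device context. */
static int setup_shared_region(struct rdma_cm_id *listen_id)
{
    struct ibv_device_attr attr;

    /* Check the per-HCA limit on memory regions (max_mr). */
    if (ibv_query_device(listen_id->verbs, &attr))
        return -1;
    printf("max_mr = %d\n", attr.max_mr);

    shared_pd = ibv_alloc_pd(listen_id->verbs);
    if (!shared_pd)
        return -1;

    shared_mr = ibv_reg_mr(shared_pd, region, sizeof(region),
                           IBV_ACCESS_LOCAL_WRITE |
                           IBV_ACCESS_REMOTE_READ);
    return shared_mr ? 0 : -1;
}

/* On each RDMA_CM_EVENT_CONNECT_REQUEST, pass the shared PD.
 * rdma_create_qp() requires the PD's context to match id->verbs,
 * which holds here because every connection arrives on the same HCA
 * -- this is why a PD from an unrelated ibv_open_device() context
 * fails with EINVAL. */
static int on_connect_request(struct rdma_cm_id *id, struct ibv_cq *cq)
{
    struct ibv_qp_init_attr qp_attr;

    memset(&qp_attr, 0, sizeof(qp_attr));
    qp_attr.send_cq = cq;
    qp_attr.recv_cq = cq;
    qp_attr.qp_type = IBV_QPT_RC;
    qp_attr.cap.max_send_wr = 16;
    qp_attr.cap.max_recv_wr = 16;
    qp_attr.cap.max_send_sge = 1;
    qp_attr.cap.max_recv_sge = 1;

    return rdma_create_qp(id, shared_pd, &qp_attr);
}
```

With this arrangement the region is pinned once, so the registered
footprint stays at N bytes regardless of the number of clients; each
client receives the same rkey/address pair for its RDMA reads.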