Re: mlx4 qp allocation
Hi Jack,

Thanks so much for clarifying my understanding!

Best Regards,
Bob

On Thu, Feb 13, 2014 at 7:08 PM, Jack Morgenstein wrote:
> On Thu, 13 Feb 2014 00:18:22 +0530 Bob Biloxi wrote:
>
>> The VFs need to allocate the memory for the Send Queue buffer, Receive
>> Queue buffer, Completion Queue buffer, and Event Queue buffer.
>>
>> Is that right?
>
> Yes.
>
>> Also, as the QPs, CQs, etc. are created by the HCA when the ALLOC_RES
>> command is issued, does the PF driver need to maintain anything to
>> associate the QPs and CQs created by the HCA with the owners (VFs)
>> possessing them?
>
> Of course. These resources must be de-allocated if, for example, the
> VM running the VF crashes -- otherwise we have a resource leak.
>
> This is also used for security checking, to make sure that a VF does
> not mess around with resources that do not "belong" to it.
>
> -Jack
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma"
in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
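[Editor's note] The bookkeeping Jack describes -- the PF recording which VF owns each resource, so it can reject foreign accesses and reclaim everything if a VF's VM crashes -- can be illustrated with a minimal userspace sketch. This is not the real mlx4 resource tracker; all names and sizes here are hypothetical:

```c
#include <assert.h>

/* Hypothetical sketch of PF-side ownership tracking: one owner slot per
 * resource id.  The real mlx4 PF driver keeps far richer state; this
 * only illustrates the claim/check/reclaim idea from Jack's answer. */

#define MAX_RES  64
#define NO_OWNER (-1)

struct res_tracker {
    int owner[MAX_RES];   /* VF number owning each resource, or NO_OWNER */
};

static void tracker_init(struct res_tracker *t)
{
    for (int i = 0; i < MAX_RES; i++)
        t->owner[i] = NO_OWNER;
}

/* Record ownership when ALLOC_RES succeeds on behalf of a VF. */
static int tracker_claim(struct res_tracker *t, int res_id, int vf)
{
    if (t->owner[res_id] != NO_OWNER)
        return -1;                 /* already owned by someone */
    t->owner[res_id] = vf;
    return 0;
}

/* Security check: only the owning VF may touch the resource. */
static int tracker_check(const struct res_tracker *t, int res_id, int vf)
{
    return t->owner[res_id] == vf ? 0 : -1;
}

/* Cleanup path: reclaim all resources of a crashed VF; returns count. */
static int tracker_reclaim_vf(struct res_tracker *t, int vf)
{
    int freed = 0;
    for (int i = 0; i < MAX_RES; i++) {
        if (t->owner[i] == vf) {
            t->owner[i] = NO_OWNER;
            freed++;
        }
    }
    return freed;
}
```

The two uses Jack names map directly onto `tracker_check` (security) and `tracker_reclaim_vf` (leak avoidance when a VM dies).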
Re: mlx4 qp allocation
On Thu, 13 Feb 2014 00:18:22 +0530 Bob Biloxi wrote:

> The VFs need to allocate the memory for Send Queue Buffer, Receive
> Queue Buffer, Completion Queue Buffer, Event Queue Buffer.
>
> Is that right?

Yes.

> Also, as the QPs, CQs etc are created by the HCA when ALLOC_RES
> command is issued, does the PF driver need to maintain anything to
> associate the QPs, CQs created by the HCA with owners (VFs) possessing
> them?

Of course. These resources must be de-allocated if, for example, the
VM running the VF crashes -- otherwise we have a resource leak.

This is also used for security checking, to make sure that a VF does
not mess around with resources that do not "belong" to it.

-Jack
Re: mlx4 qp allocation
Hi Jack,

Thanks for the reply; now I understand. On a related note, I have the
following question, and I would really appreciate your help with it.

Considering resources such as QPs, CQs, and EQs, after going through the
code my understanding is that:

1. The Physical Function (PF) driver/hypervisor allocates memory only for
   the ICM space backing these resources.
2. The Virtual Function (VF) driver needs to allocate the corresponding
   system memory for the resources.

For example, if I need 32K QPs, 64K CQs, and 512 EQs, the PF driver
allocates memory only for the ICM. The VFs need to allocate the memory
for the Send Queue buffer, Receive Queue buffer, Completion Queue buffer,
and Event Queue buffer.

Is that right?

Also, as the QPs, CQs, etc. are created by the HCA when the ALLOC_RES
command is issued, does the PF driver need to maintain anything to
associate the QPs and CQs created by the HCA with the owners (VFs)
possessing them?

Thanks so much.

Best Regards,
Bob

On Tue, Feb 11, 2014 at 5:01 PM, Jack Morgenstein wrote:
> On Wed, 29 Jan 2014 15:52:09 +0530 Bob Biloxi wrote:
>
>> These paths are taken based on the return value of mlx4_is_mfunc(dev).
>> This is true for MASTER or SLAVE, which I believe means the Physical
>> Function driver or a Virtual Function driver. So for SR-IOV, it covers
>> all cases.
>>
>> So the MAP_ICM portion, which gets executed as part of
>> __mlx4_qp_alloc_icm, never gets called?
>
> For slaves (VFs), the command is sent via the comm channel to the
> Hypervisor. It is the Hypervisor which invokes map_icm on behalf of
> that slave.
>
> -Jack
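[Editor's note] The memory split Bob describes -- device contexts backed by PF-provided ICM, queue rings allocated from VF system memory -- can be made concrete with a small sketch. The context sizes and ring geometry below are assumptions for illustration, not the real mlx4 layout:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the two-sided memory split: the PF/hypervisor provides ICM
 * backing for per-resource *contexts*, while each VF allocates the actual
 * queue buffers (SQ/RQ rings, CQ ring, EQ ring) from its own system
 * memory and hands their DMA addresses to the HCA via the context.
 * QPC_SIZE/CQC_SIZE are assumed values, not taken from the hardware. */

#define QPC_SIZE 256u   /* assumed bytes per QP context in ICM */
#define CQC_SIZE  64u   /* assumed bytes per CQ context in ICM */

/* PF side: ICM bytes needed to back n_qp QP contexts and n_cq CQ contexts */
static size_t pf_icm_bytes(size_t n_qp, size_t n_cq)
{
    return n_qp * QPC_SIZE + n_cq * CQC_SIZE;
}

/* VF side: system-memory bytes for one QP's send and receive rings */
static size_t vf_qp_buf_bytes(size_t sq_wqes, size_t sq_wqe_sz,
                              size_t rq_wqes, size_t rq_wqe_sz)
{
    return sq_wqes * sq_wqe_sz + rq_wqes * rq_wqe_sz;
}
```

So for Bob's "32K QPs, 64K CQs" example, `pf_icm_bytes(32768, 65536)` is what the PF would have to back in ICM, while each VF sizes its own rings with something like `vf_qp_buf_bytes`.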
Re: mlx4 qp allocation
On Wed, 29 Jan 2014 15:52:09 +0530 Bob Biloxi wrote:

> These paths are taken based on the return value of mlx4_is_mfunc(dev).
> This is true for MASTER or SLAVE which I believe is Physical Function
> Driver/Virtual Function Driver. So for SRIOV, it covers all cases.
>
> The MAP_ICM portion which gets executed as part of __mlx4_qp_alloc_icm
> never gets called?

For slaves (VFs), the command is sent via the comm channel to the
Hypervisor. It is the Hypervisor which invokes map_icm on behalf of
that slave.

-Jack
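[Editor's note] Jack's point is that the MAP_ICM work does still happen for a VF's QP -- it just happens on the hypervisor side, after the request crosses the comm channel. A hedged userspace sketch of that flow (all function names here are hypothetical stand-ins, not mlx4 symbols):

```c
#include <assert.h>

/* Sketch of the comm-channel flow Jack describes: a VF never maps ICM
 * itself; it sends the request to the hypervisor (PF), which performs
 * the mapping on the slave's behalf.  Names are illustrative only. */

enum role { ROLE_PF, ROLE_VF };

static int icm_mapped[64];           /* 1 if this qpn's ICM is mapped */
static int mapped_by_hypervisor[64]; /* 1 if the PF did it for a slave */

/* PF-side handler: maps ICM on behalf of the requesting slave */
static void hypervisor_map_icm_for_slave(int qpn)
{
    icm_mapped[qpn] = 1;
    mapped_by_hypervisor[qpn] = 1;
}

/* Stand-in for the comm channel carrying ALLOC_RES to the hypervisor */
static void comm_channel_alloc_res(int qpn)
{
    hypervisor_map_icm_for_slave(qpn);
}

static void qp_alloc_icm(enum role r, int qpn)
{
    if (r == ROLE_VF)
        comm_channel_alloc_res(qpn); /* VF: forwarded to the PF */
    else
        icm_mapped[qpn] = 1;         /* PF/native: maps ICM directly */
}
```

Either way the QP ends up with ICM backing; what differs is which function context performs the mapping.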
mlx4 qp allocation
Hi,

I was going through linux/drivers/net/ethernet/mellanox/mlx4/qp.c and
have a few questions. I would really appreciate it if someone could
clarify.

In the function mlx4_qp_alloc_icm, there are two paths taken to allocate
a QP:

1. using the ALLOC_RES virtual command
2. using MAP_ICM

These paths are taken based on the return value of mlx4_is_mfunc(dev).
This is true for MASTER or SLAVE, which I believe means the Physical
Function driver or a Virtual Function driver. So for SR-IOV, it covers
all cases.

So the MAP_ICM portion, which is executed as part of __mlx4_qp_alloc_icm,
never gets called? Am I understanding this properly? As per my
understanding, ICM needs to be allocated for all the QPs.

Please help me understand this. Thanks so much.

Best Regards,
Bob
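[Editor's note] The branch Bob is asking about can be paraphrased as follows. This is a simplified, hedged sketch of the structure of mlx4_qp_alloc_icm as described in this thread, not the verbatim kernel code; on a multi-function device the mapping still happens, but on the PF side via the command wrapper, as Jack's reply earlier in the thread explains:

```c
#include <assert.h>

/* Simplified model of the two paths in mlx4_qp_alloc_icm: on a
 * multi-function (SR-IOV) device the driver issues the ALLOC_RES
 * firmware command; only in native mode does it call
 * __mlx4_qp_alloc_icm (the MAP_ICM path) directly. */

enum path { PATH_ALLOC_RES_CMD, PATH_DIRECT_MAP_ICM };

struct dev { int mfunc; };           /* stand-in for mlx4_is_mfunc(dev) */

static enum path qp_alloc_icm_path(const struct dev *d)
{
    if (d->mfunc)
        return PATH_ALLOC_RES_CMD;   /* ALLOC_RES virtual command     */
    return PATH_DIRECT_MAP_ICM;      /* __mlx4_qp_alloc_icm / MAP_ICM */
}
```

So __mlx4_qp_alloc_icm is skipped only in the *caller's* context when the device is multi-function; the ICM allocation itself is not skipped -- the PF performs it when servicing the wrapped ALLOC_RES command.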