Hi Ajit

Looks like you agree with this RFC to move the meter objects to be managed by
PMD-provided pointers.

Thanks for the review!

From: Ajit Khaparde
> On Thu, Mar 25, 2021 at 1:21 AM Matan Azrad <ma...@nvidia.com> wrote:
> >
> > Hi Cristian
> >
> > From: Dumitrescu, Cristian
> > > Hi Li and Matan,
> > >
> > > > -----Original Message-----
> > > > From: Li Zhang <l...@nvidia.com>
> > > > Sent: Thursday, March 18, 2021 8:58 AM
> > > > To: dek...@nvidia.com; or...@nvidia.com; viachesl...@nvidia.com;
> > > > ma...@nvidia.com; shah...@nvidia.com; lir...@marvell.com; Singh,
> > > > Jasvinder <jasvinder.si...@intel.com>; Thomas Monjalon
> > > > <tho...@monjalon.net>; Yigit, Ferruh <ferruh.yi...@intel.com>; Andrew
> > > > Rybchenko <andrew.rybche...@oktetlabs.ru>; Dumitrescu, Cristian
> > > > <cristian.dumitre...@intel.com>
> > > > Cc: dev@dpdk.org; rasl...@nvidia.com; ron...@nvidia.com
> > > > Subject: [PATCH 2/2] [RFC]: ethdev: manage meter API object handles by
> > > > the drivers
> > > >
> > > > Currently, all the meter objects are managed by user IDs:
> > > > meter, profile and policy.
> > > > Hence, each PMD must manage a data structure in order to map each
> > > > API ID to its private PMD management structure.
> > > >
> > > > From the application side, the application has the full picture of
> > > > how meters are going to be assigned to flows and can easily use
> > > > direct mapping even when the meter handle is provided by the PMDs.
> > > >
> > > > Also, this is the approach of the rte_flow API handles:
> > > > the flow handle and the shared action handle are provided by the
> > > > PMDs.
> > > >
> > > > Use driver handles in order to manage all the meter API objects.
> > > >
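For readers joining the thread, here is the difference under discussion in a
rough sketch. The ID-based prototype matches today's rte_mtr.h; the
handle-based variant is only an illustration of the proposed direction, with
a hypothetical name and signature:

    #include <stdint.h>

    struct rte_mtr_params;
    struct rte_mtr_error;
    struct rte_mtr; /* opaque handle, hypothetical under this proposal */

    /* Today: the application picks the numeric ID, and the PMD must map
     * it internally to its private meter object. */
    int rte_mtr_create(uint16_t port_id, uint32_t mtr_id,
                       struct rte_mtr_params *params, int shared,
                       struct rte_mtr_error *error);

    /* Proposed direction (sketch only): the PMD returns an opaque
     * handle, mirroring what rte_flow_create() does for flows. */
    struct rte_mtr *rte_mtr_handle_create(uint16_t port_id,
                                          struct rte_mtr_params *params,
                                          int shared,
                                          struct rte_mtr_error *error);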
> > >
> > > This seems to be take 2 of the discussion that we already had in this
> > > thread:
> > >
> > > https://mails.dpdk.org/archives/dev/2021-March/200710.html,
> > > so apologies for mostly summarizing my previous feedback here.
> > >
> > > I am against this proposal because:
> > > 1. We already discussed this topic of user-provided handles vs.
> > > driver-provided handles at length on this exact email list back in
> > > 2017, when we first introduced this API, and I don't see any real
> > > reason to revisit the decision we took then.
> >
> > Why not?
> > There is more experience\usage now.
> > New drivers have added support, and now the scale keeps growing and
> > growing....
> >
> >
> > > 2. For me, it is more natural and it also helps the application to
> > > simplify its data structures if the user provides its own IDs rather
> > > than the user having to deal with the IDs provided by the driver.
> >
> > Generally, I don't think the other flow-related DPDK APIs align with
> > your view here; see the rte_flow object and rte_flow_shared_action.
> >
> > Specifically for meter:
> >         - Here, meter is a HW\driver offload where performance\rate,
> > either for meter creation\deletion or for the actual data-path, is very
> > important, especially when we talk about very big numbers, so "natural"
> > has less importance here.
> >           We need to think about the global solution for
> > application->API->driver. In the meter feature, the user can manage the
> > IDs better than the PMDs for most of the use-cases (a sketch follows
> > this list):
> >                 1. Meter per flow: just save the driver handle in the
> > app flow context.
> >                 2. Meter per VM\USER flows\rte_flow group\any other
> > context grouping multiple flows: just save the driver handle in the app
> > context.
> >         If the PMD needs to map the IDs, it is more complex for sure,
> > requiring more memory and more lookup time.
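To make the two cases concrete, a minimal app-side sketch (the struct and
field names are hypothetical, and it assumes the proposed opaque meter
handle):

    #include <stdint.h>

    struct rte_flow; /* opaque flow handle, as in today's rte_flow API */
    struct rte_mtr;  /* opaque meter handle, hypothetical in this proposal */

    /* Case 1: meter per flow -- keep the handle in the app flow context. */
    struct app_flow_ctx {
        struct rte_flow *flow;
        struct rte_mtr *mtr;
    };

    /* Case 2: one meter shared by a group of flows (per VM, per user, per
     * rte_flow group, ...) -- keep the handle once in the group context. */
    struct app_group_ctx {
        struct rte_mtr *shared_mtr;
        struct app_flow_ctx *flows;
        uint32_t nb_flows;
    };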
> >
> >         - I'm not sure it is natural for all the use-cases; sometimes
> > generating a unique ID may complicate the app.
> >
> >
> > > 3. It is much easier and more portable to pass numeric and
> > > string-based IDs around (e.g. between processes) as opposed to
> > > pointer-based IDs, as pointers are only valid in one address space and
> > > not in others. There are several DPDK APIs that moved away from
> > > pointer handles to string IDs.
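A tiny illustration of the address-space point (hypothetical message
structs, e.g. for a DPDK primary/secondary process pair):

    #include <stdint.h>

    struct rte_mtr; /* opaque PMD handle, hypothetical */

    /* A numeric ID can be sent to another process and still names the
     * same meter there. */
    struct ipc_meter_msg {
        uint16_t port_id;
        uint32_t mtr_id;
    };

    /* A pointer handle is only meaningful inside the address space that
     * created it; sending it over IPC would be a bug. */
    struct ipc_meter_msg_broken {
        uint16_t port_id;
        struct rte_mtr *mtr; /* invalid in the receiving process */
    };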
> Pardon my ignorance, but which DPDK APIs moved from pointer handles to
> string IDs?
> 
> >
> > Yes, I agree here generally.
> > But again, since meter is used only by rte_flow, it is better to align
> > with the same handle mechanism.
> I don't want to say - do this because rte_flow uses a pointer.
> I don't have a strong opinion for one over the other. In the end the
> logic can be adapted one way or the other.
> But having implemented rte_flow support in the PMD, I think it is a
> good idea to avoid duplicating the meter_id-to-pointer handle
> conversion and bookkeeping logic in both the application and the PMD.
> 
> >
> > > 4. The mapping of user IDs to internal pointers within the driver is
> > > IMO not a big issue in terms of memory footprint or API call rate.
> > > Matan also confirmed this in the above thread when saying this is not
> > > about either driver memory footprint or API call speed, as this
> > > mapping is easy to optimize.
> >
> > Yes, it is not a very big deal, but it still costs more than the new
> > suggestion, especially at big scale (see the sketch below).
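For context, the per-call cost being debated is roughly one table lookup
inside the PMD. A hypothetical sketch using DPDK's rte_hash (the pmd_mtr
type and the function name are made up; it assumes a table keyed by the
32-bit meter ID):

    #include <stdint.h>
    #include <rte_hash.h>

    struct pmd_mtr; /* PMD-private meter object, hypothetical */

    /* ID-based model: every rte_mtr_* call first resolves the
     * user-chosen mtr_id to the private object, and the table itself
     * consumes memory that grows with the number of meters. */
    static inline struct pmd_mtr *
    pmd_mtr_find(struct rte_hash *mtr_table, uint32_t mtr_id)
    {
        void *obj = NULL;

        if (rte_hash_lookup_data(mtr_table, &mtr_id, &obj) < 0)
            return NULL; /* unknown ID */
        return obj;
    }

    /* Handle-based model: the lookup disappears -- the handle the app
     * passes in is already the private object. */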
> >
> > > And last but not least, this change obviously propagates into every
> > > API function, so it would result in big churn in the API, all drivers
> > > and all apps implementing it (including testpmd, etc.), for IMO no
> > > real benefit. Yes, this API is experimental and therefore we can make
> > > changes to it, but I'd rather see incremental and converging
> > > improvements rather than this.
> >
> > Yes, it changes all the APIs, but only a very small part of each; it
> > will be very easy to align all the current DPDK components to this
> > concept.
> >
> > > If you guys insist on this proposal, I would like to get more
> > > opinions from other vendors and contributors from within our DPDK
> > > community.
> >
> >
> > Yes, more opinions are very welcome.
> >
> > Thanks
