On Wed, Oct 28, 2020 at 9:47 AM Liang, Ma <liang.j...@intel.com> wrote:
>
> On 28 Oct 21:27, Jerin Jacob wrote:
> > On Wed, Oct 28, 2020 at 9:19 PM Ananyev, Konstantin
> > <konstantin.anan...@intel.com> wrote:
> > > > > > > > 28/10/2020 14:49, Jerin Jacob:
> > > > > > > > > On Wed, Oct 28, 2020 at 7:05 PM Liang, Ma 
> > > > > > > > > <liang.j...@intel.com> wrote:
> > > > > > > > > >
> > > > > > > > > > Hi Thomas,
> > > > > > > > > >   I think I addressed all of the questions in relation to
> > > > > > > > > > V9. I don't think I can solve the issue of a generic API on
> > > > > > > > > > my own. From the Community Call last week, Jerin also said
> > > > > > > > > > that a generic API was investigated but that a single
> > > > > > > > > > solution wasn't feasible.
> > > > > > > > >
> > > > > > > > > I think, from the architecture point of view, the specific
> > > > > > > > > functionality of UMONITOR may not be abstractable.
> > > > > > > > > But from the ethdev callback point of view, can it be
> > > > > > > > > abstracted in such a way that packet notification is made
> > > > > > > > > available by the driver through checking the interrupt
> > > > > > > > > status register, a ring descriptor location, etc.? Use that
> > > > > > > > > callback as a notification mechanism rather than defining
> > > > > > > > > the memory-based scheme that UMONITOR expects? Or similar
> > > > > > > > > thoughts on abstraction.
> > > > > > >
> > > > > > > I think there is probably some sort of misunderstanding.
> > > > > > > This API is not about providing async notification when the
> > > > > > > next packet arrives.
> > > > > > > It is about putting the core to sleep till some event (or
> > > > > > > timeout) happens.
> > > > > > > From my perspective the closest analogy is cond_timedwait().
> > > > > > > So we need the PMD to tell us the address of the condition
> > > > > > > variable we should sleep on.
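> > > > > > >
> > > > > > > To make the analogy concrete, a minimal sketch of the intended
> > > > > > > flow (names here are illustrative, not the actual proposed API):
> > > > > > >
> > > > > > > volatile void *addr;
> > > > > > > uint64_t expected, mask;
> > > > > > > uint8_t data_sz;
> > > > > > >
> > > > > > > /* the PMD exposes the "condition variable": the address the HW
> > > > > > >  * will write to when the next descriptor arrives */
> > > > > > > get_wake_addr(rxq, &addr, &expected, &mask, &data_sz);
> > > > > > >
> > > > > > > /* the core sleeps until the value at addr (under mask) differs
> > > > > > >  * from the expected one, or the timeout expires -- just like
> > > > > > >  * cond_timedwait() */
> > > > > > > wait_on_addr(addr, expected, mask, data_sz, timeout);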
> > > > > > >
> > > > > > > > I agree with Jerin.
> > > > > > > > The ethdev API is the blocking problem.
> > > > > > > > First problem: it is not well explained in doxygen.
> > > > > > > > Second problem: it is probably not generic enough (if we 
> > > > > > > > understand it well)
> > > > > > >
> > > > > > > It is an address to sleep (/wake up) on, plus an expected value.
> > > > > > > Honestly, I can't think of anything more generic than that.
> > > > > > > If you guys have something particular in mind, please share.
> > > > > >
> > > > > > Current PMD callback:
> > > > > > typedef int (*eth_get_wake_addr_t)(void *rxq,
> > > > > >         volatile void **tail_desc_addr, uint64_t *expected,
> > > > > >         uint64_t *mask, uint8_t *data_sz);
> > > > > >
> > > > > > Can we make it
> > > > > >
> > > > > > typedef void (*core_sleep_t)(void *rxq);
> > > > > >
> > > > > > instead? If we do such an abstraction and move the "polling on
> > > > > > memory by HW/CPU" into the driver using a helper function, then
> > > > > > I can think of abstracting it in some way in all PMDs.
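> > > > > >
> > > > > > For illustration, a rough sketch of what such a callback could do
> > > > > > internally (hypothetical queue layout; the WAITPKG intrinsics
> > > > > > _umonitor()/_umwait() come from <immintrin.h> on recent compilers;
> > > > > > not a real driver):
> > > > > >
> > > > > > #include <stdint.h>
> > > > > > #include <immintrin.h>
> > > > > >
> > > > > > struct my_rxq {                 /* stand-in for the PMD's queue */
> > > > > >         volatile uint64_t *next_desc;   /* HW writes here next */
> > > > > > };
> > > > > >
> > > > > > static void my_core_sleep(void *rxq)    /* a core_sleep_t */
> > > > > > {
> > > > > >         struct my_rxq *q = rxq;
> > > > > >
> > > > > >         _umonitor((void *)q->next_desc);  /* arm the monitor */
> > > > > >         if (*q->next_desc == 0)  /* re-check: avoid lost wakeup */
> > > > > >                 _umwait(0, __rdtsc() + 10000);  /* TSC deadline */
> > > > > > }
> > > > > >
> > > > > > A PAUSE- or frequency-scaling-based PMD could implement the same
> > > > > > callback with its own scheme, which is the point of the
> > > > > > abstraction.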
> > > > >
> > > > > Ok, I see, thanks for the explanation.
> > > > > From my perspective the main disadvantage of such an approach is
> > > > > that it can't be extended easily.
> > > > > If/when we have the ability for a core to sleep/wake up on multiple
> > > > > events (multiple addresses), we will have to rework that API again.
> > > >
> > > > I think we can enumerate the policies and pass the associated
> > > > structures as input to the driver.
> > >
> > > What I am trying to say is: with that API we will not be able to wait
> > > for events from multiple devices (HW queues),
> > > i.e. something like this:
> > >
> > > get_wake_addr(port=X, ..., &addr[0], ...);
> > > get_wake_addr(port=Y, ..., &addr[1], ...);
> > > wait_on_multi(addr, 2);
> > >
> > > wouldn't be possible.
> >
> > I see. But the current implementation dictates that only one queue is
> > bound to a core. Right?
> The current implementation only supports a 1:1 queue/core mapping because
> of the limitation of umwait/umonitor, which cannot work with multiple
> address ranges. However, other schemes like PAUSE/Freq Scale have no such
> limitation.
> The proposed API itself doesn't impose the 1:1 queue/core mapping.

The PMD would not know whether it is a 1:1 queue/core mapping or some other
shared scheme, so the intelligence and decision making are best left to the
application.
I think the PMD and the underlying hardware do not need to know what kind
of power management scheme is implemented.
IMHO the original API, which provides the address, value and mask, should
suffice.
Any other callback or handshake between the PMD and the application may be
overkill.
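
For example, a minimal sketch with only that triple (cpu_has_waitpkg,
umwait_on() and read_val() are assumed helpers, not existing API):

volatile void *addr;
uint64_t expected, mask;
uint8_t data_sz;

/* the callback discussed in this thread */
get_wake_addr(rxq, &addr, &expected, &mask, &data_sz);

/* the scheme is chosen by the application, invisible to the PMD */
if (cpu_has_waitpkg)
        umwait_on(addr, expected, mask, data_sz);   /* UMONITOR/UMWAIT */
else
        while ((read_val(addr, data_sz) & mask) == expected)
                rte_pause();    /* plain PAUSE loop, or scale frequency down */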

> >
> >
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > Note: core_sleep_t can take some more arguments, such as an
> > > > > > enumerated policy, if something more needs to be pushed to the
> > > > > > driver.
> > > > > >
> > > > > > Thoughts?
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > > > This API is experimental, and other vendor support can
> > > > > > > > > > be added as needed. If there are any other open issues,
> > > > > > > > > > let me know.
> > > > > > > >
> > > > > > > > Being experimental is not an excuse to throw in something
> > > > > > > > which is not satisfactory.
> > > > > > > >
> > > > > > > >
> > > > > > >
