> >>
> >> On 22-Jun-21 10:13 AM, Ananyev, Konstantin wrote:
> >>>
> >>>> Currently, we expect that only one callback can be active at any given
> >>>> moment, for a particular queue configuration, which is relatively easy
> >>>> to implement in a thread-safe way. However, we're about to add support
> >>>> for multiple queues per lcore, which will greatly increase the
> >>>> possibility of various race conditions.
> >>>>
> >>>> We could have used something like an RCU for this use case, but absent
> >>>> a pressing need for thread safety we'll go the easy way and just
> >>>> mandate that the APIs are to be called when all affected ports are
> >>>> stopped, and document this limitation. This greatly simplifies the
> >>>> `rte_power_monitor`-related code.
> >>>
> >>> I think you need to update RN too with that.
> >>
> >> Yep, will fix.
> >>
> >>> Another thing - do you really need the whole port stopped?
> >>> From what I understand - you work on queues, so it is enough for you
> >>> that the related RX queue is stopped.
> >>> So, to make things a bit more robust, in pmgmt_queue_enable/disable
> >>> you can call rte_eth_rx_queue_info_get() and check the queue state.
> >>
> >> We work on queues, but the data is per-lcore, not per-queue, and it is
> >> potentially used by multiple queues, so checking one specific queue is
> >> not going to be enough. We could check all queues that were registered
> >> so far with the power library - maybe that'll work better?
> >
> > Yep, that's what I mean: on queue_enable(), check whether that queue is
> > stopped or not.
> > If not, return -EBUSY/-EAGAIN or so.
> > Sorry if I wasn't clear the first time.

I think it's still better that all queues are stopped, rather than
trying to work around the inherently racy implementation. So while I'll
add the queue-stopped checks, I'll still remove all of the thread safety
stuff from here.
That's fine by me; all I asked for here was an extra check to make sure
the queue is really stopped.