On 2024/9/30 16:09, Ilias Apalodimas wrote:
> On Sun, 29 Sept 2024 at 05:44, Yunsheng Lin <linyunsh...@huawei.com> wrote:
>>
>> On 2024/9/28 15:34, Ilias Apalodimas wrote:
>>
>> ...
>>
>>>
>>> Yes, that wasn't very clear indeed, apologies for any confusion. I was
>>> trying to ask about a linked list that only lives in struct page_pool.
>>> But I now realize this was a bad idea since the lookup would be way
>>> slower.
>>>
>>>> If I understand the question correctly, a single or doubly linked list
>>>> is more costly than an array for the page_pool case.
>>>>
>>>> For a singly linked list, it doesn't allow deleting a specific entry,
>>>> only deleting the first entry or all the entries. It does support
>>>> lockless operation using llist, but has the limitation noted below:
>>>> https://elixir.bootlin.com/linux/v6.7-rc8/source/include/linux/llist.h#L13
>>>>
>>>> For a doubly linked list, it needs two pointers to support deleting a
>>>> specific entry, and it does not support lockless operation.
>>>
>>> I didn't look at the patch too carefully at first. Looking a bit
>>> closer now, the array is indeed better, since the lookup is faster.
>>> You just need the stored index in struct page to find the page we need
>>> to unmap. Do you remember if we can reduce the atomic pp_ref_count to
>>> 32 bits? If so we can reuse that space for the index. Looking at it
>>
>> For a 64-bit system, yes, we can reuse that.
>> But for a 32-bit system, we may have only 16 bits for each of them, and
>> it seems that there is no atomic operation for a variable smaller than
>> 32 bits.
>>
>>> requires a bit more work in netmem, but that's mostly swapping all the
>>> atomic64 calls to atomic ones.
>>>
>>>>
>>>> For pool->items, the alloc side is protected by the NAPI context and
>>>> the free side uses item->pp_idx to ensure there is only one producer
>>>> for each item, so each item in pool->items has exactly one consumer
>>>> and one producer. This is much like the case when the page is not
>>>> recyclable in __page_pool_put_page(): we don't need lock protection
>>>> when calling page_pool_return_page(), since the 'struct page' also has
>>>> one consumer and one producer, just as pool->items[item->pp_idx] does:
>>>> https://elixir.bootlin.com/linux/v6.7-rc8/source/net/core/page_pool.c#L645
>>>>
>>>> We only need lock protection when page_pool_destroy() is called to
>>>> check whether there are inflight pages to be unmapped as one consumer,
>>>> while __page_pool_put_page() may also be called to unmap an inflight
>>>> page as another consumer.
>>>
>>> Thanks for the explanation. On the locking side, page_pool_destroy is
>>> called once from the driver and then it's either the workqueue for
>>> inflight packets or an SKB that got freed and tried to recycle, right?
>>> But do we still need to do all the unmapping etc. from the delayed
>>> work? Since the new function will unmap all packets in
>>> page_pool_destroy, we can just skip unmapping when the delayed work
>>> runs.
>>
>> Yes, pool->dma_map is cleared in page_pool_item_uninit() after it
>> unmaps all inflight pages under the protection of pool->destroy_lock,
>> so the unmapping is skipped in page_pool_return_page() when those
>> inflight pages are returned back to the page_pool.
>
> Ah yes, the entire destruction path is protected, which seems correct.
> Instead of that WARN_ONCE in page_pool_item_uninit(), can we instead
> check the number of inflight packets vs what we just unmapped? IOW,
> check 'mask' against what page_pool_inflight() gives you and warn if
> those aren't equal.

Yes, it seems quite normal to trigger that warning during testing, so it
makes sense to check the unmapped count against page_pool_inflight() to
catch bugs in tracking/calculating inflight pages.
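
For illustration, a rough sketch of what that consistency check could look
like inside the proposed page_pool_item_uninit(). Everything below is an
assumption drawn from this thread, not the actual patch: the 'unmapped'
counter, the shape of the item walk, how pool->destroy_lock is taken and
the page_pool_inflight() signature are all guesses.

/* Hypothetical sketch based on this discussion, not the real patch. */
static void page_pool_item_uninit(struct page_pool *pool)
{
	int unmapped = 0;
	s32 inflight;

	/* take pool->destroy_lock here (its exact type comes from the patch) */

	/* Walk pool->items[] and DMA-unmap every entry that is still
	 * inflight, counting how many entries were actually unmapped
	 * (the walk itself is omitted in this sketch).
	 */

	/* Clearing dma_map makes page_pool_return_page() skip the unmap
	 * when those inflight pages are eventually returned.
	 */
	pool->dma_map = false;

	/* Warn only when the unmapped count disagrees with the existing
	 * inflight accounting, instead of warning whenever inflight pages
	 * exist (which is normal during testing).
	 */
	inflight = page_pool_inflight(pool, false);
	WARN_ONCE(unmapped != inflight,
		  "page_pool: unmapped %d pages but %d reported inflight\n",
		  unmapped, inflight);

	/* release pool->destroy_lock here */
}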
>
> Thanks
> /Ilias
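
As a footnote to the pp_ref_count discussion above, a rough illustration of
the split being considered; the struct and field names below are made up
for illustration only, and only pp_ref_count corresponds to an existing
field.

/* Illustrative only -- not actual kernel structures. */

/* On a 64-bit system, the space currently used by the 64-bit
 * pp_ref_count could be split into a 32-bit atomic refcount plus a
 * 32-bit index into pool->items[], since 32-bit atomic_t helpers exist:
 */
struct pp_frag_meta {
	atomic_t pp_ref_count;	/* reduced from 64 bits to 32 bits */
	u32	 pp_item_idx;	/* index used to find the item to unmap */
};

/* On a 32-bit system the same split would leave only 16 bits for each
 * field, and the kernel has no atomic helpers for sub-32-bit variables,
 * so the trick does not work there.
 */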