On 2024/9/27 17:21, Ilias Apalodimas wrote:
> Hi Yunsheng
> 
> On Fri, 27 Sept 2024 at 06:58, Yunsheng Lin <linyunsh...@huawei.com> wrote:
>>
>> On 2024/9/27 2:15, Mina Almasry wrote:
>>>
>>>> In order not to do the dma unmapping after the driver has already
>>>> unbound and stall the unloading of the networking driver, add
>>>> the pool->items array to record all the pages, including the ones
>>>> which are handed over to the network stack, so the page_pool can
>>>> do the dma unmapping for those pages when page_pool_destroy()
>>>> is called.
>>>
>>> One thing I could not understand from looking at the code: if the
>>> items array is in the struct page_pool, why do you need to modify the
>>> page_pool entry in the struct page and in the struct net_iov? I think
>>> the code could be made much simpler if you can remove these changes,
>>> and you wouldn't need to modify the public api of the page_pool.
>>
>> As mentioned in [1]:
>> "There is no space in 'struct page' to track the inflight pages, so
>> 'pp' in 'struct page' is renamed to 'pp_item' to enable the tracking
>> of inflight pages"
> 
> I have the same feeling as Mina here. First of all, we do have an
> unsigned long in struct page we use for padding IIRC. More

I am assuming you are referring to '_pp_mapping_pad' in 'struct page';
unfortunately, as I understand it, that field may be used when a page
is mmap'ed to user space:

https://elixir.bootlin.com/linux/v6.7-rc8/source/include/linux/mm_types.h#L126
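
For reference, the overlay looks roughly like this (abridged and
simplified from mm_types.h around v6.7, with most other union members
and fields elided): '_pp_mapping_pad' sits at the same offset as
page->mapping in the page cache/anon arm of the union, so a page_pool
page that ends up mmap'ed must keep that word sane, and the word
cannot be reclaimed for pool bookkeeping:

struct page {
	unsigned long flags;
	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;		/* 2 words */
			struct address_space *mapping;
			pgoff_t index;
			unsigned long private;
		};
		struct {	/* page_pool used by netstack */
			unsigned long pp_magic;		/* aliases lru.next */
			struct page_pool *pp;
			unsigned long _pp_mapping_pad;	/* aliases ->mapping */
			unsigned long dma_addr;
			atomic_long_t pp_frag_count;
		};
		/* ... other union members elided ... */
	};
	/* ... */
};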

> importantly, though, why does struct page need to know about this?
> Can't we have the same information in page pool?
> When the driver allocates pages it does so via page_pool_dev_alloc_XXXXX
> or something similar. Can't we do what you suggest here? IOW when we
> allocate a page we put it in a list, and when that page returns to
> page_pool (and it's mapped) we remove it.

Yes, that is the basic idea, but the hard part is doing that with as
little performance impact as possible, since the tracking has to touch
the alloc/free fast path.
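
To make the trade-off concrete, below is a minimal sketch of that
scheme, illustrative only and not the actual patch (the pp_track_*
and struct pp_item/pp_tracker names are made up): the pool keeps an
items[] array, allocation claims a slot and the page points back at
it through the renamed page->pp_item, release clears the slot, and
page_pool_destroy() can walk items[] to dma-unmap whatever is still
inflight. The naive lock plus linear scan is exactly where the
performance impact comes from:

#include <linux/spinlock.h>
#include <linux/mm_types.h>

struct pp_item {
	struct page *page;	/* NULL when the slot is free */
};

struct pp_tracker {
	spinlock_t lock;	/* naive: taken on every alloc/free */
	unsigned int nr;
	struct pp_item items[];	/* one slot per inflight page */
};

/* Alloc path: claim a free slot and let the page point back at it
 * (this is what renaming page->pp to page->pp_item enables).
 */
static struct pp_item *pp_track_claim(struct pp_tracker *t,
				      struct page *page)
{
	unsigned int i;

	spin_lock(&t->lock);
	for (i = 0; i < t->nr; i++) {
		if (!t->items[i].page) {
			t->items[i].page = page;
			spin_unlock(&t->lock);
			return &t->items[i];
		}
	}
	spin_unlock(&t->lock);
	return NULL;		/* no free slot left */
}

/* Free/recycle path: O(1) thanks to the back-pointer. */
static void pp_track_release(struct pp_tracker *t, struct pp_item *item)
{
	spin_lock(&t->lock);
	item->page = NULL;
	spin_unlock(&t->lock);
}

/* page_pool_destroy() can then walk items[] and dma-unmap any
 * still-inflight page before the device goes away.
 */

A real implementation would have to replace the lock and the linear
scan with something like a lockless free-slot list so the fast path
stays cheap; getting that part right is the whole point.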
