Introduce the page_pool_to_pp() API to avoid callers accessing
page->pp directly.
Signed-off-by: Yunsheng Lin
---
drivers/net/ethernet/freescale/fec_main.c | 8 +---
.../net/ethernet/google/gve/gve_buffer_mgmt_dqo.c | 4 ++--
drivers/net/ethernet/intel/iavf/iavf_txrx.c |
1. Target net-next tree instead of net tree.
2. Narrow the RCU lock as discussed in v2.
3. Check the unmapping cnt against the inflight cnt.
V2:
1. Add an item_full stat.
2. Use container_of() for page_pool_to_pp().
Yunsheng Lin (3):
page_pool: introduce page_pool_to_pp() API
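A minimal sketch of the container_of() approach mentioned in the changelog
above; the struct page_pool_item type and the page->pp_item / pp_idx fields
are illustrative assumptions here, not necessarily what the actual patch uses:

/* Sketch only: each page points at a per-page item embedded in an items[]
 * array inside struct page_pool; the owning pool is then recovered with
 * container_of() so callers never dereference page->pp directly.
 */
struct page_pool_item {
	unsigned long		state;		/* e.g. inflight tracking */
	unsigned int		pp_idx;		/* index in pool->items[] */
};

struct page_pool {
	/* ... existing fast-path and slow-path fields ... */
	struct page_pool_item	items[];
};

static inline struct page_pool *page_pool_to_pp(struct page *page)
{
	struct page_pool_item *item = page->pp_item;

	return container_of(item, struct page_pool, items[item->pp_idx]);
}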
On 2024/10/15 8:14, Jakub Kicinski wrote:
> On Sat, 12 Oct 2024 20:05:31 +0800 Yunsheng Lin wrote:
>> 1. Semantics change from supporting unlimited inflight pages
>>    to limited inflight pages, capped at the pool_size of the
>>    page_pool.
>
> How can this p
On 2024/9/25 15:57, Yunsheng Lin wrote:
> Patch 1 fixes a possible time window problem for page_pool.
> Patch 2 fixes the kernel crash problem at iommu_get_dma_domain
> reported in [1].
Hi all,
Through the discussions, it seems the main concerns are
as below:
1. Semantics ch
On 10/2/2024 3:37 PM, Paolo Abeni wrote:
Hi,
On 10/2/24 04:34, Yunsheng Lin wrote:
On 10/1/2024 9:32 PM, Paolo Abeni wrote:
Is the problem only tied to VF drivers? It's a pity all the page_pool
users will have to pay a bill for it...
I am afraid it is not only tied to VF drivers
On 10/1/2024 9:32 PM, Paolo Abeni wrote:
On 9/25/24 09:57, Yunsheng Lin wrote:
A networking driver with page_pool support may hand over a page
that still has a DMA mapping to the network stack and try to reuse that
page after the network stack is done with it and passes it back
to the page_pool, to avoid the penalty
On 2024/9/30 16:09, Ilias Apalodimas wrote:
> On Sun, 29 Sept 2024 at 05:44, Yunsheng Lin wrote:
>>
>> On 2024/9/28 15:34, Ilias Apalodimas wrote:
>>
>> ...
>>
>>>
>>> Yes, that wasn't very clear indeed, apologies for any confusion. I w
On 2024/9/28 15:34, Ilias Apalodimas wrote:
...
>
> Yes, that wasn't very clear indeed, apologies for any confusion. I was
> trying to ask about a linked list that only lives in struct page_pool.
> But I now realize this was a bad idea since the lookup would be way
> slower.
>
>> If I understand q
On 2024/9/27 17:58, Ilias Apalodimas wrote:
...
>>
>>> importantly, though, why does struct page need to know about this?
>>> Can't we have the same information in page pool?
>>> When the driver allocates pages it does so via page_pool_dev_alloc_X
>>> or something similar. Can't we do what you su
On 2024/9/27 17:21, Ilias Apalodimas wrote:
> Hi Yunsheng
>
> On Fri, 27 Sept 2024 at 06:58, Yunsheng Lin wrote:
>>
>> On 2024/9/27 2:15, Mina Almasry wrote:
>>>
>>>> In order not to do the DMA unmapping after the driver has already
>>>> unbound
adding Sumit & Christian & dma-buf maillist
On 2024/9/27 13:54, Mina Almasry wrote:
> On Thu, Sep 26, 2024 at 8:58 PM Yunsheng Lin wrote:
>>
>> On 2024/9/27 2:15, Mina Almasry wrote:
>>>
>>>> In order not to do the DMA unmapping after the driver has alr
On 2024/9/27 2:15, Mina Almasry wrote:
>
>> In order not to do the DMA unmapping after the driver has already
>> unbound, and not to stall the unloading of the networking driver, add
>> the pool->items array to record all the pages, including the ones
>> which are handed over to the network stack, so the page_poo
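The quoted paragraph describes recording every mapped page in the pool so
the mappings can be torn down before the device disappears. A hedged sketch
of that idea follows; the pool->items[] / pool->nr_items bookkeeping and the
helper name are assumptions, while dma_unmap_page_attrs() and
page_pool_get_dma_addr() are existing kernel APIs:

static void page_pool_unmap_inflight(struct page_pool *pool)
{
	unsigned int i;

	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
		return;

	for (i = 0; i < pool->nr_items; i++) {
		/* items[] is assumed to record every page the pool has
		 * mapped, including pages still held by the network stack.
		 */
		struct page *page = pool->items[i].page;

		if (!page)
			continue;

		dma_unmap_page_attrs(pool->p.dev,
				     page_pool_get_dma_addr(page),
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
	}
}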
...@kernel.org/T/
Fixes: f71fec47c2df ("page_pool: make sure struct device is stable")
Signed-off-by: Yunsheng Lin
CC: Robin Murphy
CC: Alexander Duyck
CC: IOMMU
---
drivers/net/ethernet/freescale/fec_main.c | 8 +-
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 6 +-
drivers/n
return rdmsrl_safe(MSR_IA32_PCM0, msr_result);
+#else
+ return 0;
+#endif
}
1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26...@kernel.org/T/
CC: Alexander Lobakin
CC: Robin Murphy
CC: Alexander Duyck
CC: IOMMU
Change log:
V2:
1. Add an item_full stat.
2. Use containe
On 2024/9/24 14:45, Gur Stavi wrote:
> With all the caching in the network stack, some pages may be
> held in the network stack without returning to the page_pool
> soon enough, and with a VF disable causing the driver to be unbound,
> the page_pool does not stop the driver from doing its
On 2024/9/19 18:54, Yunsheng Lin wrote:
> On 2024/9/19 1:06, Ilias Apalodimas wrote:
>> Hi Yunsheng,
>>
>> Thanks for looking into this!
>>
>> On Wed, 18 Sept 2024 at 14:24, Yunsheng Lin wrote:
>>>
>>> Networking driver with page_pool suppo
On 2024/9/19 17:42, Jesper Dangaard Brouer wrote:
>
> On 18/09/2024 19.06, Ilias Apalodimas wrote:
>>> In order not to do the DMA unmapping after the driver has already
>>> unbound and stall the unloading of the networking driver, add
>>> the pool->items array to record all the pages including the on
On 2024/9/19 1:06, Ilias Apalodimas wrote:
> Hi Yunsheng,
>
> Thanks for looking into this!
>
> On Wed, 18 Sept 2024 at 14:24, Yunsheng Lin wrote:
>>
>> A networking driver with page_pool support may hand over a page
>> that still has a DMA mapping to the network stack an
consider fixing the case for devmem yet.
1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26...@kernel.org/T/
Fixes: f71fec47c2df ("page_pool: make sure struct device is stable")
Signed-off-by: Yunsheng Lin
CC: Robin Murphy
CC: Alexander Duyck
CC: IOMMU
---
drivers/net/
Yunsheng Lin (2):
page_pool: fix timing for checking and disabling napi_local
On 2024/9/12 22:25, Mina Almasry wrote:
> On Thu, Sep 12, 2024 at 5:51 AM Yunsheng Lin wrote:
>>
>> A networking driver with page_pool support may hand over a page
>> that still has a DMA mapping to the network stack and try to reuse that
>> page after the network stack is done with
07fc6f26...@kernel.org/T/
Signed-off-by: Yunsheng Lin
CC: Robin Murphy
CC: Alexander Duyck
CC: IOMMU
---
drivers/net/ethernet/freescale/fec_main.c | 8 +-
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 6 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 14 +-
drivers/net/ethernet/intel/lib
Yunsheng Lin (2):
page_pool: fix timing for checking and disabling napi_local
page_pool: fix IOMMU crash when driver has already unbound
drivers/net/ethernet/freescale/fec_main.c | 8 +-
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 6 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c
On 2024/8/21 0:02, Alexander Duyck wrote:
> On Tue, Aug 20, 2024 at 6:07 AM Yunsheng Lin wrote:
>>
>> On 2024/8/19 23:54, Alexander Duyck wrote:
>>
>> ...
>>
>>>>>>
>>>>>> "There are three types of API as proposed in this
On 2024/8/19 23:54, Alexander Duyck wrote:
...
"There are three types of API as proposed in this patchset instead of
two types of API:
1. page_frag_alloc_va() returns [va].
2. page_frag_alloc_pg() returns [page, offset].
3. page_frag_alloc() returns [va] & [page, offset].
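A rough sketch of what the three flavors quoted above could look like as
prototypes; the parameter lists are assumed for illustration and may differ
from the actual patchset:

/* 1. caller only needs a virtual address */
void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
			 gfp_t gfp);

/* 2. caller only needs [page, offset] */
struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
				unsigned int *offset, unsigned int fragsz,
				gfp_t gfp);

/* 3. caller needs both [va] and [page, offset] */
struct page *page_frag_alloc(struct page_frag_cache *nc,
			     unsigned int *offset, unsigned int fragsz,
			     gfp_t gfp, void **va);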
On 2024/8/15 23:00, Alexander Duyck wrote:
> On Wed, Aug 14, 2024 at 8:00 PM Yunsheng Lin wrote:
>>
>> On 2024/8/14 23:49, Alexander H Duyck wrote:
>>> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>>>> Currently the page_frag API is returning 'vi
On 2024/8/14 23:49, Alexander H Duyck wrote:
> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>> Currently the page_frag API is returning 'virtual address'
>> or 'va' when allocating and expecting 'virtual address' or
>> 'va' as
h
va, page or both va and page may call page_frag_alloc_va*,
page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Subbaraya Sundeep
Acked-by: Chuck Lever
Acked-by: Sagi Grimberg
---
drivers/net/ethernet/google/gve/gve_rx.c
On 2024/8/6 8:52, Alexander Duyck wrote:
> On Sun, Aug 4, 2024 at 10:00 AM Yunsheng Lin
> wrote:
>>
>> On 8/3/2024 1:00 AM, Alexander Duyck wrote:
>>
>>>>
>>>>>
>>>>> As far as your API extension and naming maybe you should look
On 8/3/2024 1:00 AM, Alexander Duyck wrote:
As far as your API extension and naming, maybe you should look at
something like bio_vec and borrow the naming from that, since that is
essentially what you are passing back and forth,
instead of a page frag which is normally a vi
On 2024/8/1 23:21, Alexander Duyck wrote:
> On Thu, Aug 1, 2024 at 6:01 AM Yunsheng Lin wrote:
>>
>> On 2024/8/1 2:13, Alexander Duyck wrote:
>>> On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin wrote:
>>>>
>>>> Currently the page_frag API is return
On 2024/8/1 2:13, Alexander Duyck wrote:
> On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin wrote:
>>
>> Currently the page_frag API is returning 'virtual address'
>> or 'va' when allocating and expecting 'virtual address' or
>> 'va' as
h
va, page or both va and page may call page_frag_alloc_va*,
page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Subbaraya Sundeep
---
drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
drivers/net/ethernet/intel/ice/ice_t
On 2024/7/22 4:41, Alexander Duyck wrote:
> On Fri, Jul 19, 2024 at 2:37 AM Yunsheng Lin wrote:
>>
>> Currently the page_frag API is returning 'virtual address'
>> or 'va' when allocating and expecting 'virtual address' or
>> 'va
h
va, page or both va and page may call page_frag_alloc_va*,
page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
drive
On 2024/4/17 0:12, Alexander H Duyck wrote:
> On Mon, 2024-04-15 at 21:19 +0800, Yunsheng Lin wrote:
>> Currently most of the page_frag API is returning
>> 'virtual address' as output or expecting 'virtual address'
>> as input, in order to different
ding API mirroring the page_pool_alloc_va() API of
the page_pool.
Signed-off-by: Yunsheng Lin
---
drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
drivers/net/ethernet/intel/
On 2023/12/8 17:28, Yunsheng Lin wrote:
>> +
>> +page_pool_dma_sync_for_cpu(page->pp, page, buf->offset, len);
>
> Is there a reason why page_pool_dma_sync_for_cpu() is still used when
> page_pool_create() is called with PP_FLAG_DMA_SYNC_DEV flag? Isn't syncin
On 2023/12/8 1:20, Alexander Lobakin wrote:
...
> +
> +/**
> + * libie_rx_page_pool_create - create a PP with the default libie settings
> + * @bq: buffer queue struct to fill
> + * @napi: &napi_struct covering this PP (no usage outside its poll loops)
> + *
> + * Return: 0 on success, -errno on f
On 2023/11/30 19:58, Alexander Lobakin wrote:
> From: Yunsheng Lin
> Date: Thu, 30 Nov 2023 16:46:11 +0800
>
>> On 2023/11/29 21:17, Alexander Lobakin wrote:
>>> From: Yunsheng Lin
>>> Date: Wed, 29 Nov 2023 11:17:50 +0800
>>>
>>>> On 2023/1
On 2023/11/29 21:17, Alexander Lobakin wrote:
> From: Yunsheng Lin
> Date: Wed, 29 Nov 2023 11:17:50 +0800
>
>> On 2023/11/27 22:32, Alexander Lobakin wrote:
>>>
>>> Chris, any thoughts on a global flag for skipping DMA syncs ladder?
>>
>> It seems t
On 2023/11/27 22:32, Alexander Lobakin wrote:
>
> Chris, any thoughts on a global flag for skipping DMA syncs ladder?
It seems there was one already in the past:
https://lore.kernel.org/netdev/7c55a4d7-b4aa-25d4-1917-f6f355bd7...@arm.com/T/
>
>>
>>
>>> +static inline bool page_pool_set_dma_ad
On 2023/11/27 22:08, Alexander Lobakin wrote:
> From: Yunsheng Lin
> Date: Sat, 25 Nov 2023 20:29:22 +0800
>
>> On 2023/11/24 23:47, Alexander Lobakin wrote:
>>> After commit 5027ec19f104 ("net: page_pool: split the page_pool_params
>>> into fast and slow")
pool::dma_sync is not set, i.e. the driver didn't ask to
> perform syncs, don't do this test and never touch the lowest bit.
> On my x86_64, this gives a 2% to 5% performance benefit with no
> negative impact for cases when IOMMU is on and the shortcut can't be
> use
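A hedged sketch of the shortcut being described, assuming the "needs sync"
hint lives in the lowest bit of the stored DMA address and the pool carries a
dma_sync flag set at creation time; dma_need_sync() is the existing DMA API,
while the helper names and the bit name here are illustrative:

#define PP_DMA_NEED_SYNC_BIT	BIT(0)

static inline unsigned long pp_encode_dma_addr(const struct page_pool *pool,
					       dma_addr_t addr)
{
	unsigned long val = (unsigned long)addr;

	/* Only probe dma_need_sync() and set the hint bit when the driver
	 * asked for device syncs; otherwise the lowest bit is never touched.
	 */
	if (pool->dma_sync && dma_need_sync(pool->p.dev, addr))
		val |= PP_DMA_NEED_SYNC_BIT;

	return val;
}

static inline bool pp_dma_addr_needs_sync(const struct page_pool *pool,
					  unsigned long val)
{
	return pool->dma_sync && (val & PP_DMA_NEED_SYNC_BIT);
}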
On 2023/11/24 23:47, Alexander Lobakin wrote:
> After commit 5027ec19f104 ("net: page_pool: split the page_pool_params
> into fast and slow") that made &page_pool contain only "hot" params at
> the start, the cacheline boundary chops the frag API fields group in the middle
> again.
> To not bother with thi
PP_FLAG_PAGE_FRAG is not really needed after pp_frag_count
handling is unified and page_pool_alloc_frag() is supported
on 32-bit arches with 64-bit DMA, so remove it.
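With the flag gone, frag allocation works against an ordinary pool. A minimal
usage sketch, where 'dev' and the sizes are placeholders:

struct page_pool_params pp_params = {
	.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
	.order		= 0,
	.pool_size	= 256,
	.nid		= NUMA_NO_NODE,
	.dev		= dev,
	.dma_dir	= DMA_FROM_DEVICE,
	.max_len	= PAGE_SIZE,
	.offset		= 0,
};
struct page_pool *pool = page_pool_create(&pp_params);
unsigned int offset;
struct page *page;

if (IS_ERR(pool))
	return PTR_ERR(pool);

/* No PP_FLAG_PAGE_FRAG needed to get a 2KB frag from the pool. */
page = page_pool_alloc_frag(pool, &offset, 2048, GFP_ATOMIC);
if (!page)
	return -ENOMEM;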
Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
CC: Liang Chen
CC: Alexander Lobakin
---
drivers/net