Re: [PATCH net-next v5 7/7] af_packet: use sk_skb_reason_drop to free rx packets

2024-06-18 Thread Jesper Dangaard Brouer
zed sk, added missing report tags. --- net/packet/af_packet.c | 10 +- 1 file changed, 5 insertions(+), 5 deletions(-) Acked-by: Jesper Dangaard Brouer

Re: [PATCH net-next v5 6/7] udp: use sk_skb_reason_drop to free rx packets

2024-06-18 Thread Jesper Dangaard Brouer
ort tags --- net/ipv4/udp.c | 10 +- net/ipv6/udp.c | 10 +- 2 files changed, 10 insertions(+), 10 deletions(-) Acked-by: Jesper Dangaard Brouer

Re: [PATCH net-next v5 5/7] tcp: use sk_skb_reason_drop to free rx packets

2024-06-18 Thread Jesper Dangaard Brouer
ort tags --- net/ipv4/syncookies.c | 2 +- net/ipv4/tcp_input.c | 2 +- net/ipv4/tcp_ipv4.c | 6 +++--- net/ipv6/syncookies.c | 2 +- net/ipv6/tcp_ipv6.c | 6 +++--- 5 files changed, 9 insertions(+), 9 deletions(-) Acked-by: Jesper Dangaard Brouer

Re: [PATCH net-next v5 4/7] net: raw: use sk_skb_reason_drop to free rx packets

2024-06-18 Thread Jesper Dangaard Brouer
Dangaard Brouer

Re: [PATCH net-next v5 3/7] ping: use sk_skb_reason_drop to free rx packets

2024-06-18 Thread Jesper Dangaard Brouer
On 17/06/2024 20.09, Yan Zhai wrote: Replace kfree_skb_reason with sk_skb_reason_drop and pass the receiving socket to the tracepoint. Signed-off-by: Yan Zhai --- net/ipv4/ping.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Acked-by: Jesper Dangaard Brouer
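The change in this series is mechanical at each call site: `kfree_skb_reason(skb, reason)` becomes `sk_skb_reason_drop(sk, skb, reason)`, so the receiving socket reaches the drop tracepoint. A minimal userspace sketch of that shape (all types, names, and the tracepoint hook below are stand-ins, not the kernel definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in types: in the kernel these are struct sock, struct sk_buff,
 * and enum skb_drop_reason. */
struct sock { int id; };
struct sk_buff { int len; };
enum skb_drop_reason { SKB_DROP_REASON_NO_SOCKET = 1 };

/* Captured by the fake tracepoint so the effect can be inspected. */
static const struct sock *last_rx_sk;
static enum skb_drop_reason last_reason;

static void trace_kfree_skb(struct sk_buff *skb, enum skb_drop_reason reason,
                            const struct sock *rx_sk)
{
    last_rx_sk = rx_sk;   /* the new argument: receiving socket (may be NULL) */
    last_reason = reason;
    (void)skb;
}

/* Old call shape: drop reason only, the rx socket is lost. */
static void kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
{
    trace_kfree_skb(skb, reason, NULL);
    free(skb);
}

/* New call shape: same drop, but the rx socket reaches the tracepoint. */
static void sk_skb_reason_drop(struct sock *sk, struct sk_buff *skb,
                               enum skb_drop_reason reason)
{
    trace_kfree_skb(skb, reason, sk);
    free(skb);
}
```

The point of the series is only the extra `sk` argument; drop accounting is otherwise unchanged.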

Re: [PATCH net-next v5 2/7] net: introduce sk_skb_reason_drop function

2024-06-18 Thread Jesper Dangaard Brouer
s(-) Acked-by: Jesper Dangaard Brouer

Re: [PATCH net-next v5 1/7] net: add rx_sk to trace_kfree_skb

2024-06-18 Thread Jesper Dangaard Brouer
often identifies a local sender, and tells nothing about a receiver. Allow passing an extra receiving socket to the tracepoint to improve the visibility on receiving drops. Signed-off-by: Yan Zhai --- v4->v5: rename rx_skaddr -> rx_sk as Jesper Dangaard Brouer suggested v3->v4: adjusted the

Re: [PATCH v4 net-next 1/7] net: add rx_sk to trace_kfree_skb

2024-06-12 Thread Jesper Dangaard Brouer
On 11/06/2024 22.11, Yan Zhai wrote: skb does not include enough information to find out receiving sockets/services and netns/containers on packet drops. In theory skb->dev tells about netns, but it can get cleared/reused, e.g. by TCP stack for OOO packet lookup. Similarly, skb->sk often

Re: [PATCH net-next v9] virtio_net: Support RX hash XDP hint

2024-04-17 Thread Jesper Dangaard Brouer
-by: Jesper Dangaard Brouer

Re: [PATCH net-next v7] virtio_net: Support RX hash XDP hint

2024-04-15 Thread Jesper Dangaard Brouer
On 13/04/2024 06.10, Liang Chen wrote: The RSS hash report is a feature that's part of the virtio specification. Currently, virtio backends like qemu, vdpa (mlx5), and potentially vhost (still a work in progress as per [1]) support this feature. While the capability to obtain the RSS hash has

Re: [PATCH net-next v5] virtio_net: Support RX hash XDP hint

2024-02-02 Thread Jesper Dangaard Brouer
On 02/02/2024 13.11, Liang Chen wrote: The RSS hash report is a feature that's part of the virtio specification. Currently, virtio backends like qemu, vdpa (mlx5), and potentially vhost (still a work in progress as per [1]) support this feature. While the capability to obtain the RSS hash has

Re: [PATCH net-next v3 2/5] mm: add a signature in struct page

2021-04-19 Thread Jesper Dangaard Brouer
On Wed, 14 Apr 2021 13:09:47 -0700 Shakeel Butt wrote: > On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer > wrote: > > > [...] > > > > > > > > Can this page_pool be used for TCP RX zerocopy? If yes then PageType > > > > can not

Re: [PATCH 1/2] mm: Fix struct page layout on 32-bit systems

2021-04-17 Thread Jesper Dangaard Brouer
ential problem where (on a big endian platform), the bit used to denote > PageTail could inadvertently get set, and a racing get_user_pages_fast() > could dereference a bogus compound_head(). > > Fixes: c25fff7171be ("mm: add dma_addr_t to struct page") > Signed-off-by: Ma

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-16 Thread Jesper Dangaard Brouer
On Fri, 16 Apr 2021 16:27:55 +0100 Matthew Wilcox wrote: > On Thu, Apr 15, 2021 at 08:08:32PM +0200, Jesper Dangaard Brouer wrote: > > See below patch. Where I swap32 the dma address to satisfy > > page->compound having bit zero cleared. (It is the simplest fix I

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-15 Thread Jesper Dangaard Brouer
On Wed, 14 Apr 2021 21:56:39 + David Laight wrote: > From: Matthew Wilcox > > Sent: 14 April 2021 22:36 > > > > On Wed, Apr 14, 2021 at 09:13:22PM +0200, Jesper Dangaard Brouer wrote: > > > (If others want to reproduce). First I could not reproduce on

Re: [PATCH net v3] i40e: fix the panic when running bpf in xdpdrv mode

2021-04-15 Thread Jesper Dangaard Brouer
("i40e: main driver core") > > Co-developed-by: Shujin Li > > Signed-off-by: Shujin Li > > Signed-off-by: Jason Xing > > Reviewed-by: Jesse Brandeburg > > @Jakub/@DaveM - feel free to apply this directly. Acked-by: Jesper Danga

Re: [PATCH net-next v3 2/5] mm: add a signature in struct page

2021-04-14 Thread Jesper Dangaard Brouer
t this code path for (TCP RX zerocopy) uses page->private for tricks. And our patch [3/5] uses page->private for storing xdp_mem_info. IMHO when the SKB travels into this TCP RX zerocopy code path, we should call page_pool_release_page() to release its DMA-mapping. > > [1] > > https://lore.kernel.org/linux-mm/20210316013003.25271-1-arjunroy.k...@gmail.com/ > > -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-14 Thread Jesper Dangaard Brouer
arm was needed to cause the issue by enabling CONFIG_ARCH_DMA_ADDR_T_64BIT. Details below signature. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer From file: arch/arm/Kconfig config XEN bool "Xen

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-14 Thread Jesper Dangaard Brouer
work.kernel.org/project/netdevbpf/patch/20210409223801.104657-3-mcr...@linux.microsoft.com/ [3] https://lore.kernel.org/linux-mm/20210410024313.gx2531...@casper.infradead.org/

Re: [PATCH net-next v3 2/5] mm: add a signature in struct page

2021-04-11 Thread Jesper Dangaard Brouer
. I still worry about page->index, see [2]. [2] https://lore.kernel.org/netdev/2021044307.5087f958@carbon/

Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems

2021-04-11 Thread Jesper Dangaard Brouer
I worry about @index. As I mentioned in other thread[1] netstack use page_is_pfmemalloc() (code copy-pasted below signature) which imply that the member @index have to be kept intact. In above, I'm unsure @index is untouched. [1] https://lore.kernel.org/lkml/20210410082158.79ad09a6@carbon/ -- Best regards,

Re: Bogus struct page layout on 32-bit

2021-04-10 Thread Jesper Dangaard Brouer
ed__(4))); > > This presumably affects any 32-bit architecture with a 64-bit phys_addr_t > / dma_addr_t. Advice, please? I'm not sure what the 32-bit behavior is with 64-bit (dma) addrs. I don't have any 32-bit boards with 64-bit DMA. Cc. Ivan, wasn't your board (572x?) 32-bit with the 'cpsw' driver in this case (where Ivan added XDP+page_pool)?

Re: [PATCH net-next v2 3/5] page_pool: Allow drivers to hint on SKB recycling

2021-04-09 Thread Jesper Dangaard Brouer
On Fri, 9 Apr 2021 22:01:51 +0300 Ilias Apalodimas wrote: > On Fri, Apr 09, 2021 at 11:56:48AM -0700, Jakub Kicinski wrote: > > On Fri, 2 Apr 2021 20:17:31 +0200 Matteo Croce wrote: > > > Co-developed-by: Jesper Dangaard Brouer > > > Co-developed-by: Matteo Croce

Re: [PATCH net v2 1/1] xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model

2021-03-31 Thread Jesper Dangaard Brouer
PU: 0 PID: 3884 Comm: modprobe Tainted: G U E > 5.12.0-rc2+ #45 > > Changes in v2: > - This patch fixes the issue by making xdp_return_frame_no_direct() is >only called if napi_direct = true, as recommended for better by >Jesper Dangaard Brouer. Thanks! >

Re: [RFC PATCH 0/6] Use local_lock for pcp protection and reduce stat overhead

2021-03-31 Thread Jesper Dangaard Brouer
On Wed, 31 Mar 2021 08:38:05 +0100 Mel Gorman wrote: > On Tue, Mar 30, 2021 at 08:51:54PM +0200, Jesper Dangaard Brouer wrote: > > On Mon, 29 Mar 2021 13:06:42 +0100 > > Mel Gorman wrote: > > > > > This series requires patches in Andrew's tree so the

Re: [RFC PATCH 0/6] Use local_lock for pcp protection and reduce stat overhead

2021-03-30 Thread Jesper Dangaard Brouer
(But as performance is the same or slightly better, I will not complain). > drivers/base/node.c| 18 +-- > include/linux/mmzone.h | 29 +++-- > include/linux/vmstat.h | 65 ++- > mm/mempolicy.c | 2 +- > mm/page_alloc.c| 173 ++++

Re: [PATCH net-next v2 0/6] stmmac: Add XDP support

2021-03-30 Thread Jesper Dangaard Brouer
) I'm interested in playing with the hardware's Split Header (SPH) feature, as this was one of the use-cases for the XDP multi-frame work.

Re: [PATCH net 1/1] xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model

2021-03-29 Thread Jesper Dangaard Brouer
for disabling napi_direct of > xdp_return_frame") > Signed-off-by: Ong Boon Leong > --- This looks correct to me. Acked-by: Jesper Dangaard Brouer > net/core/xdp.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/net/core/xdp.c b/net/core/xdp.

[PATCH mel-git 2/3] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-24 Thread Jesper Dangaard Brouer
/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Mel Gorman --- net/core/page_pool.c | 72 -- 1 file changed, 46 insertions(+), 26 deletions(-) diff --git a/net/core

[PATCH mel-git 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-03-24 Thread Jesper Dangaard Brouer
In preparation for next patch, move the dma mapping into its own function, as this will make it easier to follow the changes. V2: make page_pool_dma_map return boolean (Ilias) Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Mel Gorman Reviewed-by: Ilias Apalodimas --- net/core

[PATCH mel-git 3/3] net: page_pool: convert to use alloc_pages_bulk_array variant

2021-03-24 Thread Jesper Dangaard Brouer
Using the API variant alloc_pages_bulk_array from page_pool was done in a separate patch to ease benchmarking the variants separately. Maintainers can squash patch if preferred. Signed-off-by: Jesper Dangaard Brouer --- include/net/page_pool.h |2 +- net/core/page_pool.c| 22

[PATCH mel-git 0/3] page_pool using alloc_pages_bulk API

2021-03-24 Thread Jesper Dangaard Brouer
20200408 (Red Hat 9.3.1-2) Intent is for Mel to pickup these patches. --- Jesper Dangaard Brouer (3): net: page_pool: refactor dma_map into own function page_pool_dma_map net: page_pool: use alloc_pages_bulk in refill code path net: page_pool: convert to use alloc_pages_bulk_array

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-23 Thread Jesper Dangaard Brouer
On Tue, 23 Mar 2021 16:08:14 +0100 Jesper Dangaard Brouer wrote: > On Tue, 23 Mar 2021 10:44:21 + > Mel Gorman wrote: > > > On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote: > > > This series is based on top of Matthew Wilcox's series "Rationalis

Re: [PATCH net-next 0/6] page_pool: recycle buffers

2021-03-23 Thread Jesper Dangaard Brouer
138 insertions(+), 26 deletions(-) > > > > Just for the reference, I've performed some tests on 1G SoC NIC with > > this patchset on, here's direct link: [0] > > > > Thanks for the testing! > Any chance you can get a perf measurement on this? I guess you mean perf-report (--stdio) output, right? > Is DMA syncing taking a substantial amount of your cpu usage? (+1 this is an important question) > > > > [0] https://lore.kernel.org/netdev/20210323153550.130385-1-aloba...@pm.me

Re: [PATCH 2/3] mm/page_alloc: Add a bulk page allocator

2021-03-23 Thread Jesper Dangaard Brouer
iler to uninline the static function. My tests show you should inline __rmqueue_pcplist(). See patch I'm using below signature, which also have some benchmark notes. (Please squash it into your patch and drop these notes). -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-23 Thread Jesper Dangaard Brouer
ck. I will rebase and check again. In the current performance tests that I'm running, I observe that the compiler lays out the code in unfortunate ways, which causes I-cache performance issues. I wonder if you could integrate the below patch with your patchset? (just squash it)

Re: [PATCH net-next 6/6] mvneta: recycle buffers

2021-03-23 Thread Jesper Dangaard Brouer
>rxq->mem); > } > > return skb; This cause skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple times, for the same SKB. (copy-pasted function below signature to help reviewers). This makes me question if we need an API for setting this per page f

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-23 Thread Jesper Dangaard Brouer
t wrote into this cache-line. As the bulk size goes up, as Matthew pointed out, this cache-line might be pushed into the L2-cache, and then needs to be accessed again when prep_new_page() is called. Another observation is that moving prep_new_page() into the loop reduced the function size by 253 bytes (which a

Re: [PATCH net-next] page_pool: let the compiler optimize and inline core functions

2021-03-23 Thread Jesper Dangaard Brouer
ool *pool) > { > struct ptr_ring *r = &pool->ring; > @@ -181,7 +180,6 @@ static void page_pool_dma_sync_for_device(struct > page_pool *pool, > } > > /* slow path */ > -noinline > static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, >

Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

2021-03-22 Thread Jesper Dangaard Brouer
lated to the stats counters got added/moved inside the loop, in this patchset. Previous results from: https://lore.kernel.org/netdev/20210319181031.44dd3113@carbon/ On Fri, 19 Mar 2021 18:10:31 +0100 Jesper Dangaard Brouer wrote: > BASELINE > single_page alloc+put: 207 cy

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-19 Thread Jesper Dangaard Brouer
list */ > + if (page_list) { > + list_for_each_entry(page, page_list, lru) { > + prep_new_page(page, 0, gfp, 0); > + } > + } > > return allocated; > > @@ -5056,7 +5086,10 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid, >

Re: [PATCH 0/7 v4] Introduce a bulk order-0 page allocator with two in-tree users

2021-03-17 Thread Jesper Dangaard Brouer
On Wed, 17 Mar 2021 16:52:32 + Alexander Lobakin wrote: > From: Jesper Dangaard Brouer > Date: Wed, 17 Mar 2021 17:38:44 +0100 > > > On Wed, 17 Mar 2021 16:31:07 + > > Alexander Lobakin wrote: > > > > > From: Mel Gorman > > > Date: F

Re: [PATCH 0/7 v4] Introduce a bulk order-0 page allocator with two in-tree users

2021-03-17 Thread Jesper Dangaard Brouer
he sunrpc and page_pool pre-requisites (patches 4 and 6) > > directly to the subsystem maintainers. While sunrpc is low-risk, I'm > > vaguely aware that there are other prototype series on netdev that affect > > page_pool. The conflict should be obvious in linux-next.

[PATCH mel-git] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-15 Thread Jesper Dangaard Brouer
(3,810,013 pps -> 4,308,208 pps). [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Mel Gorman --- net/core/page_pool.c | 73 -- 1 file chan

[PATCH mel-git] Followup: Update [PATCH 7/7] in Mel's series

2021-03-15 Thread Jesper Dangaard Brouer
18% before, but I don't think the rewrite of the specific patch have anything to do with this. Notes on tests: https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org#test-on-mel-git-tree --- Jesper Dangaard Brouer (1): net: page_pool:

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-15 Thread Jesper Dangaard Brouer
ool use-case doesn't have a sparse array to populate (like NFS/SUNRPC) then I can still use this API that Chuck is suggesting. Thus, I'm fine with this :-) (p.s. working on implementing Alexander Duyck's suggestions, but I don't have it ready yet, I will try to send new patch tomorrow. And I do r

Re: [PATCH 7/7] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-15 Thread Jesper Dangaard Brouer
unmap the page before you call > > put_page on it? > > Oops, I completely missed that. Alexander is right here. Well, the put_page() case can never happen as the pool->alloc.cache[] is known to be empty when this function is called. I do agree that the code looks cumbersome and should free the DMA mapping, if it could happen.

Re: [PATCH 7/7] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-15 Thread Jesper Dangaard Brouer
x. He's more > familiar with this particular code and can verify the performance is > still ok for high speed networks. Yes, I'll take a look at this, and update the patch accordingly (and re-run the performance tests).

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-12 Thread Jesper Dangaard Brouer
ightly in different parts of the kernel. I started in the networking area of the kernel, and I was also surprised when I started working in the MM area that the coding style differs. I can tell you that the indentation style Mel chose is consistent with the code styling in the MM area. I usually respect that even though I prefer the networking style, as I was "raised" with that style.

Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

2021-03-12 Thread Jesper Dangaard Brouer
ia LRU (page->lru member). If you are planning to use llist, then how to handle this API change later? Have you noticed that the two users store the struct-page pointers in an array? We could have the caller provide the array to store struct-page pointers, like we do with the kmem_cache_alloc_bulk API. You likely have good reasons for returning the pages as a list (via lru), as I can see/imagine that there is some potential for grabbing the entire PCP-list. > > > + list_add(&page->lru, alloc_list); > > > + alloced++; > > > + } > > > + > > > + if (!alloced) > > > + goto failed_irq; > > > + > > > + if (alloced) { > > > + __count_zid_vm_events(PGALLOC, zone_idx(zone), > > > alloced); > > > + zone_statistics(zone, zone); > > > + } > > > + > > > + local_irq_restore(flags); > > > + > > > + return alloced; > > > + > > > +failed_irq: > > > + local_irq_restore(flags); > > > + > > > +failed: > > > > Might we need some counter to show how often this path happens? > > > > I think that would be overkill at this point. It only gives useful > information to a developer using the API for the first time and that > can be done with a debugging patch (or probes if you're feeling > creative). I'm already unhappy with the counter overhead in the page > allocator. zone_statistics in particular has no business being an > accurate statistic. It should have been a best-effort counter like > vm_events that does not need IRQs to be disabled. If that was a > simple counter as opposed to an accurate statistic then a failure > counter at failed_irq would be very cheap to add.
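The alternative discussed here is the `kmem_cache_alloc_bulk()` shape: the caller provides an array, and the allocator reports how many slots it filled. A hypothetical userspace sketch of that API shape (this is not the kernel implementation, just the calling convention under discussion):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical array-based bulk allocator, shaped like
 * kmem_cache_alloc_bulk(): the caller owns the array, the function
 * returns how many slots it managed to fill. */
static size_t bulk_alloc(size_t obj_size, size_t nr, void **array)
{
    size_t i;

    for (i = 0; i < nr; i++) {
        array[i] = malloc(obj_size);
        if (!array[i])
            break;  /* partial success is reported via the count, not an error */
    }
    return i;
}

static void bulk_free(size_t nr, void **array)
{
    for (size_t i = 0; i < nr; i++)
        free(array[i]);
}
```

The appeal for page_pool is that both in-tree users already keep their struct-page pointers in an array, so no list walk (and no touching of page->lru) is needed on the consumer side.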

Re: [PATCH 4/5] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-03-03 Thread Jesper Dangaard Brouer
On Wed, 3 Mar 2021 09:18:25 + Mel Gorman wrote: > On Tue, Mar 02, 2021 at 08:49:06PM +0200, Ilias Apalodimas wrote: > > On Mon, Mar 01, 2021 at 04:11:59PM +, Mel Gorman wrote: > > > From: Jesper Dangaard Brouer > > > > > > In preparation for next

[PATCH RFC V2 net-next 0/2] Use bulk order-0 page allocator API for page_pool

2021-03-01 Thread Jesper Dangaard Brouer
carry these patches? (to keep it together with the alloc_pages_bulk API) --- Jesper Dangaard Brouer (2): net: page_pool: refactor dma_map into own function page_pool_dma_map net: page_pool: use alloc_pages_bulk in refill code path net/core/page_pool.c

[PATCH RFC V2 net-next 2/2] net: page_pool: use alloc_pages_bulk in refill code path

2021-03-01 Thread Jesper Dangaard Brouer
(3,677,958 pps -> 4,368,926 pps). [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 63 -- 1 file changed, 40 insertions(+),

[PATCH RFC V2 net-next 1/2] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-03-01 Thread Jesper Dangaard Brouer
In preparation for next patch, move the dma mapping into its own function, as this will make it easier to follow the changes. V2: make page_pool_dma_map return boolean (Ilias) Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 45 ++--- 1

Re: [PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster

2021-02-26 Thread Jesper Dangaard Brouer
On Thu, 25 Feb 2021 15:38:15 + Mel Gorman wrote: > On Thu, Feb 25, 2021 at 04:16:33PM +0100, Jesper Dangaard Brouer wrote: > > > On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote: > > > > Avoid multiplication (imul) operations when accessing:

Re: [PATCH RFC net-next 2/3] net: page_pool: use alloc_pages_bulk in refill code path

2021-02-26 Thread Jesper Dangaard Brouer
On Wed, 24 Feb 2021 22:15:22 +0200 Ilias Apalodimas wrote: > Hi Jesper, > > On Wed, Feb 24, 2021 at 07:56:46PM +0100, Jesper Dangaard Brouer wrote: > > There are cases where the page_pool need to refill with pages from the > > page allocator. Some workloads cause the page_

Re: [PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster

2021-02-25 Thread Jesper Dangaard Brouer
On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote: > > Avoid multiplication (imul) operations when accessing: > > zone->free_area[order].nr_free > > > > This was really tricky to find. I was puzzled why perf reported that > > rmqueue_bul

[PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster

2021-02-24 Thread Jesper Dangaard Brouer
a 1-cycle shl, saving 2-cycles. It does trade some space to do this. Used: gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2) Signed-off-by: Jesper Dangaard Brouer --- include/linux/mmzone.h |6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/include/linux/mmzone.h b/include/li
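The trick in this patch is sizing the array element to a power of two so that `&zone->free_area[order]` compiles to a shift instead of an `imul`. A sketch of the idea (field names and the padding target are illustrative, not the kernel's actual `struct free_area` layout):

```c
#include <assert.h>
#include <stddef.h>

/* If an array element's sizeof() is a power of two, indexing compiles to
 * a 1-cycle shl; otherwise the compiler emits an imul.  The aligned
 * attribute pads the struct size up to the requested power of two. */
struct list_head { struct list_head *next, *prev; };

#define NR_TYPES 5  /* stand-in for MIGRATE_TYPES */

struct area_padded {
    struct list_head free_list[NR_TYPES];
    unsigned long nr_free;
} __attribute__((aligned(128)));  /* trades some space for cheap indexing */

/* A size is a power of two iff exactly one bit is set. */
static int is_pow2(size_t n)
{
    return n && (n & (n - 1)) == 0;
}
```

This is the space-for-cycles trade-off the commit message describes: the padding wastes a few bytes per order, but every `free_area[order]` access gets cheaper.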

[PATCH RFC net-next 2/3] net: page_pool: use alloc_pages_bulk in refill code path

2021-02-24 Thread Jesper Dangaard Brouer
(3,677,958 pps -> 4,368,926 pps). [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 65 -- 1 file changed, 41 insertions(+),

[PATCH RFC net-next 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map

2021-02-24 Thread Jesper Dangaard Brouer
In preparation for next patch, move the dma mapping into its own function, as this will make it easier to follow the changes. Signed-off-by: Jesper Dangaard Brouer --- net/core/page_pool.c | 49 + 1 file changed, 29 insertions(+), 20 deletions

[PATCH RFC net-next 0/3] Use bulk order-0 page allocator API for page_pool

2021-02-24 Thread Jesper Dangaard Brouer
This is a followup to Mel Gorman's patchset: - Message-Id: <20210224102603.19524-1-mgor...@techsingularity.net> - https://lore.kernel.org/netdev/20210224102603.19524-1-mgor...@techsingularity.net/ Showing page_pool usage of the API for alloc_pages_bulk(). --- Jesper Dangaard Bro

Re: [RFC PATCH 0/3] Introduce a bulk order-0 page allocator for sunrpc

2021-02-24 Thread Jesper Dangaard Brouer
If you change local_irq_save(flags) to local_irq_disable() then you can likely get better performance for 1 page requests via this API. This limits the API to be used in cases where IRQs are enabled (which is most cases). (For my use-case I will not do 1 page requests).

Re: [PATCH v4 net-next 08/11] skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads

2021-02-11 Thread Jesper Dangaard Brouer
--git a/net/core/skbuff.c b/net/core/skbuff.c > index 860a9d4f752f..9e1a8ded4acc 100644 > --- a/net/core/skbuff.c > +++ b/net/core/skbuff.c > @@ -120,6 +120,8 @@ static void skb_under_panic(struct sk_buff *skb, unsigned > int sz, void *addr) > } > > #define NAPI_SKB_C

Re: [v3 net-next 08/10] skbuff: reuse NAPI skb cache on allocation path (__build_skb())

2021-02-10 Thread Jesper Dangaard Brouer
t; > > - /* record skb to CPU local list */ > > > + kasan_poison_object_data(skbuff_head_cache, skb); > > > nc->skb_cache[nc->skb_count++] = skb; > > > > > > -#ifdef CONFIG_SLUB > > > - /* SLUB writes into objects when freeing */ > > > -

Re: [PATCH net-next 3/3] net: page_pool: simplify page recycling condition tests

2021-01-25 Thread Jesper Dangaard Brouer
n > --- > net/core/page_pool.c | 14 -- > 1 file changed, 4 insertions(+), 10 deletions(-) Acked-by: Jesper Dangaard Brouer > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c > index f3c690b8c8e3..ad8b0707af04 100644 > --- a/net/core/page_pool.c >

Re: [PATCH net-next] sfc: reduce the number of requested xdp ev queues

2021-01-21 Thread Jesper Dangaard Brouer
> + tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx); > n_xdp_tx = num_possible_cpus(); > - n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, EFX_MAX_TXQ_PER_CHANNEL); > + n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev); > > vec_count = pci_msix_vec_count(efx->pci_dev); > if (vec_count < 0)
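The arithmetic in the quoted diff is simple: first compute how many TX queues one event queue can serve, then round the per-CPU TX queue count up by that divisor. A standalone sketch of the same math (the constants in the test are illustrative, not the sfc driver's real values):

```c
#include <assert.h>

/* Same rounding-up division the kernel's DIV_ROUND_UP macro performs. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Mirrors the fix: derive tx_per_ev from the EVQ capacity instead of
 * using a fixed per-channel maximum, then size the event queues. */
static int xdp_ev_queues(int max_evq_size, int txq_entries, int ncpus)
{
    int tx_per_ev = max_evq_size / txq_entries; /* TX queues one EVQ can serve */
    int n_xdp_tx = ncpus;                       /* one XDP TX queue per CPU */

    return DIV_ROUND_UP(n_xdp_tx, tx_per_ev);
}
```

With a larger `tx_per_ev`, fewer event queues (and hence fewer MSI-X vectors) are requested, which is what the patch title means by "reduce the number of requested xdp ev queues".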

Re: [PATCH net-next] sfc: reduce the number of requested xdp ev queues

2020-12-16 Thread Jesper Dangaard Brouer
On Tue, 15 Dec 2020 18:49:55 + Edward Cree wrote: > On 15/12/2020 09:43, Jesper Dangaard Brouer wrote: > > On Mon, 14 Dec 2020 17:29:06 -0800 > > Ivan Babrou wrote: > > > >> Without this change the driver tries to allocate too many queues, > >>

Re: [PATCH net-next] sfc: reduce the number of requested xdp ev queues

2020-12-15 Thread Jesper Dangaard Brouer
size. >*/ > - > + tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx); > n_xdp_tx = num_possible_cpus(); > - n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, EFX_MAX_TXQ_PER_CHANNEL); > + n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev); > > vec_count = pci_msix_vec

Re: [PATCH] net: xdp: Give compiler __always_inline hint for xdp_rxq_info_init()

2020-12-01 Thread Jesper Dangaard Brouer
ruct xdp_rxq_info *xdp_rxq) > +static __always_inline void xdp_rxq_info_init(struct xdp_rxq_info *xdp_rxq) > { > memset(xdp_rxq, 0, sizeof(*xdp_rxq)); > }
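The patch forces inlining of a tiny init helper so the `memset` can be folded into the caller. In userspace code the kernel's `__always_inline` corresponds to GCC's `always_inline` attribute; a compilable sketch (struct and names are stand-ins for `struct xdp_rxq_info`):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for struct xdp_rxq_info. */
struct rxq_info {
    unsigned int queue_index;
    unsigned int reg_state;
};

/* always_inline overrides the compiler's inlining heuristics, which is
 * the whole point of the hint being proposed in this thread. */
static inline __attribute__((always_inline))
void rxq_info_init(struct rxq_info *rxq)
{
    memset(rxq, 0, sizeof(*rxq));
}
```

Behavior is identical to a plain `static inline`; the attribute only removes the compiler's discretion not to inline at -O0 or when the function grows.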

Re: XDP maintainer match (Was [PATCH v2 0/2] hwmon: (max127) Add Maxim MAX127 hardware monitoring)

2020-11-19 Thread Jesper Dangaard Brouer
On Thu, 19 Nov 2020 09:59:28 -0800 Jakub Kicinski wrote: > On Thu, 19 Nov 2020 09:09:53 -0800 Joe Perches wrote: > > On Thu, 2020-11-19 at 17:35 +0100, Jesper Dangaard Brouer wrote: > > > On Thu, 19 Nov 2020 07:46:34 -0800 Jakub Kicinski > > > wrote: > &

XDP maintainer match (Was [PATCH v2 0/2] hwmon: (max127) Add Maxim MAX127 hardware monitoring)

2020-11-19 Thread Jesper Dangaard Brouer
our best to fix get_maintainer. > > XDP folks, any opposition to changing the keyword / filename to: > > [^a-z0-9]xdp[^a-z0-9] > > ? I think it is a good idea to change the keyword (K:), but I'm not sure this catches what we want; maybe it does. The pattern match is meant to catch drivers containing XDP-related bits. Previously Joe Perches suggested this pattern match, which I don't fully understand... could you explain, Joe? (?:\b|_)xdp(?:\b|_) For the filename (N:) regex match, I'm considering if we should remove it and list more files explicitly. I think a normal glob * pattern works, which should be sufficient.

Re: [PATCH] net/xdp: remove unused macro REG_STATE_NEW

2020-11-10 Thread Jesper Dangaard Brouer
On Mon, 09 Nov 2020 13:44:48 -0800 John Fastabend wrote: > Alex Shi wrote: > > > > > > On 2020/11/7 12:13 AM, Jesper Dangaard Brouer wrote: > > > Hmm... REG_STATE_NEW is zero, so it is implicitly set via memset zero. > > > But it is true that it is tech

Re: [PATCH] net/xdp: remove unused macro REG_STATE_NEW

2020-11-06 Thread Jesper Dangaard Brouer
Shi > Cc: "David S. Miller" > Cc: Jakub Kicinski > Cc: Alexei Starovoitov > Cc: Daniel Borkmann > Cc: Jesper Dangaard Brouer > Cc: John Fastabend > Cc: net...@vger.kernel.org > Cc: b...@vger.kernel.org > Cc: linux-kernel@vger.kernel.org > --

Re: [PATCH 4/6] perf: Optimize get_recursion_context()

2020-10-30 Thread Jesper Dangaard Brouer
On Fri, 30 Oct 2020 16:13:49 +0100 Peter Zijlstra wrote: > "Look ma, no branches!" > > Cc: Jesper Dangaard Brouer > Cc: Steven Rostedt > Signed-off-by: Peter Zijlstra (Intel) > --- Cool trick! :-) Acked-by: Jesper Dangaard Brouer >

Re: [PATCH] arm64: bpf: Fix branch offset in JIT

2020-09-14 Thread Jesper Dangaard Brouer
lacking BPF regression testing for ARM64 :-( This bug surfaced when Red Hat QA tested our kernel backports, on different archs.

[PATCH] tools build feature: cleanup feature files on make clean

2020-08-27 Thread Jesper Dangaard Brouer
ght be using it. Did change the output from "CLEAN config" to "CLEAN feature-detect", to make it more clear what happens. This is related to the complaint and troubleshooting in link: Link: https://lore.kernel.org/lkml/20200818122007.2d1cfe2d@carbon/ Signed-off-by: Jesper Dangaard

Re: Kernel build error on BTFIDS vmlinux

2020-08-18 Thread Jesper Dangaard Brouer
On Tue, 18 Aug 2020 15:45:43 +0200 Jiri Olsa wrote: > On Tue, Aug 18, 2020 at 12:56:08PM +0200, Jiri Olsa wrote: > > On Tue, Aug 18, 2020 at 11:14:10AM +0200, Jiri Olsa wrote: > > > On Tue, Aug 18, 2020 at 10:55:55AM +0200, Jesper Dangaard Brouer wrote: > > > >

Tools build error due to "Auto-detecting system features" missing cleanup

2020-08-18 Thread Jesper Dangaard Brouer
the issue locally in tools/build/, but this isn't triggered when calling make clean in other tools directories that use the feature tests. What is the correct make clean fix?

Kernel build error on BTFIDS vmlinux

2020-08-18 Thread Jesper Dangaard Brouer
, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer $ ./tools/bpf/resolve_btfids/resolve_btfids -vv vmlinux.err.bak section(1) .text, size 12588824, link 0, flags 6, type=1 section(2) .rodata, size 4424758, link 0, flags 3, type=1

Re: [PATCH v2] MAINTAINERS: XDP: restrict N: and K:

2020-07-11 Thread Jesper Dangaard Brouer
> +F: include/uapi/linux/xdp_diag.h > F: kernel/bpf/cpumap.c > F: kernel/bpf/devmap.c > F: net/core/xdp.c > -N: xdp > -K: xdp > +F: net/xdp/ > +F: samples/bpf/xdp* > +F: tools/testing/selftests/bfp/*xdp* Typo, should be "bpf" > +F: tools/testing/selftests/bfp/*/*xdp* > +K: (?:\b|_)xdp(?:\b|_) > > XDP SOCKETS (AF_XDP) > M: Björn Töpel > -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: generic DMA bypass flag v4

2020-07-10 Thread Jesper Dangaard Brouer
e benchmark (before I go on vacation). I hoped Björn could test/benchmark this(?), given (as mentioned) this also affect XSK / AF_XDP performance. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH] Replace HTTP links with HTTPS ones: XDP (eXpress Data Path)

2020-07-09 Thread Jesper Dangaard Brouer
fallback} > --nol 0003-Replace-HTTP-links-with-HTTPS-ones-XDP-eXpress-Data-.patch > Jonathan Corbet (maintainer:DOCUMENTATION) > Alexei Starovoitov (supporter:XDP (eXpress Data Path)) > Daniel Borkmann (supporter:XDP (eXpress Data Path)) > "David S. Miller" (supporter:X

Re: [PATCH bpf-next V3 0/2] BPF selftests test runner 'test_progs' use proper shell exit codes

2020-07-08 Thread Jesper Dangaard Brouer
On Tue, 7 Jul 2020 00:23:48 -0700 Andrii Nakryiko wrote: > On Tue, Jul 7, 2020 at 12:12 AM Jesper Dangaard Brouer > wrote: > > > > This patchset makes it easier to use test_progs from shell scripts, by using > > proper shell exit codes. The process's exit status should b

[PATCH bpf-next V3 1/2] selftests/bpf: test_progs use another shell exit on non-actions

2020-07-07 Thread Jesper Dangaard Brouer
annot tell the difference between a non-existing test and the test failing. This patch uses value 2 as shell exit indication. (Aside note unrecognized option parameters use value 64). Fixes: 6c92bd5cd465 ("selftests/bpf: Test_progs indicate to shell on non-actions") Signed-off-by: Jesper Dangaar

[PATCH bpf-next V3 2/2] selftests/bpf: test_progs avoid minus shell exit codes

2020-07-07 Thread Jesper Dangaard Brouer
of minus-1. These cases are put in the same group of infrastructure setup errors. Fixes: fd27b1835e70 ("selftests/bpf: Reset process and thread affinity after each test/sub-test") Fixes: 811d7e375d08 ("bpf: selftests: Restore netns after each test") Signed-off-by: Jesper Dangaar

[PATCH bpf-next V3 0/2] BPF selftests test runner 'test_progs' use proper shell exit codes

2020-07-07 Thread Jesper Dangaard Brouer
fore with different tests (that are part of test_progs). CI people writing these shell-scripts could pickup these hints and report them, if that makes sense. --- Jesper Dangaard Brouer (2): selftests/bpf: test_progs use another shell exit on non-actions selftests/bpf: test_progs avoid minus s

Re: [PATCH bpf-next V2 2/2] selftests/bpf: test_progs avoid minus shell exit codes

2020-07-07 Thread Jesper Dangaard Brouer
On Mon, 6 Jul 2020 15:17:57 -0700 Andrii Nakryiko wrote: > On Mon, Jul 6, 2020 at 10:00 AM Jesper Dangaard Brouer > wrote: > > > > There are a number of places in test_progs that use minus-1 as the argument > > to exit(). This improper use as a process exit status is m

[PATCH bpf-next V2 2/2] selftests/bpf: test_progs avoid minus shell exit codes

2020-07-06 Thread Jesper Dangaard Brouer
error cases apart. Fixes: fd27b1835e70 ("selftests/bpf: Reset process and thread affinity after each test/sub-test") Fixes: 811d7e375d08 ("bpf: selftests: Restore netns after each test") Signed-off-by: Jesper Dangaard Brouer --- tools/testing/selftests/bpf/test_progs.c |

[PATCH bpf-next V2 0/2] BPF selftests test runner 'test_progs' use proper shell exit codes

2020-07-06 Thread Jesper Dangaard Brouer
fore with different tests (that are part of test_progs). CI people writing these shell-scripts could pickup these hints and report them, if that makes sense. --- Jesper Dangaard Brouer (2): selftests/bpf: test_progs use another shell exit on non-actions selftests/bpf: test_progs avoid minus s

[PATCH bpf-next V2 1/2] selftests/bpf: test_progs use another shell exit on non-actions

2020-07-06 Thread Jesper Dangaard Brouer
annot tell the difference between a non-existing test and the test failing. This patch uses value 2 as shell exit indication. (Aside note unrecognized option parameters use value 64). Fixes: 6c92bd5cd465 ("selftests/bpf: Test_progs indicate to shell on non-actions") Signed-off-by: Jesper Dangaar

Re: WARNING in bpf_xdp_adjust_tail

2020-07-06 Thread Jesper Dangaard Brouer
: 004ce559 R15: 7f8bc39726d4 > Kernel Offset: disabled > > > --- > This bug is generated by a bot. It may contain errors. > See https://goo.gl/tpsmEJ for more information about syzbot. > syzbot engineers can be reached at syzkal...@googlegroups.com. > > syzbot will keep track of this bug report. See: > https://goo.gl/tpsmEJ#status for how to communicate with syzbot. > -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH net-next 0/4] mvpp2: XDP support

2020-07-01 Thread Jesper Dangaard Brouer
/mvpp2/mvpp2.h| 49 +- > .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 600 ++++-- > 3 files changed, 588 insertions(+), 62 deletions(-) > -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH v6 00/19] The new cgroup slab memory controller

2020-06-19 Thread Jesper Dangaard Brouer
quences of sharing slab caches. At Red Hat we have experienced very hard to find kernel bugs, that point to memory corruption at a completely wrong kernel code, because other kernel code were corrupting the shared slab cache. (Hint a workaround is to enable SLUB debugging to disable this sharing

Re: [PATCH v6 00/19] The new cgroup slab memory controller

2020-06-19 Thread Jesper Dangaard Brouer
On Thu, 18 Jun 2020 18:30:13 -0700 Roman Gushchin wrote: > On Thu, Jun 18, 2020 at 11:31:21AM +0200, Jesper Dangaard Brouer wrote: > > On Thu, 18 Jun 2020 10:43:44 +0200 > > Jesper Dangaard Brouer wrote: > > > > > On Wed, 17 Jun 2020 18:29:28 -07

Re: [PATCH v6 00/19] The new cgroup slab memory controller

2020-06-18 Thread Jesper Dangaard Brouer
On Thu, 18 Jun 2020 10:43:44 +0200 Jesper Dangaard Brouer wrote: > On Wed, 17 Jun 2020 18:29:28 -0700 > Roman Gushchin wrote: > > > On Wed, Jun 17, 2020 at 01:24:21PM +0200, Vlastimil Babka wrote: > > > On 6/17/20 5:32 AM, Roman Gushchin wrote: > > > >

Re: [PATCH v6 00/19] The new cgroup slab memory controller

2020-06-18 Thread Jesper Dangaard Brouer
euse objects=2 : 110 - 53 - 133 cycles(tsc) - SLUB-patched : bulk_quick_reuse objects=3 : 88 - 95 - 42 cycles(tsc) - SLUB-patched : bulk_quick_reuse objects=4 : 91 - 85 - 36 cycles(tsc) - SLUB-patched : bulk_quick_reuse objects=8 : 32 - 66 - 32 cycles(tsc) SLUB-original - bulk-

Re: [PATCH] xdp_rxq_info_user: Fix null pointer dereference. Replace malloc/memset with calloc.

2020-06-12 Thread Jesper Dangaard Brouer
demonstrating access to > xdp_rxq_info") > Signed-off-by: Gaurav Singh Acked-by: Jesper Dangaard Brouer -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH] xdp_rxq_info_user: Replace malloc/memset w/calloc

2020-06-12 Thread Jesper Dangaard Brouer
On Fri, 12 Jun 2020 03:14:58 -0700 Joe Perches wrote: > On Fri, 2020-06-12 at 08:42 +0200, Jesper Dangaard Brouer wrote: > > On Thu, 11 Jun 2020 20:36:40 -0400 > > Gaurav Singh wrote: > > > > > Replace malloc/memset with calloc > > > > >

Re: [PATCH] xdp_rxq_info_user: Replace malloc/memset w/calloc

2020-06-12 Thread Jesper Dangaard Brouer
ou need to update/improve the description, to also mention/describe that this also solves the bug you found. -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer

Re: [PATCH] xdp_rxq_info_user: Replace malloc/memset w/calloc

2020-06-11 Thread Jesper Dangaard Brouer
memset(rec, 0, sizeof(*rec)); > + rec = calloc(1, sizeof(struct stats_record)); > if (!rec) { > fprintf(stderr, "Mem alloc error\n"); > exit(EXIT_FAIL_MEM); -- Best regards, Jesper Dangaard Brouer MSc.CS, Principal Kernel Engineer at Red Hat LinkedIn: http://www.linkedin.com/in/brouer
