From: Alexander Lobakin <aloba...@pm.me>
Date: Fri, 22 Jan 2021 11:47:45 +0000

> From: Eric Dumazet <eric.duma...@gmail.com>
> Date: Thu, 21 Jan 2021 16:41:33 +0100
> 
> > On 1/21/21 2:47 PM, Xuan Zhuo wrote:
> > > This patch constructs the skb directly from umem pages to avoid the
> > > overhead of copying the data.
> > > 
> > > This is based on IFF_TX_SKB_NO_LINEAR: only if the network card's
> > > priv_flags include IFF_TX_SKB_NO_LINEAR will the skb be built directly
> > > from pages. If the feature is not supported, the data must still be
> > > copied to construct the skb.
> > > 
> > > ---------------- Performance Testing ------------
> > > 
> > > The test environment is Aliyun ECS server.
> > > Test cmd:
> > > ```
> > > xdpsock -i eth0 -t  -S -s <msg size>
> > > ```
> > > 
> > > Test result data:
> > > 
> > > size    64      512     1024    1500
> > > copy    1916747 1775988 1600203 1440054
> > > page    1974058 1953655 1945463 1904478
> > > percent 3.0%    10.0%   21.58%  32.3%
> > > 
> > > Signed-off-by: Xuan Zhuo <xuanz...@linux.alibaba.com>
> > > Reviewed-by: Dust Li <dust...@linux.alibaba.com>
> > > ---
> > >  net/xdp/xsk.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++++----------
> > >  1 file changed, 86 insertions(+), 18 deletions(-)
> > > 
> > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > index 4a83117..38af7f1 100644
> > > --- a/net/xdp/xsk.c
> > > +++ b/net/xdp/xsk.c
> > > @@ -430,6 +430,87 @@ static void xsk_destruct_skb(struct sk_buff *skb)
> > >   sock_wfree(skb);
> > >  }
> > >  
> > > +static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
> > > +                                       struct xdp_desc *desc)
> > > +{
> > > + u32 len, offset, copy, copied;
> > > + struct sk_buff *skb;
> > > + struct page *page;
> > > + void *buffer;
> > > + int err, i;
> > > + u64 addr;
> > > +
> > > + skb = sock_alloc_send_skb(&xs->sk, 0, 1, &err);
> > > + if (unlikely(!skb))
> > > +         return ERR_PTR(err);
> > > +
> > > + addr = desc->addr;
> > > + len = desc->len;
> > > +
> > > + buffer = xsk_buff_raw_get_data(xs->pool, addr);
> > > + offset = offset_in_page(buffer);
> > > + addr = buffer - xs->pool->addrs;
> > > +
> > > + for (copied = 0, i = 0; copied < len; i++) {
> > > +         page = xs->pool->umem->pgs[addr >> PAGE_SHIFT];
> > > +
> > > +         get_page(page);
> > > +
> > > +         copy = min_t(u32, PAGE_SIZE - offset, len - copied);
> > > +
> > > +         skb_fill_page_desc(skb, i, page, offset, copy);
> > > +
> > > +         copied += copy;
> > > +         addr += copy;
> > > +         offset = 0;
> > > + }
> > > +
> > > + skb->len += len;
> > > + skb->data_len += len;
> > 
> > > + skb->truesize += len;
> > 
> > This is not the truesize, unfortunately.
> > 
> > We need to account for the number of pages, not number of bytes.
> 
> The easiest solution is:
> 
>       skb->truesize += PAGE_SIZE * i;
> 
> i would be equal to skb_shinfo(skb)->nr_frags after exiting the loop.

Oops, pls ignore this. I forgot that XSK buffers are not
"one per page".
We need to count the number of pages manually and then do

        skb->truesize += PAGE_SIZE * npages;

Right.
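
One way to count them (a rough, untested sketch on top of the code
above; npages is a new local introduced purely for illustration):

	/* untested sketch: npages counts the pages this descriptor
	 * actually spans, rather than the byte length
	 */
	u32 npages = DIV_ROUND_UP(offset_in_page(buffer) + len, PAGE_SIZE);

	skb->truesize += PAGE_SIZE * npages;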

> > > +
> > > + refcount_add(len, &xs->sk.sk_wmem_alloc);
> > > +
> > > + return skb;
> > > +}
> > > +
> 
> Al

Thanks,
Al
