On Fri, Sep 14, 2018 at 2:39 PM Stephen Hemminger
<step...@networkplumber.org> wrote:
>
> On Fri, 14 Sep 2018 13:59:39 -0400
> Willem de Bruijn <willemdebruijn.ker...@gmail.com> wrote:
>
> > diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
> > index e5d236595206..8cb8e02c8ab6 100644
> > --- a/drivers/net/vxlan.c
> > +++ b/drivers/net/vxlan.c
> > @@ -572,6 +572,7 @@ static struct sk_buff *vxlan_gro_receive(struct sock *sk,
> >                                        struct list_head *head,
> >                                        struct sk_buff *skb)
> >  {
> > +     const struct net_offload *ops;
> >       struct sk_buff *pp = NULL;
> >       struct sk_buff *p;
> >       struct vxlanhdr *vh, *vh2;
> > @@ -606,6 +607,12 @@ static struct sk_buff *vxlan_gro_receive(struct sock *sk,
> >                       goto out;
> >       }
> >
> > +     rcu_read_lock();
> > +     ops = net_gro_receive(dev_offloads, ETH_P_TEB);
> > +     rcu_read_unlock();
> > +     if (!ops)
> > +             goto out;
>
> Isn't rcu_read_lock already held here?
> RCU read lock is always held in the receive handler path

There is an RCU critical section on receive, taken in
netif_receive_skb_core, but the GRO code runs before that. All the
existing GRO handlers take rcu_read_lock themselves.
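
To illustrate the point (this is a sketch of the common shape of
existing GRO receive handlers, not code from this patch; the offload
array name and index are placeholders):

```c
/* Sketch: GRO receive handlers run from napi_gro_receive(), before the
 * netif_receive_skb_core critical section begins, so each handler
 * brackets its own rcu_dereference of the offload table with
 * rcu_read_lock()/rcu_read_unlock().
 */
static struct sk_buff *example_gro_receive(struct list_head *head,
					   struct sk_buff *skb)
{
	const struct net_offload *ops;
	struct sk_buff *pp = NULL;

	rcu_read_lock();
	ops = rcu_dereference(some_offloads[proto]);	/* placeholder lookup */
	if (!ops || !ops->callbacks.gro_receive)
		goto out_unlock;

	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);

out_unlock:
	rcu_read_unlock();
	return pp;
}
```

The hunk above follows the same convention: it takes rcu_read_lock
around the ETH_P_TEB offload lookup rather than relying on a caller
holding the read lock.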

> > +
> >       skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
> >
> >       list_for_each_entry(p, head, list) {
> > @@ -621,6 +628,7 @@ static struct sk_buff *vxlan_gro_receive(struct sock *sk,
> >       }
> >
> >       pp = call_gro_receive(eth_gro_receive, head, skb);
> > +
> >       flush = 0;
>
> whitespace change crept into this patch.

Oops, thanks.
