On Tue, Jun 26, 2018 at 8:19 PM Edward Cree <ec...@solarflare.com> wrote:
>
> __netif_receive_skb_taps() does a depressingly large amount of per-packet
> work that can't easily be listified, because the another_round looping
> makes it nontrivial to slice up into smaller functions.
> Fortunately, most of that work disappears in the fast path:
>  * Hardware devices generally don't have an rx_handler
>  * Unless you're tcpdumping or something, there is usually only one ptype
>  * VLAN processing comes before the protocol ptype lookup, so doesn't force
>    a pt_prev deliver
> so normally, __netif_receive_skb_taps() will run straight through and return
> the one ptype found in ptype_base[hash of skb->protocol].
>
> Signed-off-by: Edward Cree <ec...@solarflare.com>
> ---
> -static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc)
> +static int __netif_receive_skb_taps(struct sk_buff *skb, bool pfmemalloc,
> +				    struct packet_type **pt_prev)

A lot of code churn can be avoided by keeping a local variable pt_prev
and calling this parameter ppt_prev or so, then assigning to it just
before returning on success.

Also, this function does more than just process network taps.

> {
> -	struct packet_type *ptype, *pt_prev;
> 	rx_handler_func_t *rx_handler;
> 	struct net_device *orig_dev;
> 	bool deliver_exact = false;
> +	struct packet_type *ptype;
> 	int ret = NET_RX_DROP;
> 	__be16 type;
>
> @@ -4514,7 +4515,7 @@ static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc)
> 	skb_reset_transport_header(skb);
> 	skb_reset_mac_len(skb);
>
> -	pt_prev = NULL;
> +	*pt_prev = NULL;
>
>  another_round:
> 	skb->skb_iif = skb->dev->ifindex;
> @@ -4535,25 +4536,25 @@ static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc)
> 		goto skip_taps;
>
> 	list_for_each_entry_rcu(ptype, &ptype_all, list) {
> -		if (pt_prev)
> -			ret = deliver_skb(skb, pt_prev, orig_dev);
> -		pt_prev = ptype;
> +		if (*pt_prev)
> +			ret = deliver_skb(skb, *pt_prev, orig_dev);
> +		*pt_prev = ptype;
> 	}