On Fri, 2017-03-31 at 16:33 +0200, Paolo Abeni wrote:

> I did the above to avoid increasing the udp_sock struct size; this
> would cost more than a whole cacheline.

Yes, but who cares :)

Also note that we discussed having a secondary receive queue in the
future, so that producers and the consumer no longer have to grab a
contended spinlock for every enqueued and dequeued packet.

With a secondary queue, the consumer can splice the whole input queue
onto its private queue in one batch.

Or simply use ptr_ring / skb_array, now that these infrastructures are
available thanks to Michael.

So we will likely increase the UDP socket size in the near future...

> 
> I did not hit others false sharing issues because:
> - gro_receive/gro_complete are touched only for packets coming from
> devices with udp tunnel offload enabled, which hit the tunnel offload
> path on the NIC; such packets will most probably land in the udp tunnel
> socket and will not use 'forward_deficit'
> - encap_destroy is touched only at socket shutdown
> - encap_rcv is protected by the 'udp_encap_needed' static key
> 
> I think the latter is problematic, so I'm ok with the patch you
> suggested.
> 
> The above change could still make sense: the udp code already checks
> for udplite sockets using either pcflag or protocol; always testing
> the same data will make the code cleaner.

Where are we testing sk->sk_protocol in the receive path ?

Thanks Paolo !
