On 29/10/2020 21:15, Flavio Leitner wrote:
> On Wed, Oct 28, 2020 at 02:17:06PM -0400, Mark Gray wrote:
>> From: Aaron Conole <acon...@redhat.com>
>>
>> Currently, the channel handlers are polled globally.  On some
>> systems, this causes a thundering herd issue where multiple
>> handler threads become active, only to do no work and immediately
>> sleep.
>>
>> The approach here is to push the netlink socket channels to discrete
>> handler threads to process, rather than polling on every thread.
>> This will eliminate the need to wake multiple threads.
>>
>> To check:
>>
>>   ip netns add left
>>   ip netns add right
>>   ip link add center-left type veth peer name left0 netns left
>>   ip link add center-right type veth peer name right0 netns right
>>   ip link set center-left up
>>   ip link set center-right up
>>   ip -n left link set left0 up
>>   ip -n left ip addr add 172.31.110.10/24 dev left0
>>   ip -n right link set right0 up
>>   ip -n right ip addr add 172.31.110.11/24 dev right0
>>
>>   ovs-vsctl add-br br0
>>   ovs-vsctl add-port br0 center-right
>>   ovs-vsctl add-port br0 center-left
>>
>>   # in one terminal
>>   perf record -e sched:sched_wakeup,irq:softirq_entry -ag
>>
>>   # in a separate terminal
>>   ip netns exec left arping -I left0 -c 1 172.31.110.11
>>
>>   # in the perf terminal after exiting
>>   perf script
>>
>> Look for the number of 'handler' threads which were made active.
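(A quick way I count these, assuming the default perf script output in
which each sched:sched_wakeup line carries a comm=handlerN field:

  # tally sched_wakeup events per handler thread from the recording
  perf script | grep 'sched:sched_wakeup' | \
      grep -oE 'comm=handler[0-9]+' | sort | uniq -c | sort -rn

Each output line is then "<wakeup count> comm=handlerN".)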
>>
>> Suggested-by: Ben Pfaff <b...@ovn.org>
>> Suggested-by: Flavio Leitner <f...@sysclose.org>
>> Co-authored-by: Mark Gray <mark.d.g...@redhat.com>
>> Reported-by: David Ahern <dsah...@gmail.com>
>> Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2019-December/365857.html
>> Cc: Matteo Croce <technobo...@gmail.com>
>> Fixes: 69c51582f ("dpif-netlink: don't allocate per thread netlink sockets")
>> Signed-off-by: Aaron Conole <acon...@redhat.com>
>> Signed-off-by: Mark Gray <mark.d.g...@redhat.com>
>> ---
> 
> I think the patch looks good, so I gave it a try on the lab.
> 
> I added 1000 client netns and another 1000 server netns, each using
> an OVS internal port. I created a flow table such that packets from
> random src ports would be dropped and everything else forwarded to
> the other netns. Then I started iperf3 in all client containers in
> parallel, triggering upcalls from different ports.
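Just to be sure I read the setup right, this is roughly how I picture one
client/server pair of it, scaled down to a single pair (the netns names,
addresses and the exact drop rule below are my guesses, not your script):

  # one client and one server netns, each attached via an OVS internal port
  ovs-vsctl add-br br1
  for ns in client0 server0; do
      ip netns add $ns
      ovs-vsctl add-port br1 $ns -- set interface $ns type=internal
      ip link set $ns netns $ns
      ip -n $ns link set $ns up
  done
  ip -n client0 addr add 172.30.0.10/24 dev client0
  ip -n server0 addr add 172.30.0.11/24 dev server0

  # drop half of the UDP source port range, forward everything else, so
  # that random source ports keep generating fresh upcalls
  ovs-ofctl add-flow br1 "priority=100,udp,tp_src=0x8000/0x8000,actions=drop"
  ovs-ofctl add-flow br1 "priority=10,actions=normal"

  # traffic: iperf3 server in server0, UDP client in client0
  ip netns exec server0 iperf3 -s -D
  ip netns exec client0 iperf3 -u -c 172.30.0.11 -t 10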
> 
> Although I only ran it a few times, please consider these ballpark numbers:
> 
>         Current                   Patched
>   # Wakes -  Thread           # Wakes -  Thread
>      270  -  handler10          400  - handler32
>      471  -  handler11          403  - handler33
>      150  -  handler12          383  - handler34
>                                 396  - handler35
>                                 430  - handler36
> 
> The patch distributes ports across the handlers, and I could see that
> the wakeups and the load were balanced as a consequence.
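(The per-thread load is easy to eyeball with something like

  # per-thread CPU usage of ovs-vswitchd, refreshed every second
  pidstat -t -p $(pidof ovs-vswitchd) 1

though I assume you had your own way of measuring it.)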
> 
> Now the second test. I used one client netns and one server netns
> and sent a UDP burst of packets with a similar flow table, so each
> new packet would trigger an upcall on the same port. The current
> code would likely wake multiple handlers, while with the patch
> applied the work would be limited to a single thread.
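For reference, the way I would generate that kind of burst locally, reusing
the left/right namespaces from the reproduction steps above (hping3 is my
choice here, not necessarily what you used; by default it increments the
UDP source port for every packet, so each packet is a new flow and a new
upcall, all arriving on the same ingress port):

  # flood UDP towards the right netns; every packet gets a new src port
  ip netns exec left hping3 --udp --flood -p 4789 172.31.110.11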
> 
> The difference is significant in this scenario:
> 
>                         Current           Patched
> Max # UDP packets[*]      62k               3.4k
> Max # flows installed    ~60k               11k
> 
> [*] Max number of UDP packets sent in a burst without increasing
>     the upcall "lost" counter.
> 
> I also tested without the patch but with n-handler-threads set to 1,
> and the results are close to the patched version.
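(For completeness, that corresponds to:

  # cap the number of upcall handler threads at one on an unpatched build
  ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=1

which makes this comparison easy to rerun.)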
> 
> Well, I think the patch is working as designed to fix the thundering
> herd issue. However, considering that in most use cases there is
> only one NIC port attached to OVS receiving all the host traffic,
> that performance gap might be a problem.
> 
> Thoughts?

Yes, this is an issue. Apologies for not replying sooner (I just
realized that I never replied to this). In the meantime, I have been
working on an alternative approach, which I just posted as an RFC at:

https://mail.openvswitch.org/pipermail/ovs-dev/2021-April/382618.html

Please have a look.

> 
> fbl
> 
