I think I might have found the root cause of this delay problem (emphasis on might). There are two CPU masks relevant to OVS: pmd-cpu-mask, which assigns cores to the PMD threads, and dpdk-lcore-mask, which is used by the OVS main thread, handlers, and revalidators. The documentation says that the number of handler threads should be the total CPUs minus the number of revalidators, so OVS picks 13 revalidators and 35 handlers on my system (48 vCPUs). But when I run "tuna -t ovs-vswitchd -CP", they are all using just one vCPU (the first one in the dpdk-lcore-mask).
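For reference, both masks and both thread counts are set as other_config keys on the Open_vSwitch table. A minimal sketch, assuming OVS 2.9; the mask values and counts below are illustrative placeholders, not recommendations:

```shell
# Pin PMD (poll-mode driver) threads to cores 1-6 (mask 0x7e) -- placeholder value.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x7e

# Run the DPDK lcore / main threads on cores 0 and 7 (mask 0x81) -- placeholder value.
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x81

# Optionally override the automatically chosen thread counts
# (the 35 handlers / 13 revalidators described above are the defaults for 48 CPUs).
ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=4
ovs-vsctl set Open_vSwitch . other_config:n-revalidator-threads=2
```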
Shouldn't the number of revalidators and handlers depend on the number of lcores? And shouldn't they all be scheduled on different CPUs? I have not yet tested changing the number of handlers and revalidators to observe the effect. Please let me know if I am on the right track.

-----Original Message-----
From: Muhammad Alp Arslan ([email protected]) [mailto:[email protected]]
Sent: Thursday, May 10, 2018 11:36 AM
To: 'Ben Pfaff' <[email protected]>
Cc: '[email protected]' <[email protected]>
Subject: RE: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

So far, from what I have observed, this is not related to how much traffic is flowing, but rather to how much traffic gets a megaflow miss and has to be processed by the OpenFlow classifier. If I add a high-priority rule that allows/denies all traffic, then I can add as many rules as I want with only a 10-15 ms delay. However, in the real scenario, where the traffic pattern is abrupt and I have to do layer 3 and layer 4 matches to allow/drop certain types of traffic, I start seeing a latency that grows exponentially. All I can guess from my limited knowledge is that there is some kind of lock on the OpenFlow tables while the packet is being processed and matched. It would be really helpful if there were a workaround for this. I can provide more information: number of flows, number of flows per table, EMC hits, megaflow hits/misses, etc.

-----Original Message-----
From: Ben Pfaff [mailto:[email protected]]
Sent: Thursday, May 10, 2018 2:31 AM
To: [email protected]
Cc: [email protected]
Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

Using a controller might yield better latency for adding and removing flows, but I'm a little surprised to hear that there's terribly slow latency when traffic is flowing. Usually, the main thread in OVS (which is the one that does OpenFlow table management) doesn't have much to do, even if there is a high traffic load. I don't have a general-purpose controller to recommend.
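[Editor's note: the high-priority catch-all rule described earlier in the thread can be expressed with ovs-ofctl roughly as follows; the bridge name "br0" and the priority value are placeholders.]

```shell
# Highest-priority rule that matches every packet and forwards it normally,
# so packets are resolved before reaching the L3/L4 rules ("br0" is a placeholder).
ovs-ofctl add-flow br0 "priority=65535,actions=NORMAL"

# Or drop everything instead: an empty action list means drop.
ovs-ofctl add-flow br0 "priority=65535,actions=drop"
```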
There are several out there and I imagine that any one of them could be suitable for this project.

On Mon, May 07, 2018 at 09:41:17PM +0500, [email protected] wrote:
> What would you recommend as a controller? Also, would adding rules
> using a controller be faster than "ovs-ofctl"? My application
> continuously adds and deletes flows based on the traffic patterns, and
> it works fine if there is no traffic flowing through the OVS, but as
> soon as I turn on the 40 G links, the time to apply the rules starts
> hitting several seconds, sometimes up to 100 s. It keeps increasing as
> the number of existing flows increases.
>
> Can an SDN controller solve this issue? Or is it an inherent OVS
> limitation where it takes more time to add rules if more packets are
> going to the OF classifier?
>
> -----Original Message-----
> From: Ben Pfaff [mailto:[email protected]]
> Sent: Monday, May 7, 2018 9:36 PM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9
>
> OVS doesn't have that built in. Usually we think of it as the
> responsibility of the controller.
>
> On Mon, May 07, 2018 at 09:22:47PM +0500, [email protected] wrote:
> > I mean OpenFlow flows that are persistent across OVS or system restarts.
> >
> > -----Original Message-----
> > From: Ben Pfaff [mailto:[email protected]]
> > Sent: Monday, May 7, 2018 9:18 PM
> > To: [email protected]
> > Cc: [email protected]
> > Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9
> >
> > What do you mean by "persist the OVS flows"? I have a couple of
> > guesses but I'd like to hear from you.
> >
> > On Mon, May 07, 2018 at 02:17:18AM +0500, [email protected] wrote:
> > > Thank you Ben for the correction. I am running tests with
> > > different scenarios to better understand what's happening inside OVS-DPDK.
> > > One thing that I would like to ask: is there a way to persist
> > > the OVS flows?
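[Editor's note: one common way to persist flows across restarts, not something proposed in the thread itself, is to dump the flow table to a file and reload it at startup. A sketch; "br0" and the file path are placeholders:]

```shell
# Save the current flow table; --no-stats strips counters and durations so the
# output can be fed back to add-flows ("br0" and the path are placeholders).
ovs-ofctl dump-flows --no-stats br0 > /etc/openvswitch/flows.txt

# Later (e.g. from a boot-time service), restore the saved flows in one batch.
ovs-ofctl add-flows br0 /etc/openvswitch/flows.txt
```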
> > > Can OVN help me do that? I don't have any virtual networks, just
> > > in and out ports.
> > >
> > > -----Original Message-----
> > > From: Ben Pfaff [mailto:[email protected]]
> > > Sent: Friday, May 4, 2018 11:29 AM
> > > To: [email protected]
> > > Cc: [email protected]
> > > Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9
> > >
> > > It's mostly for historical reasons.
> > >
> > > We do try to document in ovs-vswitchd(8) that the user should not
> > > manage datapaths themselves:
> > >
> > >     ovs-vswitchd does all the necessary management of Open vSwitch
> > >     datapaths itself. Thus, external tools, such as ovs-dpctl(8), are
> > >     not needed for managing datapaths in conjunction with
> > >     ovs-vswitchd, and their use to modify datapaths when ovs-vswitchd
> > >     is running can interfere with its operation. (ovs-dpctl may
> > >     still be useful for diagnostics.)
> > >
> > > I guess that the wording should be updated to reflect the "ovs-appctl"
> > > interface too.
> > >
> > > I sent a patch to improve the docs here:
> > > https://patchwork.ozlabs.org/patch/908532/
> > >
> > > On Thu, May 03, 2018 at 06:44:43PM +0500, [email protected] wrote:
> > > > If "ovs-vswitchd" manages the datapaths, why does it have a
> > > > utility that lets me create more of them? And when I create them
> > > > I cannot use them. I am stuck in a loop :) .
> > > >
> > > > -----Original Message-----
> > > > From: Ben Pfaff [mailto:[email protected]]
> > > > Sent: Thursday, May 3, 2018 4:41 PM
> > > > To: [email protected]
> > > > Cc: [email protected]
> > > > Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9
> > > >
> > > > On Wed, May 02, 2018 at 10:02:04PM +0500, [email protected] wrote:
> > > > > I am trying to create multiple dpdk-netdev based datapaths
> > > > > with OVS 2.9 and DPDK 16.11 running on CentOS 7.4.
> > > > > I am able to create multiple datapaths using
> > > > > "ovs-appctl dpctl/add-dp netdev@netdev1", and I can see a new
> > > > > datapath created with "ovs-appctl dpctl/show". However, I cannot
> > > > > add any interfaces (dpdk or otherwise), and I cannot set this
> > > > > datapath as the datapath_type of any bridge.
> > > >
> > > > That's not useful or a good idea. ovs-vswitchd manages datapaths itself.
> > > > Adding and removing them yourself will not help.
> > > >
> > > > > Just a recap of why I am trying to do this: I am working with
> > > > > a lot of OVS OpenFlow rules (around 0.5 million) matching layer 3
> > > > > and layer 4 fields. The incoming traffic is more than 40 G
> > > > > (4 x 10G Intel X520s) and has multiple parallel flows (over a
> > > > > million IPs). With this, the OVS performance decreases and each
> > > > > port forwards only around 250 Mb/s. I am using multiple RX queues
> > > > > (4-6); with a single RX queue it drops to 70 Mb/s. Now if I shut
> > > > > down three of the 10G interfaces, an interesting thing happens:
> > > > > OVS starts forwarding over 7 Gb/s on that single remaining
> > > > > interface. That got me thinking that maybe the reason for the low
> > > > > performance is 40 G of traffic hitting a single bridge's flow
> > > > > tables, so how about creating multiple bridges with multiple flow
> > > > > tables? With that setup the situation remained the same, and now
> > > > > the only thing the 4 interfaces share is the datapath. They are
> > > > > not sharing anything else: they are polled by dedicated vCPUs,
> > > > > and they are in different tables.
> > > > >
> > > > > Can anyone explain this bizarre scenario of why OVS is able to
> > > > > forward more traffic over a single interface polled by 6 vCPUs
> > > > > than over 4 interfaces polled by 24 vCPUs?
> > > > >
> > > > > Also, is there a way to create multiple datapaths and remove
> > > > > this dependency as well?
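[Editor's note: the multi-queue setup described above is configured per interface. A sketch, assuming OVS 2.9 with DPDK ports; the port name "dpdk0" and the queue/core numbers are placeholders:]

```shell
# Give a DPDK port 4 RX queues ("dpdk0" is a placeholder port name).
ovs-vsctl set Interface dpdk0 options:n_rxq=4

# Optionally pin specific RX queues to specific PMD cores (queue:core pairs),
# overriding the default round-robin assignment across the pmd-cpu-mask.
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2,1:4,2:6,3:8"
```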
> > > > You can create multiple bridges with "ovs-vsctl add-br". OVS
> > > > doesn't use multiple datapaths.
> > > >
> > > > Maybe someone who understands the DPDK port better can suggest
> > > > some reason for the performance characteristics that you see.

_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
