Enabling/disabling EMC has no effect in this scenario. As far as I know
there is one EMC per PMD thread, so the interfaces have their own EMCs. The
bigger question is why traffic on one interface affects the performance
of the other. Are they sharing anything? The only things I can think of are
the datapath and the megaflow table, and I am looking for some way to
separate them. If this doesn't work, my only other option is to have 4 VMs
with pass-through interfaces and run OVS-DPDK inside the VMs.
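One way to keep the interfaces from sharing PMD threads (and thus their EMCs) is to pin each port's rx queues to dedicated PMD cores. A minimal sketch; the port names, core mask, and queue-to-core mapping below are assumptions and need to match your NUMA layout:

```shell
# Assumed core mask: reserve cores 2-5 for PMD threads.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C

# Pin each port's rx queues to its own cores so the two ports
# never share a PMD thread (port names dpdk0/dpdk1 are assumptions).
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2,1:3"
ovs-vsctl set Interface dpdk1 other_config:pmd-rxq-affinity="0:4,1:5"

# Verify the resulting rxq-to-PMD placement.
ovs-appctl dpif-netdev/pmd-rxq-show
```

This isolates the per-PMD caches, but both ports still go through the same userspace datapath, so it may not remove all interference.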


-----Original Message-----
From: O'Reilly, Darragh [mailto:[email protected]] 
Sent: Thursday, May 3, 2018 5:49 PM
To: [email protected]; [email protected]
Subject: RE: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

On Wed, May 02, 2018 at 10:02:04PM +0500, [email protected]
wrote:

> Can anyone explain this bizarre scenario: why is OVS able to forward
> more traffic over a single interface polled by 6 vCPUs than over
> 4 interfaces polled by 24 vCPUs?

Not really, but I would look at the cache stats: ovs-appctl
dpif-netdev/pmd-stats-show
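To get a clean sample of the EMC/megaflow hit rates under load, the counters can be cleared first and re-read after traffic has run for a while; a minimal sequence:

```shell
# Reset the per-PMD counters, then generate traffic for some interval.
ovs-appctl dpif-netdev/pmd-stats-clear

# Show per-PMD cache statistics: emc hits, megaflow hits, misses,
# and lost packets, per polling thread.
ovs-appctl dpif-netdev/pmd-stats-show
```

A low EMC hit rate or a high miss count on the multi-interface setup would point at where the extra per-packet cost is going.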


_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss