> On May 24, 2017, at 3:29 AM, Avi Cohen (A) <[email protected]> wrote:
> 
> Hello
> Let me ask it in a different way:
> I want to understand the reasons for the differences in performance between 
> OVS-DPDK and standard OVS. My setup is: OVS / OVS-DPDK is running on the host, 
> communicating with a VM.
> 
> OVS-DPDK
> 1. The packet is received on a physical port of the NIC. 
> 
> 2. The NIC DMAs the packet into mempools on hugepages allocated by OVS-DPDK in 
> user space.
> 
> 3. OVS-DPDK copies this packet into the vring of the associated guest (the ring 
> is shared between the OVS-DPDK userspace process and the guest). 
> 
> 4. The guest OS copies the packet to the userspace application in the VM.
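[The OVS-DPDK path described in the steps above is typically wired up with a
netdev-datapath bridge and vhost-user ports. A minimal sketch for illustration
only - bridge name, port names, and the PCI address are placeholders, and exact
options vary by OVS version (newer releases require options:dpdk-devargs):]

```shell
# Bridge using the userspace (netdev) datapath instead of the kernel datapath
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Physical DPDK-bound NIC: steps 1-2, the NIC DMAs frames into
# hugepage-backed mempools owned by the OVS-DPDK process
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0

# vhost-user port: step 3, OVS-DPDK copies packets into a vring
# shared with the guest's virtio-net device
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
```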
> 
> Standard OVS
> 
> 1. The packet is received on a physical port of the NIC. 
> 
> 2. The packet is processed by OVS and transferred to a virtio device connected 
> to the VM - what is the additional overhead here? QEMU processing - 
> translation, VM exits? Other?
> 
> 3. The guest OS copies the packet to the userspace application in the VM.
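[For comparison, the standard setup above uses the kernel datapath, with the VM
attached through a tap device (usually backed by vhost-net). A minimal sketch -
the tap name is a placeholder chosen by QEMU/libvirt:]

```shell
# Standard OVS bridge: flows are matched in the openvswitch kernel module
ovs-vsctl add-br br0

# The VM's virtio-net device is backed by a tap interface on the host;
# adding the tap to the bridge puts the guest on the kernel datapath
ovs-vsctl add-port br0 vnet0
```

[On this path every packet crosses the kernel (NIC interrupt/softirq handling,
the openvswitch module, and the tap/vhost-net copy into the guest), which is
the extra work the userspace OVS-DPDK path avoids.]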
> 
> 
> Question: what is the additional overhead in the standard OVS setup that causes 
> poor performance relative to the OVS-DPDK setup?
> I'm not talking about the PMD improvements (OVS-DPDK) running on the host, but 
> about the overhead in the VM context in the standard OVS setup.

The primary reasons are that standard OVS is not using DPDK and is going through 
the Linux kernel as well :-)

> 
> Best Regards
> avi

Regards,
Keith
