Hi everyone,

I am testing the PVP performance of OvS-DPDK on a Cascade Lake server. It is 
running the latest Ubuntu 20.04 with Linux kernel 5.4.0-39-generic. I am 
using DPDK 19.11 and the latest OvS master branch from GitHub. The grub 
command-line parameters are as follows:
ro default_hugepagesz=1G hugepagesz=1G hugepages=24 isolcpus=12-71 
nohz_full=12-71 rcu_nocbs=12-71 intel_iommu=on intel_pstate=disable 
intel_idle.max_cstate=0 processor.max_cstate=0 security=selinux selinux=1 
vt.handoff=1
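
To confirm that the kernel actually reserved the 1G hugepages at boot, I check the standard sysfs and procfs locations (the sysfs path below is the usual one for 1 GiB pages; it may be absent on kernels built without 1G hugepage support):

```shell
# Number of 1 GiB hugepages reserved at boot (should match hugepages=24)
if [ -r /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages ]; then
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
fi

# Summary of hugepage accounting
grep -i huge /proc/meminfo
```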

The test environment setup for the PVP scenario is as follows:
DPDK configuration for OvS
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:hw-offload=false
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:max-idle=500000
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:n-rxq=1
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:n-txq=1
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x1100 
[These are isolated CPUs]
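
For reference, a hex cpu-mask is just a bitmap of CPU IDs, so it is easy to double-check which cores a given mask selects with plain shell arithmetic (the CPU IDs below are only for decoding the 0x1100 value used above):

```shell
# Build a pmd-cpu-mask from a list of CPU IDs and print it
mask=0
for cpu in 8 12; do
    mask=$(( mask | (1 << cpu) ))
done
printf '0x%x\n' "$mask"   # -> 0x1100

# Decode a mask back into CPU IDs
mask=0x1100
for cpu in $(seq 0 71); do
    if [ $(( (mask >> cpu) & 1 )) -eq 1 ]; then
        echo "CPU $cpu"    # prints CPU 8 and CPU 12
    fi
done
```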

I am performing the PVP testing under two scenarios, EMC insertion disabled and 
EMC insertion for every flow, and configure the setup accordingly:
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:emc-insert-inv-prob=0  (insertion disabled)
sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:emc-insert-inv-prob=1  (insert for every flow)
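
To confirm which EMC mode is active and see its effect while traffic is running, the counters exposed by ovs-appctl can be checked (this assumes the bridge and ports are already created; the exact counter labels can vary slightly between OvS versions):

```shell
# Current EMC insertion setting (inverse probability; 0 = insertion disabled)
sudo ./ovs-vsctl get Open_vSwitch . other_config:emc-insert-inv-prob

# Per-PMD cache statistics: compare the "emc hits" vs "megaflow hits" lines
sudo ./ovs-appctl dpif-netdev/pmd-stats-show
```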

PHY-VM-PHY Configuration
I have configured it exactly as shown in the Using Open vSwitch with 
DPDK<http://docs.openvswitch.org/en/latest/howto/dpdk/> guide. The guest VM 
runs Ubuntu 18.04, and the XML file used to launch it is as follows:

<domain type='kvm'>

  <name>virt_ubuntu_vm</name>

  <uuid>80849065-dfc1-4f98-bc8a-794cf2566999</uuid>

  <metadata>

    <libosinfo:libosinfo 
xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">

      <libosinfo:os id="http://ubuntu.com/ubuntu/20.04"/>

    </libosinfo:libosinfo>

  </metadata>

  <memory unit='KiB'>4194304</memory>

  <currentMemory unit='KiB'>4194304</currentMemory>

  <memoryBacking>

    <hugepages>

      <page size='1048576' unit='KiB' nodeset='0'/>

    </hugepages>

    <discard/>

  </memoryBacking>

  <vcpu placement='static'>4</vcpu>

  <cputune>

    <shares>4096</shares>

    <vcpupin vcpu='0' cpuset='14'/>

    <vcpupin vcpu='1' cpuset='16'/>

    <vcpupin vcpu='2' cpuset='18'/>

    <vcpupin vcpu='3' cpuset='20'/>

    <emulatorpin cpuset='14,16,18,20'/>

  </cputune>

  <os>

    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>

    <boot dev='hd'/>

  </os>

  <features>

    <acpi/>

    <apic/>

  </features>

  <cpu mode='host-model' check='partial'>

    <topology sockets='1' cores='4' threads='1'/>

    <numa>

      <cell id='0' cpus='0-3' memory='4194304' unit='KiB' memAccess='shared'/>

    </numa>

  </cpu>

  <on_poweroff>destroy</on_poweroff>

  <on_reboot>restart</on_reboot>

  <on_crash>destroy</on_crash>

  <devices>

    <emulator>/usr/bin/qemu-system-x86_64</emulator>

    <disk type='file' device='disk'>

      <driver name='qemu' type='raw' cache='none'/>

      <source file='/home/malvika/qemu/bin/images/new_virt_hda4.img'/>

      <target dev='vda' bus='virtio'/>

      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' 
function='0x0'/>

    </disk>


    <interface type='vhostuser'>

      <mac address='00:00:00:00:00:01'/>

      <source type='unix' path='/home/malvika/var/run/openvswitch/vhost-user1' 
mode='client'/>

      <model type='virtio'/>

      <driver queues='2'>

        <host mrg_rxbuf='off'/>

      </driver>

      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' 
function='0x0'/>

    </interface>

    <interface type='vhostuser'>

      <mac address='00:00:00:00:00:02'/>

      <source type='unix' path='/home/malvika/var/run/openvswitch/vhost-user2' 
mode='client'/>

      <model type='virtio'/>

      <driver queues='2'>

        <host mrg_rxbuf='off'/>

      </driver>

      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' 
function='0x0'/>

    </interface>

  </devices>

</domain>
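
For completeness, since the XML above connects to the vhost-user sockets in client mode, OvS must create the matching server-mode sockets. Per the guide, that is done with dpdkvhostuser ports on a netdev bridge; this is a sketch of my setup (the bridge name br0 is my choice, and the PCI address of the physical NIC is a placeholder):

```shell
# Userspace datapath bridge
sudo ./ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Physical DPDK port (dpdk-devargs is a placeholder for my NIC's PCI address)
sudo ./ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 \
    type=dpdk options:dpdk-devargs=0000:xx:00.0

# vhost-user server ports; the sockets appear under the OvS run directory,
# matching the <source path='...'/> entries in the XML
sudo ./ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
sudo ./ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
```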

Once the VM has booted, I follow the steps of the DPDK vHost User Ports 
guide<http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/>: allocate 
hugepages in the guest VM, install DPDK, bind the vhost-user interfaces to the 
vfio-pci driver (as opposed to the uio driver shown in the guide), and run 
testpmd in io forwarding mode.
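
Concretely, inside the guest the two virtio devices sit at the PCI addresses assigned in the XML (06:00.0 and 07:00.0). The binding and testpmd invocation look roughly like this (DPDK 19.11 make-build paths; the core count and the no-IOMMU step are my choices, the latter needed because the guest has no vIOMMU):

```shell
# Bind the virtio-net devices to vfio-pci (no-IOMMU mode, as the guest lacks a vIOMMU)
sudo modprobe vfio-pci
echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:06:00.0 0000:07:00.0

# Run testpmd in io forwarding mode across both ports,
# with two queues per port to match the XML's driver queues='2'
sudo ./x86_64-native-linux-gcc/app/testpmd -l 0-3 -n 4 -- \
    -i --nb-cores=2 --rxq=2 --txq=2 --forward-mode=io
```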

I mainly have two questions:

  1.  For 1 flow, 1K flows, and 10K flows, what performance numbers should I 
expect to see with my current system and OvS-DPDK configuration?
  2.  Is this configuration correct to achieve the best (highest-throughput) PVP 
performance on the Cascade Lake server? If not, what should I do differently to 
achieve it?

I would really appreciate any input or suggestions from Intel folks as well as 
other community members. Please let me know if you need any more information 
from my side.

Thank you for your time,
Malvika