On 30/11/2018 17:17, LIU Yulong wrote:
> 
> 
> On Fri, Nov 30, 2018 at 5:36 PM Lam, Tiago <tiago....@intel.com> wrote:
> 
>     On 30/11/2018 02:07, LIU Yulong wrote:
>     > Hi,
>     >
>     > Thanks for the reply, please see my inline comments below.
>     >
>     >
>     > On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago <tiago....@intel.com> wrote:
>     >
>     >     On 29/11/2018 08:24, LIU Yulong wrote:
>     >     > Hi,
>     >     >
>     >     > We recently tested ovs-dpdk, but we ran into a bandwidth
>     >     > issue. The bandwidth from VM to VM was nowhere near the
>     >     > physical NIC's: about 4.3Gbps on a 10Gbps NIC. For the no-dpdk
>     >     > (virtio-net) VMs, an iperf3 test can easily reach 9.3Gbps. We
>     >     > enabled virtio multiqueue for all guest VMs. In the dpdk
>     >     > vhostuser guest, we noticed that the interrupts are
>     >     > concentrated on only one queue, whereas for the no-dpdk VM the
>     >     > interrupts are hashed across all queues. For those dpdk
>     >     > vhostuser VMs, we also noticed that the PMD usage was
>     >     > concentrated on one core, on both the server (tx) and the
>     >     > client (rx) side. And no matter whether we use one PMD or
>     >     > multiple PMDs, this behavior always exists.
>     >     >
>     >     > Furthermore, my colleague added some systemtap hooks on the
>     >     > openvswitch functions, and he found something interesting: the
>     >     > function __netdev_dpdk_vhost_send sends all the packets to one
>     >     > virtionet queue. It seems that some algorithm/hash table/logic
>     >     > does not do the hashing very well.
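>     >     >
>     >     > (For reference, this is how the rx-queue-to-PMD distribution
>     >     > and the per-PMD load can be inspected on the host, using the
>     >     > standard OvS commands:
>     >     >
>     >     >   ovs-appctl dpif-netdev/pmd-rxq-show
>     >     >   ovs-appctl dpif-netdev/pmd-stats-show
>     >     >
>     >     > The first shows which rx queues each PMD polls; the second
>     >     > shows how busy each PMD is.)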
>     >     >
>     >
>     >     Hi,
>     >
>     >     When you say "no dpdk VMs", you mean that within your VM
>     >     you're relying on the Kernel to get the packets, using
>     >     virtio-net. And when you say "dpdk vhostuser guest", you mean
>     >     you're using DPDK inside the VM to get the packets. Is this
>     >     correct?
>     >
>     >
>     > Sorry for the inaccurate description. I'm really new to DPDK.
>     > There is no DPDK inside the VM; all these settings are for the
>     > host only. (`host` meaning the physical hypervisor machine, in
>     > virtualization terms, and `guest` meaning the virtual machine.)
>     > "no dpdk VMs" means the host does not set up DPDK (ovs is working
>     > in the traditional way), and the VMs were booted on that host.
>     > Maybe a better name would be `VMs-on-NO-DPDK-host`?
> 
>     Got it. Your "no dpdk VMs" setup is what's usually referred to as
>     OvS-Kernel, while your "dpdk vhostuser guest" setup is OvS-DPDK.
> 
>     >
>     >     If so, could you also tell us which DPDK app you're using
>     >     inside of those VMs? Is it testpmd? If so, how are you setting
>     >     the `--rxq` and `--txq` args? Otherwise, how are you setting
>     >     those in your app when initializing DPDK?
>     >
>     >
>     > Inside the VM there is no DPDK app, and the VM kernel does not set
>     > any DPDK-related config either. `iperf3` is the tool used for
>     > bandwidth testing.
>     >
>     >     The information below is useful in telling us how you're
>     >     setting your configurations in OvS, but we are still missing
>     >     the configurations inside the VM.
>     >
>     >     This should help us in getting more information,
>     >
>     >
>     > Maybe you have noticed that we only set up one PMD in the pasted
>     > configuration. But the VM has 8 queues. Should the number of PMDs
>     > match the number of queues?
> 
>     It shouldn't need to match the queues inside the VM per se. But in
>     this case, since you have configured 8 rx queues on your physical
>     NICs as well, and since you're looking for higher throughput, you
>     should increase the number of PMDs and pin those rxqs - take a look
>     at [1] on how to do that. Later on, increasing the size of your
>     queues could also help.
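> 
>     As a rough sketch (the core mask and the port name "dpdk0" below are
>     only placeholders; adjust them to the cores and ports of your own
>     setup), the extra PMDs and the rxq pinning can be set with:
> 
>       ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C
>       ovs-vsctl set Interface dpdk0 \
>           other_config:pmd-rxq-affinity="0:2,1:3,2:4,3:5,4:2,5:3,6:4,7:5"
> 
>     and, later on, the queue sizes can be increased with, for example:
> 
>       ovs-vsctl set Interface dpdk0 options:n_rxq_desc=2048 \
>           options:n_txq_desc=2048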
> 
> 
> I'll test it. 
> Yes, as you noticed, the vhostuserclient port has n_rxq="8":
> options:
> {n_rxq="8",vhost-server-path="/var/lib/vhost_sockets/vhu76f9a623-9f"}
> And the physical NICs have both n_rxq="8" and n_txq="8":
> options: {dpdk-devargs="0000:01:00.0", n_rxq="8", n_txq="8"}
> options: {dpdk-devargs="0000:05:00.1", n_rxq="8", n_txq="8"}
> But, furthermore, when such configuration is removed from both the
> vhostuserclient port and the physical NICs, the bandwidth is still the
> same 4.3Gbps, no matter whether one PMD or multiple PMDs are used.
>  
> 
>     Just out of curiosity, I see you have configured an MTU of 1500B on
>     the physical interfaces. Is that the same MTU you're using inside
>     the VM? And are you using the same configurations (including that
>     1500B MTU) when running your OvS-Kernel setup?
> 
> 
> The MTU inside the VM is 1450. Is that OK for high throughput?

This will depend on what you're trying to achieve with this setup, and
on the type of traffic. If this is mainly for internal, east-west
traffic, and you can afford to set higher MTUs on your setup, that will
help you achieve higher throughput - try setting the MTU of both the VM
and the physical interfaces to 9000B, for example.
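
For example (just a sketch; the dpdk port name and the guest interface
name below are placeholders, while the vhost port name is the one from
your configuration):

  ovs-vsctl set Interface dpdk0 mtu_request=9000
  ovs-vsctl set Interface vhu76f9a623-9f mtu_request=9000

and, inside the guest:

  ip link set dev eth0 mtu 9000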

This doesn't look to be the case according to your configurations, but
if for some reason you're setting the MTU in the VM to a larger value
than on the physical NICs (1550B in the VM and 1500B on the physical
NICs, for example), then you could incur double segmentation (first in
the VM and then on the host), which would hurt your overall performance
and throughput.

Hope this helps,

Tiago.