On 04/10/2017 08:03 AM, Darrell Ball wrote:
> 
> 
> On 4/5/17, 7:52 AM, "ovs-dev-boun...@openvswitch.org on behalf of O Mahony, 
> Billy" <ovs-dev-boun...@openvswitch.org on behalf of 
> billy.o.mah...@intel.com> wrote:
> 
>     > -----Original Message-----
>     > From: Kevin Traynor [mailto:ktray...@redhat.com]
>     > Sent: Wednesday, April 5, 2017 2:58 PM
>     > To: O Mahony, Billy <billy.o.mah...@intel.com>; Maxime Coquelin
>     > <maxime.coque...@redhat.com>; d...@openvswitch.org
>     > Subject: Re: [ovs-dev] [PATCH] netdev-dpdk: Enable INDIRECT_DESC on
>     > DPDK vHostUser.
>     > 
>     > On 03/20/2017 11:18 AM, O Mahony, Billy wrote:
>     > > Hi Maxime,
>     > >
>     > >> -----Original Message-----
>     > >> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
>     > >> Sent: Friday, March 17, 2017 9:48 AM
>     > >> To: O Mahony, Billy <billy.o.mah...@intel.com>; d...@openvswitch.org
>     > >> Subject: Re: [ovs-dev] [PATCH] netdev-dpdk: Enable INDIRECT_DESC on
>     > >> DPDK vHostUser.
>     > >>
>     > >> Hi Billy,
>     > >>
>     > >> On 03/01/2017 01:36 PM, Billy O'Mahony wrote:
>     > >>> Hi All,
>     > >>>
>     > >>> I'm creating this patch on the basis of performance results outlined
>     > >>> below. In summary it appears that enabling INDIRECT_DESC on DPDK
>     > >>> vHostUser ports leads to a very large increase in performance when
>     > >>> using Linux stack applications in the guest, with no noticeable
>     > >>> performance drop for DPDK-based applications in the guest.
>     > >>>
>     > >>> Test#1 (VM-VM iperf3 performance)
>     > >>>  VMs use DPDK vhostuser ports
>     > >>>  OVS bridge is configured for normal action.
>     > >>>  OVS version 603381a (on 2.7.0 branch but before release,
>     > >>>      also seen on v2.6.0 and v2.6.1)
>     > >>>  DPDK v16.11
>     > >>>  QEMU v2.5.0 (also seen with v2.7.1)
>     > >>>
>     > >>>  Results:
>     > >>>   INDIRECT_DESC enabled    5.30 Gbit/s
>     > >>>   INDIRECT_DESC disabled   0.05 Gbit/s
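The VM-VM setup described for Test#1 (dpdkvhostuser ports on a NORMAL-action bridge) can be created with commands along these lines; the bridge and port names here are placeholders, not taken from the actual test:

```shell
# Create a userspace (netdev) datapath bridge for OVS-DPDK.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Add one dpdkvhostuser port per VM; QEMU connects to the resulting sockets.
ovs-vsctl add-port br0 vhost-user0 \
    -- set Interface vhost-user0 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user1 \
    -- set Interface vhost-user1 type=dpdkvhostuser

# The bridge forwards with the NORMAL action (MAC learning), as in the test.
ovs-ofctl add-flow br0 "actions=NORMAL"
```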
>     > >> This is indeed a big gain.
>     > >> However, isn't there a problem when indirect descriptors are
>     > >> disabled? 0.05 Gbit/s is very low, no?
>     > >
>     > > [[BO'M]] Yes, the disabling of the indirect descriptors feature
>     > > appears to be what causes the very low test result, and the root cause
>     > > may actually be related to this bug:
>     > > https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1668829
>     > > However, turning on indirect descriptors certainly helps greatly.
>     > >
>     > >>
>     > >> Could you share the iperf3 command line you used?
>     > >
>     > >  [[BO'M]] In the server VM, "iperf3 -s"; in the client, "iperf3 -c
>     > > <server ip addr> -t 30", where -t 30 is the test duration in seconds.
>     > > The OVS-DPDK bridge was set to use the normal action.
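Spelled out, the test commands described above look like this; the server address is a placeholder for the server VM's IP on the test network:

```shell
# Server VM: run iperf3 in server mode (listens on TCP port 5201 by default).
iperf3 -s

# Client VM: run a 30-second TCP throughput test against the server.
# Replace <server-ip-addr> with the server VM's address (placeholder).
iperf3 -c <server-ip-addr> -t 30
```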
>     > >
>     > Hi Billy,
>     > 
>     > I ran the iperf test on master and I see very different results from
>     > the ones you got:
>     > 
>     > mrg on/indirect off: 7.10 Gbps
>     > mrg off/indirect off: 5.05 Gbps
>     > mrg off/indirect on: 7.15 Gbps
>     
>     [[BO'M]] 
>     Hi Kevin,
>     
>     By those figures, performance still improves by roughly 40% with indirect
>     descriptors enabled.
>     
>     What version of QEMU did you use (if, as per the Launchpad bug, QEMU is a
>     root cause here)? In that case, kernel versions may also be significant.
>     
>     I was using QEMU 2.5 (tagged release, built locally) and Ubuntu 16.04.1
>     with a 4.4 kernel in the guest.
>     
>     I can retry the tests with the head of master when I get a chance, but the
>     patch still offers a large improvement.
>     
>     Cheers,
>     Billy.
> 
> I tried it as well, on one server…
> Test#1 (VM-VM iperf3 performance):
> DPDK v16.11, QEMU v2.5.0, OVS 2.7.0 branch (also before release; similar
> relevant content).
> mrg off:
> Indirect off: 0.164 Gbps
> Indirect on:  2.01  Gbps
> 
> Are there any significant reasons not to merge this patch?
>  

No, I think it's ok to merge. I did some additional testing with % loss
and DPDK in the guest and did not see any significant difference.

Acked-by: Kevin Traynor <ktray...@redhat.com>

>     > 
>     > Kevin.
>     > 
>     > >>
>     > >>> Test#2 (Phy-VM-Phy RFC2544 Throughput)
>     > >>>  DPDK PMDs are polling the NIC; DPDK loopback app running in guest.
>     > >>>  OVS bridge is configured with port forwarding to the VM (via
>     > >>>      dpdkvhostuser ports).
>     > >>>  OVS version 603381a (on 2.7.0 branch but before release),
>     > >>>      other versions not tested.
>     > >>>  DPDK v16.11
>     > >>>  QEMU v2.5.0 (also seen with v2.7.1)
>     > >>>
>     > >>>  Results:
>     > >>>   INDIRECT_DESC enabled    2.75 Mpps @64B pkts (0.176 Gbit/s)
>     > >>>   INDIRECT_DESC disabled   2.75 Mpps @64B pkts (0.176 Gbit/s)
>     > >>
>     > >> Is this with 0% packet loss?
>     > > [[BO'M]] Yes, to an accuracy of 0.05 Mpps.
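The Phy-VM-Phy port forwarding described for Test#2 is typically wired up with explicit OpenFlow rules rather than the NORMAL action; a minimal sketch, assuming br0 is the DPDK bridge and that OpenFlow port 1 is the physical dpdk port and port 2 the dpdkvhostuser port (bridge name and port numbers are assumptions, not taken from the test):

```shell
# Clear existing flows, then forward all traffic NIC -> guest and guest -> NIC.
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "in_port=2,actions=output:1"
```

The DPDK loopback application in the guest then echoes frames back out the vhostuser port, completing the Phy-VM-Phy path.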
>     > >>
>     > >> Regards,
>     > >> Maxime
>     > > _______________________________________________
>     > > dev mailing list
>     > > d...@openvswitch.org
>     > > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>     > >
>     
> 

