> -----Original Message-----
> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> Sent: Tuesday, February 20, 2018 2:55 PM
> To: ovs-dev@openvswitch.org; O Mahony, Billy <billy.o.mah...@intel.com>
> Subject: Re: Re: [ovs-dev] [RFC 0/2] Ingress Scheduling
> 
> > This patch set implements the 'preferential read' part of the feature
> > of ingress scheduling described at OvS 2017 Fall Conference
> > https://www.slideshare.net/LF_OpenvSwitch/lfovs17ingress-scheduling-82280320.
> >
> > It allows configuration to specify an ingress priority for an entire
> > interface. This protects traffic on higher-priority interfaces from
> > loss and added latency as PMDs become overloaded.
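The exact configuration knob is defined in the schema patch; below is only a
minimal sketch of how such a per-interface priority could be picked up from the
Interface table's other_config column in a netdev set_config() path. The key
name "ingress_priority" and the 0..3 range are placeholders here, not
necessarily what the patches use.

    /* Illustrative only -- not necessarily the code in these patches.  Reads a
     * per-interface ingress priority from the Interface table's other_config
     * column inside a netdev set_config() path.  The key name
     * "ingress_priority" and the 0..3 range are placeholders. */
    #include <config.h>
    #include "smap.h"

    static int
    ingress_priority_from_config(const struct smap *args)
    {
        /* smap_get_int() returns the default (0) when the key is absent. */
        int prio = smap_get_int(args, "ingress_priority", 0);

        /* The tests below use priorities 0..3, so clamp to that range here. */
        if (prio < 0) {
            prio = 0;
        } else if (prio > 3) {
            prio = 3;
        }
        return prio;
    }

With a key like that, configuration would be a one-liner along the lines of
'ovs-vsctl set Interface dpdk0 other_config:ingress_priority=3' (again, the key
name is assumed here).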
> >
> > Results so far are very promising: for a uniform traffic distribution,
> > as total offered load increases, loss starts on the lowest-priority
> > port first and on the highest-priority port last.
> >
> > When using four physical ports, with each port forwarded to one of the
> > other ports, the following packet loss is seen. The EMC was bypassed
> > in this case and a small delay loop was added per packet to simulate a
> > more realistic per-packet processing cost of approximately 1000 cycles.
> >
> > Port     dpdk_0  dpdk_1  dpdk_2  dpdk_3
> > Traffic
> > Dist.       25%     25%     25%     25%
> > Priority      0       1       2       3
> > n_rxq         8       8       8       8
> >
> > Total
> > Load Kpps   Loss Rate Per Port (Kpps)
> > 2110          0       0       0       0
> > 2120          5       0       0       0
> > 2920        676       0       0       0
> > 2930        677       5       0       0
> > 3510        854     411       0       0
> > 3520        860     415       3       0
> > 4390       1083     716     348       0
> > 4400       1088     720     354       1
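For scale: with the uniform 25% split, a total load of 2930 Kpps puts roughly
2930 * 0.25 ~= 732 Kpps on each port, so the 677 Kpps loss on dpdk_0 means
nearly all of the lowest-priority traffic is dropped before the higher-priority
ports see any loss at all.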
> >
> >
> > Even in the case where most of the traffic is on the highest-priority
> > port, this remains the case:
> >
> > Port     dpdk_0  dpdk_1  dpdk_2  dpdk_3
> > Traffic
> > Dist.       10%     20%     30%     40%
> > Priority      0       1       2       3
> > n_rxq         8       8       8       8
> >
> > Total
> > Load Kpps   Loss Rate Per Port (Kpps)
> >      2400     0       0       0       0
> >      2410     5       0       0       0
> >      2720   225       5       0       0
> >      2880   252     121       9       0
> >      3030   269     176      82       3
> >      4000   369     377     384     392
> >      5000   471     580     691     801
> >
> > The latency characteristics of the traffic on the higher-priority
> > ports are also protected.
> >
> > Port     dpdk_0  dpdk_1  dpdk_2  dpdk_3
> > Traffic
> > Dist.       10%     20%     30%     40%
> > Priority      0       1       2       3
> > n_rxq         8       8       8       8
> >
> > Total        Latency Per Port
> > Load Kpps     dpdk0    dpdk1    dpdk2    dpdk3
> >      2400      113      122      120      125
> >      2410    36117      571      577      560
> >      2720   323242    14424     3265     3235
> >      2880   391404    33508    10075     4600
> >      3030   412597    35450    17061     7965
> >      4000   414729    36070    17740    11106
> >      5000   416801    36445    18233    11653
> >
> > Some general setup notes:
> > NIC: Fortville X710-DA4 (firmware-version: 6.01 0x800034af 1.1747.0)
> > CPU: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz, one PMD.
> > Port forwarding: port 0 <-> 1, 2 <-> 3.
> > Traffic: 64B frames, UDP, 221 streams per port.
> > OvS base: 4c80644 from http://github.com/istokes/ovs branch dpdk_merge_2_9,
> > with approx. 600 cycles of extra per-packet processing added to bring the
> > per-packet cost to ~1000 cycles.
> > DPDK v17.11.1
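The ~600 cycles of extra per-packet processing mentioned above can be
approximated with a simple TSC busy-wait; a minimal sketch of that approach
(an assumption, not the code actually used for these tests) is:

    /* Illustrative only -- not the delay code used for the tests above.
     * Burns roughly 'cycles' TSC cycles per call to inflate per-packet cost. */
    #include <stdint.h>
    #include <x86intrin.h>      /* __rdtsc(), _mm_pause() */

    static inline void
    burn_cycles(uint64_t cycles)
    {
        uint64_t start = __rdtsc();

        while (__rdtsc() - start < cycles) {
            _mm_pause();        /* ease pressure on the sibling hyper-thread */
        }
    }

    /* e.g. call burn_cycles(600) once per packet in the datapath's processing
     * loop to add ~600 cycles of artificial work. */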
> >
> 
> Hi Billy, thanks for your work on this feature.
> I have one question here. Have you tested heterogeneous configurations?

[[BO'M]] I have performed some tests with virtual devices which show a similar 
effect: VMs with prioritized vhost ports achieve higher TCP throughput at the 
cost of bandwidth between the non-prioritized ones. But I have not tested a 
mixture of physical and virtual ports, or of different NIC types. I do plan 
such tests as part of a closer characterization of how TCP transfers are 
affected.

> I mean situations where a PMD thread polls different types of ports (NICs).
> It's known that RX operations have different costs for different port types
> (like virtual and physical) and also for different hardware NICs because of
> different implementations of the DPDK PMD drivers.

[[BO'M]] That is an interesting point. Any scheme that protects (i.e. reads more 
frequently from) an expensive-to-read port will impose a greater cost on the 
remaining ports than if it were protecting a cheap-to-read port. Some packets 
are always going to be more costly than others, depending on how they are 
processed but also, as you point out, on their ingress and egress port types. 
And if you want to process one more 'expensive' packet, it is going to cost you 
more than one cheaper packet. The different levels of prioritization do offer a 
little flexibility here, effectively allowing some extra cycles, or a lot of 
extra cycles, to be spent protecting traffic on different ports.
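To make that trade-off concrete, here is a minimal sketch of one way a PMD loop
could implement 'preferential read' by polling higher-priority rxqs more often
per iteration. This is an illustration of the concept only, not the
implementation in the patches, and rxq_recv_burst() is a hypothetical helper:

    /* Illustrative sketch of 'preferential read', not the implementation in
     * these patches: each rxq gets one mandatory read per PMD iteration plus
     * 'priority' extra reads, so prioritized queues drain sooner at the
     * expense of cycles available to the rest. */
    #include <stddef.h>

    struct rxq {
        int priority;           /* 0 = lowest, as in the tables above. */
        /* ... queue state ... */
    };

    /* Hypothetical helper: receive one burst from an rxq, return packets read. */
    int rxq_recv_burst(struct rxq *rxq);

    static void
    pmd_poll_iteration(struct rxq **rxqs, size_t n_rxqs)
    {
        for (size_t i = 0; i < n_rxqs; i++) {
            int reads = 1 + rxqs[i]->priority;

            for (int r = 0; r < reads; r++) {
                if (!rxq_recv_burst(rxqs[i])) {
                    break;      /* Queue empty -- no point re-reading it. */
                }
            }
        }
    }

How much the extra reads cost the remaining ports then depends directly on how
expensive a single read is for the prioritized port type, which is exactly the
heterogeneity concern raised above.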

> 
> So, my concern: is it possible that setting a higher priority for a slower
> port type (maybe vhost) may degrade the overall performance of the current
> PMD thread? It would be nice to have this kind of testing results.
> It'll be nice to have this kind of testing results.
> 
> Best regards, Ilya Maximets.
> 
> > Billy O'Mahony (2):
> >   ingress scheduling: schema and docs
> >   ingress scheduling: Provide per interface ingress priority
> >
> >  Documentation/howto/dpdk.rst    | 18 ++++++++++++
> >  include/openvswitch/ofp-parse.h |  3 ++
> >  lib/dpif-netdev.c               | 47 +++++++++++++++++++++---------
> >  lib/netdev-bsd.c                |  1 +
> >  lib/netdev-dpdk.c               | 64 +++++++++++++++++++++++++++++++++++++++--
> >  lib/netdev-dummy.c              |  1 +
> >  lib/netdev-linux.c              |  1 +
> >  lib/netdev-provider.h           | 11 ++++++-
> >  lib/netdev-vport.c              |  1 +
> >  lib/netdev.c                    | 23 +++++++++++++++
> >  lib/netdev.h                    |  2 ++
> >  vswitchd/bridge.c               |  2 ++
> >  vswitchd/vswitch.ovsschema      |  9 ++++--
> >  vswitchd/vswitch.xml            | 40 ++++++++++++++++++++++++++
> >  14 files changed, 205 insertions(+), 18 deletions(-)
> >
> > --
> > 2.7.4