-----Original Message-----
From: "Bodireddy, Bhanuprakash" <bhanuprakash.bodire...@intel.com>
Date: Tuesday, August 8, 2017 at 1:45 AM
To: Darrell Ball <db...@vmware.com>, "d...@openvswitch.org" 
<d...@openvswitch.org>
Subject: RE: [ovs-dev] [PATCH v3 0/6] netdev-dpdk: Use intermediate queue 
during packet transmission.

    Hi Darrell,
    
    >Sorry, I was multitasking last week and did not get a chance to finish the
    >responses on Friday.
    >
    >I looked through the code for all the patches. The last 3 patches of v3
    >needed a manual merge; as you know, the series needs a rebase after the
    >recent commits.
    
    I will rebase and send out v4.
    
    >For a full o/p batch case, I see about a 10% drop in pps; is that what you
    >see?
    
    I see a ~200-250 kpps drop in the P2P case with a single flow, and significant
    improvements once the number of flows reaches the rx batch size.

    Can you please let me know if 'full o/p batch' above means a simple P2P test
    with a single flow? It would be helpful if you could share your traffic
    profile so that I can reproduce it locally.

What I meant is simply the case where the natural TX batch is already 32, which I
can get by pushing a single dp_netdev flow P2P, as you mentioned. This would be
the worst case for intermediate queuing, since the queuing code is pure overhead
there.


    
    >After applying each patch, we should be able to build and nothing should be
    >broken, which is not the case here since patch 4 adds a function that is only
    >used in patch 6.
    >I have some comments on the individual patches.

    I might have introduced this problem when I reordered the patches; I will fix
    it in v4.
    
    - Bhanuprakash.
    
    >
    >Darrell
    >
    >-----Original Message-----
    >From: <ovs-dev-boun...@openvswitch.org> on behalf of Bhanuprakash
    >Bodireddy <bhanuprakash.bodire...@intel.com>
    >Date: Thursday, June 29, 2017 at 3:39 PM
    >To: "d...@openvswitch.org" <d...@openvswitch.org>
    >Subject: [ovs-dev] [PATCH v3 0/6] netdev-dpdk: Use intermediate queue
    >during packet transmission.
    >
    >    After packet classification, packets are queued into batches depending
    >    on the matching netdev flow. Thereafter each batch is processed to
    >    execute the related actions. This becomes particularly inefficient if
    >    there are few packets in each batch, as rte_eth_tx_burst() incurs
    >    expensive MMIO writes.
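
To make the cost concrete, here is a rough sketch of the direct per-batch
transmit path (illustrative only; the function below is made up and is not the
actual netdev-dpdk code). Each rte_eth_tx_burst() call typically ends with a
doorbell (MMIO) write, so a 2-3 packet batch pays roughly the same fixed cost
as a full 32-packet batch:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Transmit one (possibly small) batch directly. The per-call doorbell
     * write is the fixed cost that dominates when 'cnt' is small. */
    static void
    tx_batch_direct(uint16_t port_id, uint16_t queue_id,
                    struct rte_mbuf **pkts, uint16_t cnt)
    {
        uint16_t sent = 0;

        while (sent < cnt) {
            uint16_t ret = rte_eth_tx_burst(port_id, queue_id,
                                            pkts + sent, cnt - sent);
            if (!ret) {
                break;              /* TX ring full. */
            }
            sent += ret;
        }

        /* Free anything the NIC did not accept so mbufs are not leaked. */
        while (sent < cnt) {
            rte_pktmbuf_free(pkts[sent++]);
        }
    }
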
    >
    >    This patch series implements an intermediate queue for DPDK and vHost
    >    User ports. Packets are queued and burst when the packet count exceeds
    >    a threshold. Drain logic is also implemented to handle cases where
    >    packets can get stuck in the tx queues under low-rate traffic
    >    conditions. Care has been taken to keep latency well within acceptable
    >    limits. Testing shows significant performance gains with this
    >    implementation.
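
And a rough sketch of the buffer-then-burst idea described above (again
illustrative; the struct and function names are made up, not the patch's actual
code). Packets accumulate in a per-TX-queue buffer and the NIC doorbell is rung
only when the count crosses a threshold; a periodic flush by the PMD thread
stands in for the drain logic:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define TXQ_BURST_THRESHOLD 32

    /* Per-TX-queue intermediate buffer. */
    struct txq_buf {
        struct rte_mbuf *pkts[TXQ_BURST_THRESHOLD];
        uint16_t count;
    };

    /* Push the buffered packets to the NIC in one burst; a single doorbell
     * write covers everything queued so far. Unsent packets are dropped
     * (freed) so mbufs are not leaked. */
    static void
    txq_flush(uint16_t port_id, uint16_t qid, struct txq_buf *b)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, qid, b->pkts, b->count);

        while (sent < b->count) {
            rte_pktmbuf_free(b->pkts[sent++]);
        }
        b->count = 0;
    }

    /* Queue one packet; burst only once the threshold is reached. The PMD
     * thread would additionally call txq_flush() each polling iteration or
     * on a timer so packets do not linger at low traffic rates. */
    static void
    txq_enqueue(uint16_t port_id, uint16_t qid, struct txq_buf *b,
                struct rte_mbuf *pkt)
    {
        b->pkts[b->count++] = pkt;
        if (b->count >= TXQ_BURST_THRESHOLD) {
            txq_flush(port_id, qid, b);
        }
    }
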
    >
    >    This patch series combines the earlier 2 patches posted below.
    >      DPDK patch: https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
    >      vHost User patch: https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/332271.html
    >
    >    Performance Numbers with intermediate queue:
    >
    >                          DPDK ports
    >                         ===========
    >
    >      Throughput (pps) for the P2P scenario, with two 82599ES 10G ports and
    >      64-byte packets:
    >
    >      Number of
    >      flows       MASTER     With patch
    >      ======    =========    ==========
    >        10       10727283      13393844
    >        32        7042253      11228799
    >        50        7515491       9607791
    >       100        5838699       9430730
    >       500        5285066       7845807
    >      1000        5226477       7135601
    >
    >       Latency test
    >
    >       MASTER
    >       =======
    >       Pkt size  min(ns)  avg(ns)  max(ns)
    >        512      4,631      5,022    309,914
    >       1024      5,545      5,749    104,294
    >       1280      5,978      6,159     45,306
    >       1518      6,419      6,774    946,850
    >
    >       PATCH
    >       =====
    >       Pkt size  min(ns)  avg(ns)  max(ns)
    >        512      4,711      5,064    182,477
    >       1024      5,601      5,888    701,654
    >       1280      6,018      6,491    533,037
    >       1518      6,467      6,734    312,471
    >
    >                   vHost User ports
    >                  ==================
    >
    >      Throughput (pps) for the PV scenario, with 64-byte packets:
    >
    >       Number of
    >       flows       MASTER    With patch
    >      ========   =========   ==========
    >        10        5945899      7833914
    >        32        3872211      6530133
    >        50        3283713      6618711
    >       100        3132540      5857226
    >       500        2964499      5273006
    >      1000        2931952      5178038
    >
    >      Latency test.
    >
    >      MASTER
    >      =======
    >      Pkt size  min(ns)  avg(ns)  max(ns)
    >       512      10,011   12,100   281,915
    >      1024       7,870    9,313   193,116
    >      1280       7,862    9,036   194,439
    >      1518       8,215    9,417   204,782
    >
    >      PATCH
    >      =======
    >      Pkt size  min(ns)  avg(ns)  max(ns)
    >       512      10,492   13,655   281,538
    >      1024       8,407    9,784   205,095
    >      1280       8,399    9,750   194,888
    >      1518       8,367    9,722   196,973
    >
    >    Performance numbers reported by Eelco Chaudron <eechaudro at redhat.com> at:
    >      https://mail.openvswitch.org/pipermail/ovs-dev/2017-June/333949.html
    >      https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/332271.html
    >      https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
    >
    >    -----------------------------------
    >    v2->v3
    >      * Modified dpdk_do_tx_copy to buffer the packets.
    >      * Moved "Intermediate queue support" patch towards the end of series.
    >      * Flush the queues of non-pmd thread in dpif_netdev_run().
    >      * Removed the invalid comment for netdev_dpdk_vhost_txq_flush().
    >      * Updated vhost_txq_flush() to do extra validation, increment the drop
    >        count, and free the memory.
    >      * Updated vring_state_changed() with the flush function and appropriate
    >        comments.
    >      * Did sanity testing:
    >        - P2P, PVP (testpmd, Linux forwarding)
    >        - Inter-VM (VM2VM)
    >        - PVP MQ test (with queues enabled/disabled)
    >        - Tested flush logic on non-PMD ports by pinging internal ports.
    >    v1->v2
    >      * Add Acked-by tag from Eelco Chaudron.
    >      * Rebased on master due to HW offload changes.
    >      * Introduced a union for packet count and buffers and changed the
    >        variable names appropriately.
    >      * No functional changes.
    >
    >    Bhanuprakash Bodireddy (6):
    >      netdev: Add netdev_txq_flush function.
    >      netdev-dpdk: Add netdev_dpdk_txq_flush function.
    >      netdev-dpdk: Add netdev_dpdk_vhost_txq_flush function.
    >      netdev-dpdk: Add intermediate queue support.
    >      netdev-dpdk: Enable intermediate queue for vHost User port.
    >      dpif-netdev: Flush the packets in intermediate queue.
    >
    >     lib/dpif-netdev.c     |  51 ++++++++++-
    >     lib/netdev-bsd.c      |   1 +
    >     lib/netdev-dpdk.c     | 228 +++++++++++++++++++++++++++++++++++++++++++-------
    >     lib/netdev-dummy.c    |   1 +
    >     lib/netdev-linux.c    |   1 +
    >     lib/netdev-provider.h |   8 ++
    >     lib/netdev-vport.c    |   2 +-
    >     lib/netdev.c          |   9 ++
    >     lib/netdev.h          |   1 +
    >     9 files changed, 272 insertions(+), 30 deletions(-)
    >
    >    --
    >    2.4.11
    >

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
