Hi Darrell,

>
>    Under low rate traffic conditions, there can be two issues.
>      (1) Packets can potentially get stuck in the intermediate queue.
>      (2) Packet latency can increase significantly due to buffering
>           in the intermediate queue.
>
>    This commit handles issue (1) by flushing the tx port queues from
>    the PMD processing loop. It also addresses issue (2) by flushing
>    the tx queues after every rxq port processing. This reduces latency
>    without impacting the forwarding throughput.
>
>       MASTER
>      --------
>       Pkt size  min(ns)   avg(ns)   max(ns)
>        512      4,631      5,022    309,914
>       1024      5,545      5,749    104,294
>       1280      5,978      6,159     45,306
>       1518      6,419      6,774    946,850
>
>      MASTER + COMMIT
>      -----------------
>       Pkt size  min(ns)   avg(ns)   max(ns)
>        512      4,711      5,064    182,477
>       1024      5,601      5,888    701,654
>       1280      6,018      6,491    533,037
>       1518      6,467      6,734    312,471
>
>    PMDs can be torn down and spawned at runtime, so the rxq and txq
>    mapping of the PMD threads can change. In a few cases packets can get
>    stuck in the queue due to reconfiguration, and this commit helps flush
>    those queues.
>
>    Suggested-by: Eelco Chaudron <echau...@redhat.com>
>    Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
>    Signed-off-by: Bhanuprakash Bodireddy <bhanuprakash.bodire...@intel.com>
>    Signed-off-by: Antonio Fischetti <antonio.fische...@intel.com>
>    Co-authored-by: Antonio Fischetti <antonio.fische...@intel.com>
>    Signed-off-by: Markus Magnusson <markus.magnus...@ericsson.com>
>    Co-authored-by: Markus Magnusson <markus.magnus...@ericsson.com>
>    Acked-by: Eelco Chaudron <echau...@redhat.com>
>    ---
>     lib/dpif-netdev.c | 7 +++++++
>     1 file changed, 7 insertions(+)
>
>    diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>    index 7e1f5bc..f03bd3e 100644
>    --- a/lib/dpif-netdev.c
>    +++ b/lib/dpif-netdev.c
>    @@ -3603,6 +3603,8 @@ dpif_netdev_run(struct dpif *dpif)
>                     for (i = 0; i < port->n_rxq; i++) {
>                         dp_netdev_process_rxq_port(non_pmd, port->rxqs[i].rx,
>                                                    port->port_no);
>    +
>    +                    dp_netdev_flush_txq_ports(non_pmd);
>
>Is this a temporary change? It seems counter to the objective.
>Shouldn't it be latency based, as discussed on another thread a couple of
>months ago, and configurable by port type and port?

This is a temporary change, made to verify that the latency stays well within
limits. With this change, the performance improvement is *only* observed when
the rx batch size is significant (unlikely in real use cases).
The incremental patch series (on top of this) should address that by buffering
packets in the intermediate queue across multiple rx batches; a sketch of the
intended flush policy follows below. Latency configs would also be introduced,
as you mentioned above, so users can tune to their requirements.
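
To make the intent concrete, here is a minimal sketch of such a flush policy.
It is illustrative only: the struct, field names, and thresholds below are
assumptions of this sketch, not code from the OVS tree.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-PMD flush state; these fields do not exist in the
 * current tree and are named here only for illustration. */
struct txq_flush_state {
    int64_t last_flush_us;    /* Time of the last intermediate-queue flush. */
    int64_t max_latency_us;   /* Operator-configured latency budget. */
    uint32_t buffered_pkts;   /* Packets held in the intermediate queues. */
};

/* Flush either when enough packets have accumulated to amortize the
 * burst-tx cost, or when the oldest buffered packet would otherwise
 * exceed the latency budget. */
static bool
txq_flush_due(const struct txq_flush_state *st, int64_t now_us,
              uint32_t burst_threshold)
{
    if (st->buffered_pkts >= burst_threshold) {
        return true;
    }
    return st->buffered_pkts > 0
           && now_us - st->last_flush_us >= st->max_latency_us;
}

With a check like this in the PMD loop, low-rate traffic gets flushed by the
deadline while high-rate traffic still benefits from larger tx bursts.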

This needs significant testing, as we have to strike a fine balance between
throughput and latency; that work shall be done as part of the next series.
A config knob along the lines sketched below could expose the trade-off to
operators.
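
For illustration, such a knob could carry a latency budget in microseconds,
with 0 keeping today's flush-every-batch behavior. The key name
"tx-flush-interval" and the parsing helper below are assumptions of this
sketch, not part of the posted patch; in the real tree the value would be
read from other_config in dpif_netdev_set_config() via smap_get_int().

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse a latency budget (microseconds) from a config string; a value
 * of 0 keeps the current flush-every-batch behavior.  The config key
 * that would carry this string is hypothetical. */
static long
parse_flush_interval_us(const char *value, long def_us)
{
    if (!value || !*value) {
        return def_us;              /* Knob unset: keep the default. */
    }
    char *end;
    long us = strtol(value, &end, 10);
    if (*end != '\0' || us < 0 || us > INT_MAX) {
        return def_us;              /* Reject junk and negative values. */
    }
    return us;
}

int
main(void)
{
    /* E.g. other_config:tx-flush-interval=50 would arrive as "50". */
    printf("%ld\n", parse_flush_interval_us("50", 0));   /* -> 50 */
    printf("%ld\n", parse_flush_interval_us("bad", 0));  /* -> 0  */
    return 0;
}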

- Bhanuprakash.

>
>                     }
>                 }
>             }
>    @@ -3760,6 +3762,8 @@ reload:
>             for (i = 0; i < poll_cnt; i++) {
>                 dp_netdev_process_rxq_port(pmd, poll_list[i].rx,
>                                            poll_list[i].port_no);
>    +
>    +            dp_netdev_flush_txq_ports(pmd);
>             }
>
>Same comment as above.
>
>             if (lc++ > 1024) {
>    @@ -3780,6 +3784,9 @@ reload:
>             }
>         }
>
>    +    /* Flush the queues as part of reconfiguration logic. */
>    +    dp_netdev_flush_txq_ports(pmd);
>    +
>         poll_cnt = pmd_load_queues_and_ports(pmd, &poll_list);
>         exiting = latch_is_set(&pmd->exit_latch);
>         /* Signal here to make sure the pmd finishes
>    --
>    2.4.11
>
>    _______________________________________________
>    dev mailing list
>    d...@openvswitch.org
>    https://mail.openvswitch.org/mailman/listinfo/ovs-dev

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
