Checking the reload flag only once every 1024 loops can mean a long delay
under load, since we might handle up to 32 packets per rxq per iteration,
i.e. up to poll_cnt * 32 * 1024 packets before the flag is noticed.
Check the flag on every loop instead; no major performance impact was seen.
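
Below is a minimal standalone sketch (not OVS code; process_rxqs and
worker_loop are hypothetical names) of the pattern this patch applies:
the busy worker loop checks an atomic reload flag at the end of every
iteration rather than only once every 1024 iterations.

/*
 * Standalone illustration (assumed names, not dpif-netdev.c itself):
 * the loop now observes a reload request after at most one iteration's
 * worth of packet processing.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool reload;      /* Set by another thread to request reload. */

static int
process_rxqs(void)
{
    /* Stand-in for the per-iteration packet work (up to 32 per rxq). */
    return 0;
}

static void
worker_loop(void)
{
    for (;;) {
        int rx_packets = process_rxqs();
        (void) rx_packets;

        /* Check the flag every loop instead of every 1024 loops. */
        if (atomic_load_explicit(&reload, memory_order_acquire)) {
            break;
        }
    }
}

int
main(void)
{
    atomic_store_explicit(&reload, true, memory_order_release);
    worker_loop();              /* Returns as soon as reload is observed. */
    printf("worker saw reload\n");
    return 0;
}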

Signed-off-by: David Marchand <david.march...@redhat.com>
Acked-by: Eelco Chaudron <echau...@redhat.com>
Acked-by: Ian Stokes <ian.sto...@intel.com>
---
Changelog since v2:
- rebased on master

Changelog since v1:
- added acks, no change

Changelog since RFC v2:
- fixed commit log wording on the number of packets

---
 lib/dpif-netdev.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index b4384b4..647a8ee 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5480,7 +5480,6 @@ reload:
                 poll_block();
             }
         }
-        lc = UINT_MAX;
     }
 
     pmd->intrvl_tsc_prev = 0;
@@ -5529,11 +5528,6 @@ reload:
                 emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
             }
 
-            atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
-            if (reload) {
-                break;
-            }
-
             for (i = 0; i < poll_cnt; i++) {
                 uint64_t current_seq =
                          netdev_get_change_seq(poll_list[i].rxq->port->netdev);
@@ -5544,6 +5538,12 @@ reload:
                 }
             }
         }
+
+        atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
+        if (OVS_UNLIKELY(reload)) {
+            break;
+        }
+
         pmd_perf_end_iteration(s, rx_packets, tx_packets,
                                pmd_perf_metrics_enabled(pmd));
     }
-- 
1.8.3.1
