Why is this kernel 5.15 based?

On 2024-09-11 06:18, webmas...@jbsky.fr wrote:
From: Julien Blais <webmas...@jbsky.fr>

This patch has been submitted to the upstream kernel.

For your information, this fixes the problem identified during an iperf
test: packet transmission stalls and, above all, one CPU pegged at 100%
by softIRQ processing.
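
For readers less familiar with the mechanism, below is a minimal sketch of
the standard NAPI poll contract that the fix relies on. It is an
illustration only, not mvneta code: struct my_dev and the process_rx() /
my_enable_irq() helpers are hypothetical placeholders.

#include <linux/netdevice.h>

/* Hypothetical driver-private data; only the embedded napi matters here. */
struct my_dev {
        struct napi_struct napi;
        /* ... rings, registers, locks ... */
};

/* Hypothetical helpers standing in for the driver's RX processing and
 * interrupt unmasking. */
static int process_rx(struct my_dev *priv, int budget);
static void my_enable_irq(struct my_dev *priv);

/* Minimal sketch of a NAPI poll callback (not mvneta code). */
static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_dev *priv = container_of(napi, struct my_dev, napi);
        int work_done = process_rx(priv, budget);  /* 0..budget packets */

        if (work_done < budget) {
                /* All pending work handled: report how much was done and,
                 * if the NAPI instance really completed, re-enable the
                 * device interrupt so the next packet reschedules polling. */
                if (napi_complete_done(napi, work_done))
                        my_enable_irq(priv);
        }

        /* Returning 'budget' tells the core to keep polling; anything
         * less means this poll round is finished. */
        return work_done;
}

When that completion step is mishandled, the net_rx_action softirq can keep
re-polling the same instance, which is the 100% CPU symptom described above.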

Before:
root@wrt1900acv1:~# cat /proc/interrupts
            CPU0       CPU1
  24:   18652404   15171396      MPIC   5 Level     armada_370_xp_per_cpu_tick
  26:     292136         78      MPIC  31 Level     mv64xxx_i2c
  27:         26          0      MPIC  41 Level     ttyS0
  33:    1024678          0      MPIC  45 Level     ehci_hcd:usb1
  34:   23007175          0      MPIC   8 Level     eth0

After:
root@OpenWrt:/# cat /proc/interrupts
            CPU0       CPU1
  24:     194183     161947      MPIC   5 Level     armada_370_xp_per_cpu_tick
  25:          0          0      MPIC   3 Level     arm-pmu
  26:        246          0      MPIC  31 Level     mv64xxx_i2c
  27:       5488          0      MPIC  41 Level     ttyS0
  33:          0          0      MPIC  45 Level     ehci_hcd:usb1
  34:    2015130    6330694      MPIC   8 Level     eth0

Signed-off-by: Julien Blais <webmas...@jbsky.fr>
---
  .../700-mvneta-tx-queue-workaround.patch      | 38 -------------------
  ...1-mvneta-fix-napi-poll-infinite-loop.patch | 38 +++++++++++++++++++
  2 files changed, 38 insertions(+), 38 deletions(-)
  delete mode 100644 target/linux/mvebu/patches-5.15/700-mvneta-tx-queue-workaround.patch
  create mode 100644 target/linux/mvebu/patches-5.15/701-mvneta-fix-napi-poll-infinite-loop.patch

diff --git a/target/linux/mvebu/patches-5.15/700-mvneta-tx-queue-workaround.patch b/target/linux/mvebu/patches-5.15/700-mvneta-tx-queue-workaround.patch
deleted file mode 100644
index 32e8ef4b7d..0000000000
--- a/target/linux/mvebu/patches-5.15/700-mvneta-tx-queue-workaround.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-The hardware queue scheduling is apparently configured with fixed
-priorities, which creates a nasty fairness issue where traffic from one
-CPU can starve traffic from all other CPUs.
-
-Work around this issue by forcing all tx packets to go through one CPU,
-until this issue is fixed properly.
-
-Signed-off-by: Felix Fietkau <n...@nbd.name>
----
---- a/drivers/net/ethernet/marvell/mvneta.c
-+++ b/drivers/net/ethernet/marvell/mvneta.c
-@@ -5006,6 +5006,16 @@ static int mvneta_setup_tc(struct net_de
-       }
- }
-
-+#ifndef CONFIG_ARM64
-+static u16 mvneta_select_queue(struct net_device *dev, struct sk_buff *skb,
-+                             struct net_device *sb_dev)
-+{
-+      /* XXX: hardware queue scheduling is broken,
-+       * use only one queue until it is fixed */
-+      return 0;
-+}
-+#endif
-+
- static const struct net_device_ops mvneta_netdev_ops = {
-       .ndo_open            = mvneta_open,
-       .ndo_stop            = mvneta_stop,
-@@ -5016,6 +5026,9 @@ static const struct net_device_ops mvnet
-       .ndo_fix_features    = mvneta_fix_features,
-       .ndo_get_stats64     = mvneta_get_stats64,
-       .ndo_eth_ioctl        = mvneta_ioctl,
-+#ifndef CONFIG_ARM64
-+      .ndo_select_queue    = mvneta_select_queue,
-+#endif
-       .ndo_bpf             = mvneta_xdp,
-       .ndo_xdp_xmit        = mvneta_xdp_xmit,
-       .ndo_setup_tc        = mvneta_setup_tc,
diff --git a/target/linux/mvebu/patches-5.15/701-mvneta-fix-napi-poll-infinite-loop.patch b/target/linux/mvebu/patches-5.15/701-mvneta-fix-napi-poll-infinite-loop.patch
new file mode 100644
index 0000000000..0fa191ea14
--- /dev/null
+++ b/target/linux/mvebu/patches-5.15/701-mvneta-fix-napi-poll-infinite-loop.patch
@@ -0,0 +1,38 @@
+From bbeea07de50e925df3877f63a24aee1d35828a02 Mon Sep 17 00:00:00 2001
+From: Julien Blais <webmas...@jbsky.fr>
+Date: Wed, 11 Sep 2024 13:27:09 +0200
+Subject: [PATCH v2] mvneta: fix "napi poll" infinite loop
+
+In per-CPU mode, under network load, one of the CPUs can be woken up
+with nothing to process. If the NAPI poll callback then reports zero
+completed work, NAPI ignores the subsequent requests, causing an
+endless loop between the ISR and the softirq handler.
+
+Without this change, workaround patches keep pinning all traffic to
+queue 0 and the interrupt stays stuck on the first CPU, which makes
+the per-CPU configuration useless, so that workaround might as well go.
+
+Signed-off-by: Julien Blais <webmas...@jbsky.fr>
+---
+ drivers/net/ethernet/marvell/mvneta.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 3f124268b..b6e89b888 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3186,7 +3186,10 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
+
+       if (rx_done < budget) {
+               cause_rx_tx = 0;
+-              napi_complete_done(napi, rx_done);
++              if (rx_done)
++                      napi_complete_done(napi, rx_done);
++              else
++                      napi_complete(napi);
+
+               if (pp->neta_armada3700) {
+                       unsigned long flags;
+--
+2.39.2
+
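
To make the commit message above concrete, this is how the completion path
in mvneta_poll() reads once the hunk is applied; everything outside the
changed lines is trimmed and the comments are editorial, not part of the
patch:

        if (rx_done < budget) {
                cause_rx_tx = 0;
                /* Only report a work count when packets were actually
                 * processed; otherwise complete without one. */
                if (rx_done)
                        napi_complete_done(napi, rx_done);
                else
                        napi_complete(napi);
                /* ... interrupt re-enabling / Armada 3700 handling follows ... */
        }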


