On 29.08.2023 18:01, Konstantin Khorenko wrote:
Before this patch, venetdev's .get_stats() was not protected against
races: Documentation/networking/netdevices.txt states that
ndo_get_stats() is synchronized by the dev_base_lock rwlock, but stats
readers take it for reading, so several .get_stats() calls can run in
parallel.

At the same time, the algorithm gathering the per-cpu stats into a
single value used a per-device structure as its accumulator, so if
.get_stats() was called in parallel, the stats could get corrupted.

Example on a 2-cpu node:

        cpu A                                   cpu B
.get_stats()
memset(dev->...->stats, 0)
stats.rx_bytes += cpuA.rx_bytes
                                        .get_stats()
                                        memset(dev->...->stats, 0)
stats.rx_bytes += cpuB.rx_bytes
return stats
                                        stats.rx_bytes += cpuA.rx_bytes
                                        stats.rx_bytes += cpuB.rx_bytes
                                        return stats

=> the process running on cpu A will print rx_bytes taken from cpu B
only (so lower than expected), and the process running on cpu B will
print numbers that are too high because cpu B's values get added twice.
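
In code, the old racy pattern was roughly the following (a condensed
sketch of the removed get_stats(); see the diff below for the exact
code):

	static struct net_device_stats *get_stats(struct net_device *dev)
	{
		/* Shared per-device accumulator: every caller zeroes
		 * and refills the very same memory. */
		struct venet_stats *stats = (struct venet_stats *)dev->ml_priv;
		int i;

		memset(&stats->stats, 0, sizeof(struct net_device_stats));
		for_each_possible_cpu(i) {
			/* unsynchronized read-modify-write of shared memory */
			stats->stats.rx_bytes += venet_stats(dev, i)->rx_bytes;
			/* ...same for tx_bytes, rx/tx_packets, tx_dropped... */
		}
		return &stats->stats;
	}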

Let's make venetdev provide the ndo_get_stats64() callback instead: it
gets the storage to fill as an argument and thus does not suffer from
the race of filling a per-netdevice struct in parallel.
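
For reference, a minimal sketch of the calling convention we switch to
(signatures as in mainline include/linux/netdevice.h and
net/core/dev.c; exact details may differ between kernel branches):

	/* struct net_device_ops: the core hands the callback
	 * caller-owned storage instead of asking the driver for a
	 * pointer into shared per-device memory. */
	void (*ndo_get_stats64)(struct net_device *dev,
				struct rtnl_link_stats64 *storage);

	/* net/core/dev.c, dev_get_stats(): each reader brings its own
	 * buffer, which is zeroed and filled independently, so two
	 * concurrent readers can no longer corrupt each other. */
	memset(storage, 0, sizeof(*storage));
	ops->ndo_get_stats64(dev, storage);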

https://jira.vzint.dev/browse/PSBM-150027

Reviewed-by: Pavel Tikhomirov <ptikhomi...@virtuozzo.com>
Signed-off-by: Konstantin Khorenko <khore...@virtuozzo.com>
---
v2:
   * get rid of extra function venet_stats_one()
   * get rid of extra struct venet_device_stats and filling it

  drivers/net/venetdev.c | 20 ++++++++------------
  include/linux/venet.h  |  1 -
  2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/drivers/net/venetdev.c b/drivers/net/venetdev.c
index 3ca4f8a49a5c..9785b9ba7247 100644
--- a/drivers/net/venetdev.c
+++ b/drivers/net/venetdev.c
@@ -568,25 +568,21 @@ static int venet_xmit(struct sk_buff *skb, struct net_device *dev)
        return 0;
 }
 
-static struct net_device_stats *get_stats(struct net_device *dev)
+static void venet_get_stats64(struct net_device *dev,
+                             struct rtnl_link_stats64 *total)
 {
        int i;
-       struct venet_stats *stats;
 
-       stats = (struct venet_stats *)dev->ml_priv;
-       memset(&stats->stats, 0, sizeof(struct net_device_stats));
        for_each_possible_cpu(i) {
                struct net_device_stats *dev_stats;
 
                dev_stats = venet_stats(dev, i);
-               stats->stats.rx_bytes   += dev_stats->rx_bytes;
-               stats->stats.tx_bytes   += dev_stats->tx_bytes;
-               stats->stats.rx_packets += dev_stats->rx_packets;
-               stats->stats.tx_packets += dev_stats->tx_packets;
-               stats->stats.tx_dropped += dev_stats->tx_dropped;
+               total->rx_bytes += dev_stats->rx_bytes;
+               total->tx_bytes += dev_stats->tx_bytes;
+               total->rx_packets += dev_stats->rx_packets;
+               total->tx_packets += dev_stats->tx_packets;
+               total->tx_dropped += dev_stats->tx_dropped;
        }
-
-       return &stats->stats;
 }
 
 /* Initialize the rest of the LOOPBACK device. */
@@ -712,7 +708,7 @@ static const struct ethtool_ops venet_ethtool_ops = {
 
 static const struct net_device_ops venet_netdev_ops = {
        .ndo_start_xmit = venet_xmit,
-       .ndo_get_stats = get_stats,
+       .ndo_get_stats64 = venet_get_stats64,
        .ndo_open = venet_open,
        .ndo_stop = venet_close,
        .ndo_init = venet_init_dev,
diff --git a/include/linux/venet.h b/include/linux/venet.h
index 51e0abeb03d7..f0a00832f6de 100644
--- a/include/linux/venet.h
+++ b/include/linux/venet.h
@@ -21,7 +21,6 @@
 struct ve_struct;
 struct venet_stat;
 struct venet_stats {
-       struct net_device_stats stats;
        struct net_device_stats *real_stats;
 };

--
Best regards, Tikhomirov Pavel
Senior Software Developer, Virtuozzo.