> -----Original Message-----
> From: Roopa Prabhu [mailto:ro...@cumulusnetworks.com]
> Sent: Wednesday, August 10, 2016 9:09 AM
> To: Jiri Pirko <j...@resnulli.us>
> Cc: netdev@vger.kernel.org; da...@davemloft.net; Nogah Frankel
> <nog...@mellanox.com>; Ido Schimmel <ido...@mellanox.com>; Elad
> Raz <el...@mellanox.com>; Yotam Gigi <yot...@mellanox.com>; Or
> Gerlitz <ogerl...@mellanox.com>; niko...@cumulusnetworks.com;
> linvi...@tuxdriver.com; tg...@suug.ch; go...@cumulusnetworks.com;
> sfel...@gmail.com; s...@queasysnail.net; Eran Ben Elisha
> <era...@mellanox.com>; a...@plumgrid.com; eduma...@google.com;
> han...@stressinduktion.org; f.faine...@gmail.com;
> d...@cumulusnetworks.com
> Subject: Re: [patch net-next v6 2/3] net: core: add SW stats to if_stats_msg
> 
> On 8/9/16, 2:25 PM, Jiri Pirko wrote:
> > From: Nogah Frankel <nog...@mellanox.com>
> >
> > Add a nested attribute of SW stats to if_stats_msg
> > under IFLA_STATS_LINK_SW_64.
> >
> > Signed-off-by: Nogah Frankel <nog...@mellanox.com>
> > Reviewed-by: Ido Schimmel <ido...@mellanox.com>
> > Signed-off-by: Jiri Pirko <j...@mellanox.com>
> > ---
> >  include/uapi/linux/if_link.h |  1 +
> >  net/core/rtnetlink.c         | 21 +++++++++++++++++++++
> >  2 files changed, 22 insertions(+)
> >
> > diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
> > index a1b5202..1c9b808 100644
> > --- a/include/uapi/linux/if_link.h
> > +++ b/include/uapi/linux/if_link.h
> > @@ -825,6 +825,7 @@ enum {
> >     IFLA_STATS_LINK_64,
> >     IFLA_STATS_LINK_XSTATS,
> >     IFLA_STATS_LINK_XSTATS_SLAVE,
> > +   IFLA_STATS_LINK_SW_64,
> 
> sorry, don't mean to drag this discussion forever...but...
> 
> IIRC, on the switchdev call we had agreed to just put these kinds of breakdown
> stats in one more layer of nesting... which can be extended for everybody.
> 
> IFLA_STATS_LINK_DRIVER_XSTATS   (or some other name) [
>                 IFLA_STATS_LINK_SW_64  (struct rtnl_link_stats64)    <---- you get the well defined struct here as you wanted.
>                 IFLA_STATS_LINK_HW_ACL_DROPS  /* u64 */        <----- we can keep extending this for other stats people want.
> 
>                 /* just keeps it extensible */
> ]
> 
> because, in addition to all the other reasons I have mentioned before,
> technically IFLA_STATS_LINK_64 (the top-level aggregate stats)
> are also software-only stats for, say, most logical devices, etc.
> 
> 
> and instead of adding one ndo for your software-only stats now and then
> another one later for other ethtool-like hw stats from all drivers, it
> could also be like the below:
> 
> new ndo_get_hw_stats
> 
> IFLA_STATS_LINK_HW_XSTATS   (or some other name) [
>                 IFLA_STATS_LINK_CPU_64  (struct rtnl_link_stats64)    <---- you get the well defined struct here as you wanted.
>                 IFLA_STATS_LINK_HW_ACL_DROPS  /* u64 */              <----- we can keep extending this for other stats people want.
>                 /* just keeps it extensible */
> ]
> 
> It gets you what you want and keeps the API extensible, IMO. Your call.
> 
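
For concreteness, a rough sketch of what filling that proposed nested layout
could look like on the kernel side. This is only an illustration: the attribute
names are the ones from the proposal above, and dev_get_sw_stats() /
dev_get_hw_acl_drops() are placeholders for whatever would actually provide the
numbers, not existing kernel API.

/* Sketch only -- not part of this patch. Fills the proposed nest with the
 * standard netlink helpers from <net/netlink.h>.
 */
static int fill_driver_xstats(struct sk_buff *skb, struct net_device *dev)
{
	struct rtnl_link_stats64 sw_stats;
	struct nlattr *nest;
	u64 acl_drops;

	nest = nla_nest_start(skb, IFLA_STATS_LINK_DRIVER_XSTATS);
	if (!nest)
		return -EMSGSIZE;

	/* The well-defined struct for the SW counters. */
	dev_get_sw_stats(dev, &sw_stats);		/* hypothetical helper */
	if (nla_put_64bit(skb, IFLA_STATS_LINK_SW_64, sizeof(sw_stats),
			  &sw_stats, IFLA_STATS_UNSPEC))
		goto nla_put_failure;

	/* A further driver-specific counter under the same nest. */
	acl_drops = dev_get_hw_acl_drops(dev);		/* hypothetical helper */
	if (nla_put_u64_64bit(skb, IFLA_STATS_LINK_HW_ACL_DROPS, acl_drops,
			      IFLA_STATS_UNSPEC))
		goto nla_put_failure;

	nla_nest_end(skb, nest);
	return 0;

nla_put_failure:
	nla_nest_cancel(skb, nest);
	return -EMSGSIZE;
}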

I don't think the extra nesting is a good idea, because when we are asked for
IFLA_STATS_LINK_DRIVER_XSTATS we can't distinguish between its subtypes.
So we are going to need a different XSTATS type for each driver stat anyway.

We shouldn't return all the stats a driver can provide when we are asked for
only one of them. HW_ACL_DROPS shouldn't be queried when the user wants SW
stats, nor the other way around.
(And I think that the stats exposed here should be well defined, so they can
be exposed to ifstat. The place for not-well-defined stats is ethtool.)
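
To make that second point concrete: with a flat attribute, the existing
filter_mask already lets the user ask for exactly the stats they want, so a
query for the SW counters pulls in nothing else. A rough userspace sketch
(socket setup, send/recv and error handling omitted):

/* Sketch only: build an RTM_GETSTATS request that asks for just the SW
 * counters of one device, using the filter bit for the new attribute.
 */
#include <string.h>
#include <sys/socket.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

struct stats_req {
	struct nlmsghdr nlh;
	struct if_stats_msg ifsm;
};

static void build_sw_stats_req(struct stats_req *req, int ifindex)
{
	memset(req, 0, sizeof(*req));
	req->nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req->ifsm));
	req->nlh.nlmsg_type = RTM_GETSTATS;
	req->nlh.nlmsg_flags = NLM_F_REQUEST;
	req->ifsm.family = AF_UNSPEC;
	req->ifsm.ifindex = ifindex;
	/* Only this bit is set, so only IFLA_STATS_LINK_SW_64 is filled in
	 * the reply; attributes like a HW_ACL_DROPS counter would have their
	 * own bits and are never computed for this request.
	 */
	req->ifsm.filter_mask = IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_SW_64);
}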

> >     __IFLA_STATS_MAX,
> >  };
> >
> > diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
> > index 189cc78..910f802 100644
> > --- a/net/core/rtnetlink.c
> > +++ b/net/core/rtnetlink.c
> > @@ -3583,6 +3583,21 @@ static int rtnl_fill_statsinfo(struct sk_buff *skb, struct net_device *dev,
> >             dev_get_stats(dev, sp);
> >     }
> >
> > +   if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_SW_64, *idxattr)) {
> > +           if (dev_have_sw_stats(dev)) {
> > +                   struct rtnl_link_stats64 *sp;
> > +
> > +                   attr = nla_reserve_64bit(skb, IFLA_STATS_LINK_SW_64,
> > +                                            sizeof(struct rtnl_link_stats64),
> > +                                            IFLA_STATS_UNSPEC);
> > +                   if (!attr)
> > +                           goto nla_put_failure;
> > +
> > +                   sp = nla_data(attr);
> > +                   dev_get_sw_stats(dev, sp);
> > +           }
> > +   }
> > +
> >     if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_XSTATS, *idxattr)) {
> >             const struct rtnl_link_ops *ops = dev->rtnl_link_ops;
> >
> > @@ -3644,6 +3659,7 @@ nla_put_failure:
> >
> >  static const struct nla_policy ifla_stats_policy[IFLA_STATS_MAX + 1] = {
> >     [IFLA_STATS_LINK_64]    = { .len = sizeof(struct rtnl_link_stats64) },
> > +   [IFLA_STATS_LINK_SW_64] = { .len = sizeof(struct rtnl_link_stats64) },
> >  };
> >
> >  static size_t if_nlmsg_stats_size(const struct net_device *dev,
> > @@ -3685,6 +3701,11 @@ static size_t if_nlmsg_stats_size(const struct net_device *dev,
> >             }
> >     }
> >
> > +   if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_SW_64, 0)) {
> > +           if (dev_have_sw_stats(dev))
> > +                   size += nla_total_size_64bit(sizeof(struct rtnl_link_stats64));
> > +   }
> > +
> >     return size;
> >  }
> >
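
For completeness: dev_have_sw_stats() and dev_get_sw_stats() used in the hunks
above are added by an earlier patch in this series and are not part of this
diff. The sketch below only guesses at the kind of shape they take; the
ndo_get_sw_stats callback name is an assumption, not the actual code.

/* Sketch only -- not taken from the series. Assumed addition to
 * struct net_device_ops:
 *
 *	void (*ndo_get_sw_stats)(struct net_device *dev,
 *				 struct rtnl_link_stats64 *storage);
 */
static inline bool dev_have_sw_stats(const struct net_device *dev)
{
	return dev->netdev_ops && dev->netdev_ops->ndo_get_sw_stats;
}

static inline void dev_get_sw_stats(struct net_device *dev,
				    struct rtnl_link_stats64 *storage)
{
	/* Callers are expected to check dev_have_sw_stats() first,
	 * as rtnl_fill_statsinfo() does in the hunk above.
	 */
	dev->netdev_ops->ndo_get_sw_stats(dev, storage);
}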
