We are happy to use netlink if it makes sense. What you are saying is: netlink is used all over the kernel. Because it is used all over the kernel, it must be the right answer for all userspace/kernel interactions. Therefore, if something doesn't use netlink, it must be wrong (e.g. procfs, sysfs, debugfs).
We want to put the rx byte/packet counts in debugfs because they are intended for developers. We do not want to put them in a common infrastructure because, as we have found internally, the lack of tx statistics confuses users. We are being asked to write an ib core statistics infrastructure that, if it were available today, we would not use. So unless someone else jumps in here, we are going to keep these in debugfs.

Upinder

On Jan 12, 2014, at 1:32 AM, Or Gerlitz <[email protected]> wrote:

> On 08/01/2014 20:57, Upinder Malhi (umalhi) wrote:
>> Or,
>> Yeah, I did think about extending the existing infrastructure to export
>> HW specific stats and exposing some stats via standard infrastructure.
>> Besides the below, there are a few other drawbacks to exposing statistics
>> via netlink:
>> 1) Having a two part solution, user space and kernel, will make changing
>> the code more difficult. Anytime another attribute is exposed, code in the
>> kernel needs to be added to handle backwards compatibility with userspace
>> (as I said, we are going to add more stuff incrementally).
>
> There are thousands if not millions of LOCs over the kernel and user space
> tools which use netlink. Indeed, when it takes two to tango you sometimes
> change one side, sometimes the other, and sometimes both. A claim that "it's
> easier to maintain things when all the code resides in the kernel" can't
> really be taken seriously. Netlink is used all over the place, so
> everyone's wrong?
>
>> 2) The Cisco VIC series cards, that is our NIC, cannot do flow stats well.
>> Specifically, they only report the Rx byte count for a flow and don't
>> report any statistics on the Tx side. Hence, exposing these via a standard
>> interface to a user is going to be confusing and misleading.
>
> First use the standard/existing interface to report the open sessions, and
> later we'll take it from there re the byte counts.
>> Hence, at least for Cisco VIC, we want to keep these flow stats in debugfs,
>> where they can be easily extended and extra effort is required to get to
>> them.
>>
>> Upinder
>>
>> On Jan 8, 2014, at 1:13 AM, Or Gerlitz <[email protected]> wrote:
>>
>>> On 08/01/2014 00:29, Upinder Malhi (umalhi) wrote:
>>>> Or,
>>>> The flows contain Cisco VIC specific stuff -- e.g. the hardware flow id;
>>>> and they will contain more Cisco specific things. Hence, they are
>>>> exported via debugfs.
>>>
>>> You should be able to enhance the rdma netlink infrastructure to allow for
>>> exporting HW dependent attributes to user space -- did you look into that?
>>>
>>> Also, you should make sure to expose the non HW specific attributes of the
>>> sessions through the standard infrastructure.
>>>
>>> Or.
>>>
>>>> Upinder
>>>>
>>>> On Dec 22, 2013, at 2:23 AM, Or Gerlitz <[email protected]> wrote:
>>>>
>>>>> On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
>>>>>> This patch depends
>>>>>> on http://www.spinics.net/lists/linux-rdma/msg18193.html
>>>>>
>>>>> Why use proprietary debugfs code to display flows? You can (and should)
>>>>> use the rdma subsystem netlink infrastructure for that; see these
>>>>> two commits:
>>>>>
>>>>> 753f618 RDMA/cma: Add support for netlink statistics export
>>>>> b2cbae2 RDMA: Add netlink infrastructure

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
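For context, the debugfs approach Upinder describes amounts to something like the following minimal sketch. The `debugfs_create_dir`/`debugfs_create_u64` calls are the standard kernel debugfs API, but the directory name, struct, and field names here are illustrative assumptions, not taken from the actual usnic driver:

```c
#include <linux/debugfs.h>
#include <linux/module.h>

/* Hypothetical per-flow counters; names are illustrative only. */
struct flow_stats {
	u64 rx_bytes;
	u64 rx_packets;
};

static struct flow_stats stats;
static struct dentry *flow_dir;

static int __init flow_stats_init(void)
{
	/* Everything lives under /sys/kernel/debug/usnic_flow/ */
	flow_dir = debugfs_create_dir("usnic_flow", NULL);

	/* Rx-only counters, per the discussion above: the VIC hardware
	 * reports no Tx statistics, so none are exported. */
	debugfs_create_u64("rx_bytes", 0444, flow_dir, &stats.rx_bytes);
	debugfs_create_u64("rx_packets", 0444, flow_dir, &stats.rx_packets);
	return 0;
}

static void __exit flow_stats_exit(void)
{
	debugfs_remove_recursive(flow_dir);
}

module_init(flow_stats_init);
module_exit(flow_stats_exit);
MODULE_LICENSE("GPL");
```

A developer then reads the counters with a plain `cat /sys/kernel/debug/usnic_flow/rx_bytes`, and adding another attribute is a one-line change with no userspace counterpart to keep in sync, which is the maintainability trade-off being argued in this thread.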
