> On 9 Dec 2016, at 11:48, Robert Wilton <rwil...@cisco.com> wrote:
> 
> Hi Lada,
> 
> 
> On 09/12/2016 10:33, Ladislav Lhotka wrote:
>> Hi Rob,
>> 
>> I didn't follow the previous discussion closely but a natural solution
>> seems to be to define a leaf like "debug" - it could be just boolean or
>> an enumeration of debug levels. And then add the diagnostic
>> data as an augment that conditionally depends on the value of the
>> "debug" leaf.
>> 
>> Would this work?
> I'm not sure.
> 
> Historically debug has been separate from configuration.  I.e. you wouldn't 
> normally expect debug settings to persist after rebooting the device.

Yes, good point. What you can do then is define a "debug" RPC, expose the 
"debug" value in state data, and still make the diagnostics augment conditionally 
dependent on the debug leaf.
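A minimal YANG sketch of this idea (the module name, namespace, and node names here are hypothetical, just to illustrate the shape):

```yang
module example-debug {
  namespace "urn:example:debug";
  prefix exd;

  import ietf-interfaces { prefix if; }

  // Debug is toggled via an RPC rather than configuration,
  // so the setting need not persist across reboots.
  rpc set-debug {
    input {
      leaf enabled {
        type boolean;
        default false;
      }
    }
  }

  // The current debug setting is exposed as state data.
  augment "/if:interfaces-state/if:interface" {
    leaf debug {
      type boolean;
      config false;
    }
  }

  // Diagnostic data appears only while debug is enabled.
  augment "/if:interfaces-state/if:interface" {
    when "exd:debug = 'true'";
    container diagnostics {
      config false;
      leaf internal-drops { type uint64; }
    }
  }
}
```

With this structure, a client that never turns debug on simply never sees the diagnostics subtree in replies.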

Lada

> Possibly debug could be modeled using I2RS, but that would be tying its fate 
> to I2RS, and I'm not sure whether I2RS will gain widespread adoption.
> 
> Although my concern was raised in relation to the specification of Ethernet 
> clause 45 registers, I think that my question is really more general: should 
> debug/diagnostics be modeled in YANG at all, and if so, how should that be 
> done?   Perhaps we should just concentrate on getting the basic config and 
> operational state models sorted out initially and then figure out how 
> diagnostics should be modeled at a later date?
> 
> Rob
> 
>> Lada
>> 
>> Robert Wilton <rwil...@cisco.com> writes:
>> 
>>> On 07/12/2016 16:13, Juergen Schoenwaelder wrote:
>>>> On Wed, Dec 07, 2016 at 02:39:00PM +0000, Robert Wilton wrote:
>>>>> Alas, XPath filtering is optional, and I'm not sure how many devices have
>>>>> implemented support for it.  Further, this still requires every client to 
>>>>> be
>>>>> coded to avoid receiving the information that they are very unlikely to be
>>>>> interested in.
>>>>> 
>>>>> I would much prefer a solution where the clients don't get this (mostly
>>>>> noise) data unless they explicitly ask for it.  Otherwise for an Ethernet
>>>>> interface you might return 10 leaves of potentially useful opstate
>>>>> information, along with 50+ leaves of quite low layer diagnostics
>>>>> information that is likely to be of very little use except to the 
>>>>> select
>>>>> few people who are actively involved in trying to diagnose hardware 
>>>>> faults.
>>>>> 
>>>> I expect that there will be many augmentations to, let's say, /interfaces
>>>> (or /interfaces-state). How does a data model writer determine which
>>>> ones are 'noise' and which ones are not? How does a data model
>>>> writer determine which ones are costly to implement and which ones are
>>>> not (since this may vary widely between systems)?
>>> It isn't so much that it is noise. I see two quite different
>>> sets of config false nodes:
>>> 
>>> (1) The first set of nodes are those that are useful to a client to
>>> manage a device and to determine whether the device's actual behaviour
>>> matches the expected behaviour based on the configuration.  This is the
>>> same set of data that has traditionally been modeled via SNMP, and I
>>> think of as being the operational state of the device.
>>> 
>>> (2) If it is determined from the nodes above that a device is behaving
>>> in an abnormal way, then the second set of nodes are aimed at device
>>> developers/engineers to ascertain why the device is not behaving as
>>> specified.  A lot of this internal diagnostics information may only be
>>> of use to a developer who is familiar with the code (or intricacies of
>>> the hardware), and/or has a deep level of understanding of how a
>>> particular feature works.  I would see that most of this information is
>>> very likely to be device specific, and presented in a device specific
>>> way, and may include internal debug and dumps of internal data-structures.
>>> 
>>> I believe that most of the time, operators are only interested in
>>> interacting with that first set of data because that is all that is
>>> useful and required to manage their devices, and hence that is what I
>>> think should be modeled in the operational state datastore. However, in
>>> many cases, having a standard automated way of fetching that second set
>>> of data is still useful, because it facilitates the creation of more
>>> efficient diagnostic tools in the future.
>>> 
>>> Specifically for Ethernet, 802.3 specifies a clear separation between
>>> what they expect to be made available via a management protocol vs what
>>> is regarded as being an internal API, specifically:
>>>   - 802.3 Clause 30 specifies the main management objects, which I would
>>> expect to be broadly accessible to management clients, and what we are
>>> planning on basing the Ethernet YANG on.  (This is consistent with what
>>> is available in the Ethernet related MIBs and the OpenConfig Ethernet
>>> model).
>>>   - The internal interface is defined as the MDIO registers in 802.3
>>> Clause 45.  Looking at these register definitions, although the
>>> registers themselves can be given meaningful names, the values
>>> themselves would probably just be returned as opaque 16 bit register
>>> values, since the YANG "bits" type isn't flexible enough to represent
>>> the packed values they sometimes contain (specifically where they are
>>> using a set of bits to represent an enumerated value).
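As a rough sketch of what that might look like, the clause 45 registers could be exposed as an opaque list of 16-bit values (all names here are illustrative, not a proposed model):

```yang
// Hypothetical sketch: exposing clause 45 MDIO registers as raw
// 16-bit values, since the YANG "bits" type cannot express an
// enumerated value packed into a subset of a register's bits.
list mdio-register {
  key "device address";
  config false;
  leaf device  { type uint8;  }   // MMD device address
  leaf address { type uint16; }   // register address within the MMD
  leaf name    { type string; }   // human-readable register name
  leaf value   { type uint16; }   // raw register contents
}
```

Decoding the packed fields would then be left to the diagnostic tooling rather than the data model.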
>>> 
>>> 
>>>> Perhaps it is useful to tell client writers that well behaving clients
>>>> should ask for what they need (and understand) and that they should
>>>> avoid asking generic 'give me everything you have' questions.
>>> I really think that there are two separate classes of data here, and it
>>> makes more sense to treat them as such.
>>> 
>>> Defining RPCs to fetch the internal diagnostics data on demand seems OK
>>> to me; alternatively, marking the nodes as diagnostics related and
>>> putting them in a separate datastore would also seem reasonable.
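The RPC variant might be sketched along these lines (the RPC and node names are hypothetical; "anydata" requires YANG 1.1):

```yang
// Hypothetical on-demand diagnostics RPC, so clients only
// receive this data when they explicitly ask for it.
rpc get-diagnostics {
  input {
    leaf interface {
      type if:interface-ref;
    }
  }
  output {
    anydata diagnostics;  // device-specific debug/dump data
  }
}
```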
>>> 
>>>> If you have to deal with lazy clients that like to grab everything,
>>>> you can control them via NACM. Simply exclude access to some branches.
>>> This would likely just mean that every device has to support NACM to
>>> handle the mainline case.  That sounds like a lot of extra
>>> unnecessary work for everyone.
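For reference, such an NACM exclusion (RFC 6536) could look roughly like the following, assuming a hypothetical diagnostics subtree and an "operators" group:

```xml
<!-- Sketch of an NACM rule denying read access to a
     hypothetical diagnostics subtree for one group. -->
<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <rule-list>
    <name>hide-diagnostics</name>
    <group>operators</group>
    <rule>
      <name>deny-diag</name>
      <path xmlns:exd="urn:example:debug">/exd:diagnostics</path>
      <access-operations>read</access-operations>
      <action>deny</action>
    </rule>
  </rule-list>
</nacm>
```

As noted above, though, relying on per-deployment access rules to hide diagnostics pushes the work onto every operator rather than solving it in the model.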
>>> 
>>> Thanks,
>>> Rob
>>> 
>>>> /js
>>>> 
>>> _______________________________________________
>>> netmod mailing list
>>> netmod@ietf.org
>>> https://www.ietf.org/mailman/listinfo/netmod
> 

--
Ladislav Lhotka, CZ.NIC Labs
PGP Key ID: E74E8C0C



