And then maybe it is useful to have a standard mechanism to see what system 
config is available on a server. That could perhaps just be a read from 
<operational> with system config annotated with a new origin 
(or perhaps a read from <intended>, or a read from a new datastore).
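
For illustration only: such a read could resemble the NMDA <get-data> 
operation from RFC 8526 with the <with-origin> parameter, where 
system-created entries would come back tagged with an origin such as 
"or:system" (whether that origin, or a new one, is the right marker is 
exactly the open question):

   <rpc message-id="101"
        xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <get-data xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-nmda"
               xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
       <datastore>ds:operational</datastore>
       <!-- ask the server to annotate each node with its origin,
            e.g. or:system for server-created entries -->
       <with-origin/>
     </get-data>
   </rpc>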

Jason

From: Sterne, Jason (Nokia - CA/Ottawa)
Sent: Friday, August 27, 2021 5:53 PM
To: 'Jan Lindblad (jlindbla)' <jlind...@cisco.com>; Andy Bierman 
<a...@yumaworks.com>
Cc: Kent Watsen <kent+i...@watsen.net>; netmod@ietf.org
Subject: RE: [netmod] system configuration sync mechanism

Hi Jan,

I like your examples of U and L systems.

For the overall purpose of the draft -> I think it comes back to some servers 
wanting to have some sort of "built-in" list entries (invisible in <running> 
today) that can be referenced in other parts of the (user explicitly created) 
config.  So the operator doesn't have to actually create "qos-policy 1", but 
they can reference that built-in system qos policy in some interface config, 
for example. The reference is a leafref. So now what do we do about the fact 
that the leafref constraint is violated? Do we care?  (Yes, if we want a valid 
running datastore.)
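
As a purely hypothetical sketch (module and leaf names invented for 
illustration), the user-created part of <running> might hold only the 
reference, while the target list entry never shows up in a <get-config>:

   <!-- hypothetical config present in <running> -->
   <interfaces xmlns="urn:example:interfaces">
     <interface>
       <name>eth0</name>
       <!-- leafref pointing at the qos policy name; the built-in
            "qos-policy 1" is not returned in <get-config>, so an
            offline validator flags this as a broken reference -->
       <qos-policy>1</qos-policy>
     </interface>
   </interfaces>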

A server can just accept the config (which references a qos policy that isn't 
visible in a <get-config> of <running>). That's how some systems behave today.  
And that's all fine for that server.

But in some cases an operator may have a client, OSS, or workflow that requires 
offline validation of the config.  IMO, in those cases, the operator should 
have to explicitly define any qos policies they are referencing (if they want 
offline validity of <running>). The server could allow the user to explicitly 
configure qos-policy 1 (even though it already exists under the hood) and then 
it would always return that policy in a <get-config>, making the leafref valid.
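
A minimal sketch of that last option, again with invented module names: the 
client creates the entry itself, and from then on the server returns it in 
<get-config>, so the leafref resolves during offline validation:

   <rpc message-id="102"
        xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <edit-config>
       <target><running/></target>
       <config>
         <!-- hypothetical qos module; key matches the built-in policy -->
         <qos xmlns="urn:example:qos">
           <policy>
             <name>1</name>
           </policy>
         </qos>
       </config>
     </edit-config>
   </rpc>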

Jason


From: Jan Lindblad (jlindbla) <jlind...@cisco.com>
Sent: Monday, August 23, 2021 6:50 AM
To: Andy Bierman <a...@yumaworks.com>; Sterne, Jason 
(Nokia - CA/Ottawa) <jason.ste...@nokia.com>
Cc: Kent Watsen <kent+i...@watsen.net>; netmod@ietf.org
Subject: Re: [netmod] system configuration sync mechanism

Hi,

Sorry for coming late into this already unwieldy thread. Similar discussions 
have been flaring up regularly for as long as this working group has existed, 
and we have never been able to put the matter to final rest. At the heart of 
the issue is the age-old division between "unpredictable" and "lazy" managed 
systems (NETCONF servers).

Unpredictable systems: systems that modify and extend their running 
configuration spontaneously (outside standards)
Lazy systems: systems that treat running as sacred scriptures of the gods 
(operator community and management systems)

There are NETCONF servers out there of both types (and across the spectrum in 
between), with huge implementation investments and future aspirations. Nothing 
is going to remove either approach from the market any time soon. My point is 
that when new protocol extensions are presented, such as the system datastore, 
their implications need to be evaluated for both categories of systems.

A large number of point questions have been discussed in this thread, and 
I don't envy the draft authors trying to make sense of it all. An interim call, 
as suggested by Jason, may be the best answer, if the draft team can put 
together material with decision points to cover.

Jason made several important points that I'd like to underscore:

One of the pretty fundamental issues IMO is whether we want good ol' standard 
<running> to always be valid (including for an offline client or tool to be 
able to validate instance data retrieved from a server against the YANG models).

I find this an essential concept for running. Much depends on this tenet.

I agree there can be dynamically added system config (e.g. create a new qos 
policy, and some queue list entries are automatically created inside that 
policy).

This is the defining trait of "unpredictable" systems, and there are many of 
those. There are also many "lazy" systems, which would never allow this.

Best Regards,
/jan



TL;DR defining typical "unpredictable" (U) and "lazy" (L) NETCONF server 
behavior.

+ Factory reset
U: creates a default user, scans the hardware and injects default line card 
configs in running
L: creates a default user, scans the hardware and injects default line card 
configs in running

+ Insert new line card
U: creates a default config for inserted line card and interfaces, maybe adds a 
suitable default speed leaf depending on hardware type
L: no change of running, but reflects insertion in operational and maybe with a 
notification

+ Configure parameters of interface on inserted line card
U: only changed parameters need to be written, as running already has the 
interface
L: the entire interface needs to be created in running with the desired 
parameters (a sketch of the two edit payloads follows after this list)

+ Remove that line card
U: removes the config for the ejected line card
L: no change of running, but operational now reflects the interface state as 
[hardware missing]

+ Reinsert the line card
U: creates a default config for inserted line card, maybe adds a suitable 
default speed leaf depending on hardware type
L: no change of running, but interface comes back on line as previously 
configured and operational now reflects the interface state as [up]

+ Reconfigure the interface type of existing interface
U: rejected
L: accepted, but operational state now reflects the interface state as 
[hardware mismatch]

+ Reconfigure the name of an interface
U: rejected
L: accepted if the name could be valid for this device, but operational state 
now reflects the interface state as [hardware missing]

+ Install backup
U: If the set of interfaces in the backup is a subset of currently present 
hardware, it is activated, otherwise rejected
L: accepted. Any hardware that is configured in the backup but currently 
missing has its operational state shown as [hardware missing]
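
To make the "Configure parameters of interface on inserted line card" 
difference concrete (module and leaf names are hypothetical): on a U system 
the interface entry already exists in <running>, so the client merges only 
the changed leaf, whereas on an L system the client has to create the whole 
entry:

   <!-- U system: interface 1/1/1 already present in <running> -->
   <edit-config>
     <target><running/></target>
     <config>
       <interfaces xmlns="urn:example:interfaces">
         <interface>
           <name>1/1/1</name>
           <mtu>9000</mtu>   <!-- only the changed parameter -->
         </interface>
       </interfaces>
     </config>
   </edit-config>

   <!-- L system: nothing in <running> yet, so the client creates the entry -->
   <edit-config>
     <target><running/></target>
     <config>
       <interfaces xmlns="urn:example:interfaces">
         <interface>
           <name>1/1/1</name>
           <type>ethernet</type>
           <speed>10g</speed>
           <mtu>9000</mtu>
         </interface>
       </interfaces>
     </config>
   </edit-config>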

A variant that falls between U and L might be a system that considers the 
insertion of a line card an "act of configuration". Hardware manipulation could 
be considered a kind of (so far proprietary) protocol with defined 
configuration semantics. The behavior of such a system might be exactly like a 
"lazy" system except in the "Configure parameters of interface on inserted line 
card" use case, where it behaves like an "unpredictable" system when there is 
no prior config for the card.

Thanks for reading the TLDR.
/jan


On 18 Aug 2021, at 01:34, Andy Bierman <a...@yumaworks.com> wrote:

Hi,

I guess I do not agree with the premise of the draft, which is that the client
needs to take over control of the system-controlled configuration.  I will
wait for a draft update and see if that helps me understand it better.


Andy

On Tue, Aug 17, 2021 at 11:21 AM Kent Watsen <kent+i...@watsen.net> wrote:

>IMO this draft overlaps the factory-default datastore.
>Unfortunately, RFC 8808 does not document the NMDA Appendix A.3 details:
>https://datatracker.ietf.org/doc/html/rfc8342#appendix-A.3
>It does not say whether the <factory-default> datastore feeds into <running> 
>or into <intended>.
>It is not clear how <system> would interact with other datastores.
[Qin]: As described in Appendix A.3, two ways to interact with other datastores 
are discussed: one is to interact implicitly, the other is to use an RPC to 
trigger application of the datastore's data. In the factory-default case, the 
<factory-reset> RPC resets the contents of all relevant datastores to the 
factory default state.
The extreme case of the factory default state is no configuration at all in 
each datastore.
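
For reference, that RPC-triggered path is the <factory-reset> operation from 
the ietf-factory-default module in RFC 8808; a minimal NETCONF invocation 
would look roughly like this:

   <rpc message-id="103"
        xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <!-- resets <running>, and <startup>/<candidate> if present,
          to the factory default contents -->
     <factory-reset
         xmlns="urn:ietf:params:xml:ns:yang:ietf-factory-default"/>
   </rpc>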

Right.  Also, the word “flow” doesn’t seem quite right…at least in my mind, it 
suggests an ongoing relationship, whereas <factory-default> is really for 
one-time initializations.

From https://datatracker.ietf.org/doc/html/rfc8808#section-3:

   Management operations:  The contents of the datastore is set by the
      server in an implementation-dependent manner.  The contents cannot
      be changed by management operations via the Network Configuration
      Protocol (NETCONF), RESTCONF, the CLI, etc., unless specialized,
      dedicated operations are provided.  The datastore can be read
      using the standard NETCONF/RESTCONF protocol operations.  The
      "factory-reset" operation copies the factory default contents to
      <running> and, if present, <startup> and/or <candidate>.  The
      contents of these datastores is then propagated automatically to
      any other read-only datastores, e.g., <intended> and
      <operational>.


>It is not clear why it is even needed since <factory-default> contains only 
>system settings.
[Qin]: I agree <factory-default> could have system settings, but that is left 
unspecified for some reasons.
Based on the earlier discussion on factory default, what content is included in 
<factory-default> and how to format that content (e.g., a YANG instance data 
file format) have been ruled out of scope. See the diff in v-07:
https://www.ietf.org/rfcdiff?url2=draft-ietf-netmod-factory-default-07.txt


Regardless, <factory-default> cannot be used for immutable "system"-defined 
objects, since its contents initialize client-editable datastores.


K.

