Hi Kent,

I think that the answer depends on when this module needs to be published:

If it needs to be published now, then I think that it should follow the standard IETF config/state module conventions and use separate /foo and /foo-state trees. This should make the module most widely usable. Would it help if the model defined two "require-instance false" dependency leafrefs, one to the potential underlay node in the config tree, and a second to the potential underlay node in the state tree?
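
For example, something along these lines (a rough sketch; the 'ex'
prefix and paths are illustrative, not taken from any draft).  Note
that since a config true leafref must refer to config data, the
second leafref would itself live in the /foo-state tree as config
false:

   // In the /foo (config) tree:
   leaf underlay-node {
     type leafref {
       path "/ex:foo/ex:node/ex:name";
       require-instance false;  // target may legitimately be absent
     }
   }

   // In the /foo-state (config false) tree:
   leaf underlay-node {
     type leafref {
       path "/ex:foo-state/ex:node/ex:name";
       require-instance false;
     }
   }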

Otherwise, if this model can be delayed until the revised-datastores and I2RS work has progressed, then it could use a single combined config/state tree. It seems that the revised-datastores solution would allow a simpler model to be constructed to represent network topologies.

Rob


On 14/02/2017 13:55, Kent Watsen wrote:

[moving yang-doctors to BCC]


OPTION 1: separate /foo and /foo-state trees
--------------------------------------------

This option was/is described here:
https://www.ietf.org/mail-archive/web/i2rs/current/msg04316.html.

PROS:
   a) does NOT break legacy clients (how we got here)
   b) consistent with convention used in many IETF modules
   c) able to show if/how opstate may differ from configured values

CONS:
   a) questionably valid YANG leafref usage
What does this mean?
I'm referring to how the description statement explains that
the server may look to operational state in order to resolve
the leafref, which is intended to result in behavior similar
to the pre-configuration behavior in RFC 7223.
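
Roughly this kind of pattern, i.e., a normal leafref whose
description statement carries the extra semantics (a sketch, not
the actual draft text):

   leaf dependency {
     type leafref {
       path "../../node/name";
       require-instance false;
     }
     description
       "Reference to another node.  If the referenced node does
        not exist in the configuration, the server may resolve
        this reference against operational state, resulting in
        behavior similar to the pre-provisioning behavior of
        RFC 7223.";
   }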



   b) complex server implementation (to handle require-instance false)
Can you elaborate on this one?
This is primarily a reflection of the CON listed above, in that
it seems that a server would need special handling for when
dependencies transition from being present to not-present and
vice versa, much like the code that handles a physical card
being plugged in or removed.

Note: I should've listed this as a CON for OPTION 2 as well.



   c) eventually the module would need to migrate to the long-term
      solution, which would result in needing to also rewrite all
      modules that have augmented it (e.g., ietf-te-topology).
   d) leafref path expressions really only work for configuration data,
      though a clever server could have a special ability to peek at
      the opstate values when doing validations.  Of course, with
      require-instance false, the value of leafref-based validation
      checking is negated anyway, even for config true nodes, so this
      may not matter much.



OPTION 2: explicit client-option to also return tagged opstate data
-------------------------------------------------------------------

This option takes a couple of forms.  The first is module-specific and
the second is generic.  In both cases, the idea is modeled after the
with-defaults solution (RFC 6243), wherein the client passes a special
flag into <get-config>, causing the server to also return opstate
data, tagged with a special metadata flag, intermingled with the
configuration data.


2A: Module-specific version

    module foo {
       yang-version 1.1;  // require-instance under leafref needs 1.1
       namespace "urn:example:foo";
       prefix foo;

       import ietf-netconf { prefix nc; }
       import ietf-yang-metadata { prefix md; }

       md:annotation server-provided {
          type boolean;
       }

       container nodes {
          config true;
          list node {
             key "name";
             leaf name { type string; }
             leaf dependency {
                type leafref {
                   path "../../node/name";
                   require-instance false;
                }
             }
          }
       }

       augment "/nc:get-config/nc:input" {
          leaf with-server-provided {
             type boolean;
          }
       }
    }
I don't think this solution is substantially different from the
solution in draft-ietf-i2rs-yang-network-topo-10.  You have just moved
a config false leaf to a metadata annotation.  This solution suffers
from the same problems as the solution in
draft-ietf-i2rs-yang-network-topo-10.
There are two primary differences:

1) It doesn't break legacy clients, because it requires the client to
    explicitly pass a 'with-server-provided' flag in the <get-config>
    request in order to get back the extended response.  Likewise, it
    doesn't break backup/restore workflows, as the server can discard
    any 'server-provided' nodes passed in an <edit-config> operation
    (see the sketch after these two points).  Lastly, it doesn't break
    <lock>/<unlock>, as there is no commingling of opstate data in the
    'running' datastore.

2) It doesn't say anything about how the opstate data is stored on the
    server.  The opstate data is not modeled at all.  This approach
    only defines a presentation-layer format for how opstate data can
    be returned via an RPC.  The server is free to persist the opstate
    data any way it wants, perhaps in an internal datastore called
    'operational-state' or in an uber-datastore with the opstate data
    flagged with a datastore='oper-state' attribute.  Regardless, it's
    an implementation detail, and the conceptual datastore model is
    preserved.
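
To make the backup/restore point concrete, a client replaying a
saved config that still contains the annotated node would be
harmless; the server just discards it (a sketch, using the example
module above):

   <edit-config>
     <target><running/></target>
     <config>
       <nodes>
         <node>
           <name>overlay-node</name>
           <dependency>underlay-node</dependency>
         </node>
         <node xmlns:foo='urn:example:foo'
               foo:server-provided='true'>
           <!-- not configuration; discarded by the server -->
           <name>underlay-node</name>
         </node>
       </nodes>
     </config>
   </edit-config>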



/martin
Kent




For instance:

   <get-config>
     <source>
       <running/>
     </source>
     <with-server-provided>true</with-server-provided>
   </get-config>

   <data>
     <nodes>
       <node>
         <name>overlay-node</name>
         <dependency>underlay-node</dependency>
       </node>
       <node xmlns:foo='urn:example:foo'
             foo:server-provided='true'>
         <name>underlay-node</name>
       </node>
     </nodes>
   </data>

PROS:
   a) does NOT break legacy clients (how we got here)
   b) having all data in one merged tree is simpler to process
      than two separate queries.
   c) module doesn't have to be rewritten for revised-datastores;
      the 'with-server-provided' switch would just not be passed
      by new opstate-aware clients.

CONS:
   a) inconsistent with convention used in many IETF modules
   b) unclear how to model 'with-server-provided' for RESTCONF
      (just use a description statement to define a query param?
      see the sketch after this list)
   c) unable to return the opstate value for any configured node
      (is it needed here?)
   d) requires server to support metadata, which is a relatively
      new concept and maybe not well supported by servers.
   e) only changes the presentation layer (it doesn't change the
      fact that 'server-provided' data is not configuration), thus
      the leafref path expressions still don't work quite the way
      desired, though a clever server could have a special ability
      to peek at the opstate values when doing validations.  Of
      course, with require-instance false, the value of leafref-
      based validation checking is negated anyway, even for config
      true nodes, so this may not matter much.
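
If CON (b) were handled with a description-statement-defined query
parameter, a RESTCONF request might look something like this (purely
hypothetical; no such query parameter is defined anywhere):

   GET /restconf/data/foo:nodes?with-server-provided=true HTTP/1.1
   Host: example.com
   Accept: application/yang-data+xml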




2B: Generic version

The generic version is much the same, but rather than letting the
solution be limited to this one module, the idea is to generalize
it so it could be a server-level feature.  Having a generic RPC to
return data from more than one DS at a time was something that was
discussed ~1.5 years ago when we were kicking off the opstate effort.

The PROS and CONS are similar, but there are additional CONS in the
generic case.  The main ones are 1) how to simultaneously return
both the config and opstate values for a node (split at the leaves,
sketched below) and 2) how to handle some YANG statements, such as
presence containers and choice nodes.  For this reason, (2B) is NOT
considered a viable solution and is only here so that it's clear
that it was discussed.
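
To illustrate the first of these: for a single leaf (say, a
hypothetical 'mtu'), it is unclear how one merged reply could carry
both values, e.g.:

   <node xmlns:foo='urn:example:foo'>
     <name>overlay-node</name>
     <mtu>1500</mtu>                             <!-- configured -->
     <mtu foo:server-provided='true'>1492</mtu>  <!-- opstate?  an
                                                      invalid duplicate
                                                      leaf -->
   </node>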



If there are any other options people want to suggest, please do so
now!

Thanks,
Kent

