There are actually a number of interesting implications with regard to NACM.
NACM could indeed be a key to the solution if it provides sufficient
flexibility in articulating and enforcing authorization rules.  On that note,
I have a number of questions/comments:


- If a subtree contains objects that a client does not have write privileges
for, will the client be prevented from locking the subtree?  How about the
case when the client does not even have read privileges?

The current NACM-bis draft states in section 3.7.2 that this is not the case – 
i.e. a client is able to lock an entire subtree, even in cases when there is 
not a single object in that subtree that the client actually has access 
privileges to.
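
To make this concrete, here is a rough sketch of a <partial-lock> request
(RFC 5717) whose select expression points at a subtree the client can neither
read nor write; the namespace and path below are made up for illustration.
Under the current 3.7.2 text, the lock would apparently still be granted:

  <rpc message-id="101"
       xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <partial-lock
        xmlns="urn:ietf:params:xml:ns:netconf:partial-lock:1.0">
      <!-- example only: a subtree this client can neither read nor write -->
      <select xmlns:ex="http://example.com/secret">/ex:secret-config</select>
    </partial-lock>
  </rpc>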

To me, this does not seem right.  It just invites abuse.

Now, there is still the possibility to restrict access to the operation
overall.  But again, this gives users an all-or-nothing choice.  Too
inflexible.  By the same token that partial locks were introduced so that a
client needing to conduct a transaction does not have to lock down the entire
server, there should be a way to restrict access to the lock and partial-lock
operations by targeted subtree.  However, this capability is currently
missing.
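
To illustrate the all-or-nothing choice: the only knob available today is a
protocol-operation rule, roughly as sketched below (group and rule names are
made up).  It denies <partial-lock> to a group outright; there is no way to
scope the denial to a targeted subtree:

  <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
    <rule-list>
      <name>limited-operators</name>
      <group>limited-oper</group>
      <rule>
        <name>deny-partial-lock</name>
        <rpc-name>partial-lock</rpc-name>
        <access-operations>exec</access-operations>
        <action>deny</action>
        <comment>Blocks the operation entirely; no per-subtree scoping.</comment>
      </rule>
    </rule-list>
  </nacm>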


- Where can NACM rules originate from?  The current model seems to assume
that rules will always be explicitly configured by an external client and
constitute part of the configuration.  Now, what about the case when rules
are to be enforced automatically by the server?  I think NACM should allow
for that, as well as for a mix of both.  (The server-provided topologies are
one potential example where this would be very useful.  In fact, if this
capability were supported, I don't think we would be having the
server-provided discussion in the topology context; NACM would be all we
need.  However, there are other use cases for this.  Think, for example, of
an intrusion-protection component on the server whose rules you would not
want overridden by an external user-administration client application that
you still want to allow.  Or cases where some authorizations get signaled,
e.g. in autonomic, peer-to-peer types of applications.)
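
To sketch what such a mix might look like (purely hypothetical, since NACM
today has no notion of where a rule originated): the server's
intrusion-protection component could install one rule-list on its own, while
the external user-administration client configures another, and the
server-installed one would need to take precedence and be protected from
being edited away.  All names, namespaces, and paths below are invented for
illustration:

  <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
    <!-- hypothetical: installed by the server's intrusion-protection
         component, and not overridable by external clients -->
    <rule-list>
      <name>server-intrusion-protection</name>
      <group>*</group>
      <rule>
        <name>quarantine-suspect-subtree</name>
        <path xmlns:ex="http://example.com/suspect">/ex:suspect-config</path>
        <access-operations>*</access-operations>
        <action>deny</action>
      </rule>
    </rule-list>
    <!-- configured, as today, by the external user-administration client -->
    <rule-list>
      <name>admin-configured</name>
      <group>admin</group>
      <rule>
        <name>permit-all</name>
        <module-name>*</module-name>
        <access-operations>*</access-operations>
        <action>permit</action>
      </rule>
    </rule-list>
  </nacm>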

--- Alex

From: i2rs [mailto:[email protected]] On Behalf Of Andy Bierman
Sent: Thursday, February 16, 2017 12:49 PM
To: Susan Hares <[email protected]>
Cc: [email protected]
Subject: Re: [i2rs] topo model use of NACM

Hi,

I am most concerned about getting the architecture right.
We have ignored server-created nodes until now.
I am glad I2RS WG is trying to deal with the problem.
Just make sure we have a reusable solution.

Also concerned about tool automation.
There was some discussion of a 'server-created' extension at some point I think.
This would help, because the server-created leaf is not really deterministic.
It is just a convention.

e.g.


  container networks {
     list network {
         // hypothetical 'server-created' extension statement
         i2rs:server-created;
         ...
         // vs. an ordinary leaf that is only a naming convention
         leaf server-created { ... }
         ...
     }
  }


Andy


On Thu, Feb 16, 2017 at 11:47 AM, Susan Hares <[email protected]> wrote:
Andy:
<chair hat off, individual contributor hat on>

AFAIK, I believe the revised datastore model is the right approach.  It is an
important question to ask whether the ability to have a mixture of
“server-provided” and “configured” is important for all topology models.  I
hope Xufeng and the other topology model authors will comment on this point.


Does the NETCONF datastore in the revised-datastores future include the
control-plane datastores?  I thought the answer was “no,” it does not.
Here’s some text from draft-ietf-netconf-rfc6536bis that leads me to believe
so:

   It is necessary to control access to specific nodes and subtrees
   within the NETCONF datastore, regardless of which protocol operation,
   standard or proprietary, was used to access the datastore.

3.2.  Datastore Access
<https://tools.ietf.org/html/draft-ietf-netconf-rfc6536bis-00#section-3.2>

   The same access control rules apply to all datastores, for example,
   the candidate configuration datastore or the running configuration
   datastore.

   Only the standard NETCONF datastores (candidate, running, and
   startup) are controlled by NACM.  Local or remote files or datastores
   accessed via the <url> parameter are not controlled by NACM.  A
   standalone RESTCONF server (i.e., not co-located with a NETCONF
   server) applies NACM rules to a conceptual datastore, since
   datastores are not supported in RESTCONF.


===========

The I2RS security environment actually looks at 3 policies on the server

Network Access <-----> server <-----> routing-system access
              (aka I2RS agent)
                    |<----> System access

It also looks at application access to the client

  Network access<----> client <----> application-access


The protocol only needs to consider the NACM access, but the routing
infrastructure needs to consider the server-to/from-routing-system and
server-to/from-system access.  My understanding is that the routing-system
access control module (RACM) and the system access control module (SACM)
functions were not in NACM.

Thanks again for posting,

Sue

From: i2rs [mailto:[email protected]] On Behalf Of Andy Bierman
Sent: Thursday, February 16, 2017 2:00 PM
To: [email protected]
Subject: [i2rs] topo model use of NACM

Hi,

The use of NACM for server-provided data is under-specified (at best).


from sec. 4.1:


   Finally, there is an object "server-provided".  This object is state
   that indicates how the network came into being.  Network data can
   come into being in one of two ways.  In one way, network data is
   configured by client applications, for example in case of overlay
   networks that are configured by an SDN Controller application.  In
   another way, it is populated by the server, in case of networks that
   can be discovered.

   If server-provided is set to false, the network was configured by a
   client application, for example in the case of an overlay network
   that is configured by a controller application.  If server-provided
   is set to true, the network was populated by the server itself,
   respectively an application on the server that is able to discover
   the network.  Client applications SHOULD NOT modify configurations of
   networks for which "server-provided" is true.  When they do, they
   need to be aware that any modifications they make are subject to be
   reverted by the server.  For servers that support NACM (Netconf
   Access Control Model), data node rules should ideally prevent write
   access by other clients to network instances for which server-
   provided is set to true.

The SHOULD NOT above is really odd, considering it is not supported by YANG
or the NC/RC protocols.

"data node rules should ideally prevent"

s/should/SHOULD/

Ideally prevent?
Is that a new engineering term?
Either this new usage of NACM works or it doesn't.

Also, there is no guidance or examples of the NACM config that the
server is supposed to magically create for server-provided topology data.
There is nothing in NACM at all about server-created data rules.
This is not supported by NACM.
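
For reference, the closest thing NACM can express today is a per-instance
data-node rule along these lines (the network-id is invented, and the module
namespace is assumed from the topology draft).  Note that the path has to
enumerate concrete instances: there is no way to make the rule conditional on
server-provided being true, and nothing says who would create or maintain
such rules:

  <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
    <rule-list>
      <name>protect-discovered-topology</name>
      <group>*</group>
      <rule>
        <name>deny-write-network-1</name>
        <!-- one rule per discovered network instance, kept in sync by
             some unspecified party -->
        <path xmlns:nw="urn:ietf:params:xml:ns:yang:ietf-network">/nw:networks/nw:network[nw:network-id='network-1']</path>
        <access-operations>create update delete</access-operations>
        <action>deny</action>
      </rule>
    </rule-list>
  </nacm>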

Does the I2RS text imply that the server-provided property extends
to the NACM sub-trees? They are also subject to replacement by the server?
The client SHOULD NOT change these NACM rules?

IMO the way this server-provided property is being done is a short-sighted
point solution; this should instead be a fundamental part of the revised
datastores work.
Is there something special about network topology such that
server-provided data for a different module would require a different
solution?  If not, is the topo module solution reusable?


Andy




_______________________________________________
i2rs mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/i2rs
