On 27/10/2012 00:17, Henry Nash wrote:
So to pick up on a couple of the areas of contention:

a) Roles.  I agree that role names must stay globally unique.  One way
of thinking about this is that it is not actually Keystone that is
creating the "role name space"; it is the other services (Nova etc.), by
specifying roles in their policy files.  Until those services support
domain-specific segmentation, role names stay global.

I addressed this issue in my Federation design doc (in Appendix 2). Here is the text, to save you having to look it up (note that an attribute is simply a generalisation of a role and is needed in the broader authz context; roles are too limiting):

"Attributes may be globally defined, e.g. visa attributes, or locally defined e.g. member of club X. Globally defined attributes are often specified in international standards and may be used in several different domains and federations. Their syntax and semantics are fixed, regardless of which Attribute Authority (AA) issues them. Local attributes are defined by their issuing attribute authority and usually are only valid in the domain or federation in which the AA is a member. For locally identifiable attributes the attribute authority (issuer) must be globally identifiable (in the federation). The attribute then becomes globally identifiable through hierarchical naming (AA.attribute)."

Whilst in a non-federated world the service provider (e.g. Swift) can unilaterally define the roles it wants, in a federated world the attributes have to be mutually agreed between the issuer (AA) and the consumer (e.g. Swift).

To address this issue I proposed a role mapping (attribute mapping) service that is run by Keystone, and it maps between the role/attribute required by the service and the actual attribute issued by the AA. For example, say Swift requires the role of Admin to be assigned to administrators, whereas company X, the attribute authority, assigns the LDAP attribute title=OpenStack Cloud Administrator to its admin staff. Keystone will use its attribute mapping service to map between these values.
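To make this concrete, here is a minimal sketch of the mapping idea in Python. The issuer and attribute names are purely illustrative, and this is not an existing Keystone interface:

# Illustrative sketch only: the mapping service described above, reduced
# to a lookup table.  Issuer and attribute names are hypothetical.

# (attribute authority, attribute it issues) -> role the service expects
ATTRIBUTE_MAP = {
    ("company-x-ldap", "title=OpenStack Cloud Administrator"): "Admin",
}

def map_attribute(issuer, issued_attribute):
    """Translate an externally issued attribute into the role the
    consuming service (e.g. Swift) has agreed to understand."""
    return ATTRIBUTE_MAP.get((issuer, issued_attribute))

# Swift asks for "Admin"; company X's AA issued an LDAP title attribute.
print(map_attribute("company-x-ldap",
                    "title=OpenStack Cloud Administrator"))  # -> Admin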


b) Will multi-domains make it more complicated in terms of authorisation
- e.g. will the users have to input a Domain Name into Horizon the whole
time?  The first thing I would say is that if the cloud administrator
has created multiple domains, then the Keystone API should indeed require
the domain specification.

Again, in our federated design document we have the concept of a realm, which is similar to that of a domain, except that in the federated case it indicates the place where the user will be authenticated and obtain (some of) their authz attributes from. The user can indicate the realm/domain name on the command line, but if it is missing, Keystone replies with a list of domains that it knows about and asks the user to choose one from the list.
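As a rough sketch of that interaction (hypothetical client-side logic, not the actual protocol):

# Hypothetical sketch of the "choose a realm/domain if none was given" flow.

def choose_domain(requested_domain, keystone_known_domains):
    """Return the domain to authenticate against.

    If the user named one on the command line, use it; otherwise present
    the list Keystone returned and ask the user to pick one.
    """
    if requested_domain:
        return requested_domain
    for i, name in enumerate(keystone_known_domains, start=1):
        print("%d) %s" % (i, name))
    choice = int(input("Select a domain: "))
    return keystone_known_domains[choice - 1]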

However, that should not mean it should be
laborious for a Horizon user.  In the case where a Cloud Provider has
created domains to encapsulate each of their customers, then if they
want to let those customers use Horizon as the UI, I would think
they want to be able to give each customer a unique URL which will point
to a Horizon that "knows which domain to go to".

This is certainly a possibility.

regards

David

Maybe the URL contains
the Domain Name or ID in the path, and Horizon pulls this out of its own
URL (assuming that's possible), and hence the user is never given an
option to choose a domain.  A Cloud Admin would use a "non domain
qualified URL" to get to Horizon (basically as it is now) and hence be
able to see the different domains.  Likewise, in the case where the
Cloud Provider has not chosen to create any individual domains (and is
just running the cloud in the default domain), the "non domain
qualified URL" would point to a Horizon that only shows the one, default
domain, and hence no choice is required.
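A rough sketch of that idea, assuming the domain is simply the first path segment of the URL (a hypothetical layout, not an existing Horizon feature):

# Illustrative only: extracting a domain from the URL a customer was given,
# e.g. https://horizon.example.com/acme/ -> domain "acme".
from urllib.parse import urlparse

def domain_from_url(url):
    """Return the first path segment as the domain name, or None for the
    "non domain qualified" URL used by the cloud admin / default domain."""
    path = urlparse(url).path.strip("/")
    return path.split("/")[0] if path else None

print(domain_from_url("https://horizon.example.com/acme/"))   # acme
print(domain_from_url("https://horizon.example.com/"))        # None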


Henry

On 26 Oct 2012, at 17:31, heckj wrote:

Bringing the conversation about domains in Keystone to the broader
mailing lists.


On Oct 26, 2012, at 5:18 AM, Dolph Mathews <dolph.math...@gmail.com> wrote:
I think this discussion would be great for both mailing lists.

-Dolph


On Fri, Oct 26, 2012 at 5:18 AM, Henry Nash <henry.n...@mac.com> wrote:

    Hi

    <Not sure where best to have this discussion - here, as a comment
    to the v3api doc, or elsewhere - appreciate some guidance and
    will transfer this to the right place>

    At the Summit we started a discussion on whether things like user
    name, tenant name etc. should be globally unique or unique within
    a domain.  I'd like to widen that discussion to try to a) agree on
    a direction, and b) agree on some changes to our current spec.
    Here's my view as an opening gambit:

    - When a Keystone instance is first started, there is only one,
    default, Domain.  The Cloud Provider does not need to create any
    new domains, all projects can exist in this default domain, as
    will the users etc.  There is one, global, name space.  Clients
    using the v2 API will work just fine.


+1

Very much what we were thinking for the initial implementation and
rollout, to make it backwards "compatible" with the V2 (non-domain)
core API.

    - If the Cloud Provider wants to provide their customers with
    regions that they can administer themselves and that are
    self-contained, then they create a Domain for each customer.  It
    should be possible for users/roles to be scoped to a Domain so that
    (effectively) administrative duties can be delegated to some
    users in that Domain.  So far so good - all this can be done with
    the v3 API.


Not clear whether you're referring to endpoint regions, or just
describing domain isolation?

I believe you're describing the key use cases behind the domains
mechanism to begin with - user and project partitioning to allow for
administration of those to be clearly "owned" and managed appropriately.


    - We still have work to do to make sure items in other OS
    projects that reference tenants (e.g. Images) can take a Domain
    or Project ID, but we'll get to that soon enough


Everything will continue to work with projects, but once middleware
starts providing a DOMAIN_ID and DOMAIN_NAME to the underlying
service, it'll be up to them to take advantage of it. Images per
domain is an excellent example use case.
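For illustration, a service behind the auth middleware could consume those values roughly as below; X-Tenant-Id exists today, while the domain headers are an assumption about what the middleware would add:

# Hypothetical sketch: a WSGI service reading the domain identifiers that
# the auth middleware is expected to inject (domain header names assumed).

def handle_request(environ, start_response):
    project_id = environ.get("HTTP_X_TENANT_ID")
    domain_id = environ.get("HTTP_X_DOMAIN_ID")      # assumed future header
    domain_name = environ.get("HTTP_X_DOMAIN_NAME")  # assumed future header

    # A service could, for example, scope image listings per domain here.
    body = ("project=%s domain=%s (%s)\n" %
            (project_id, domain_id, domain_name)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]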

    - However, Cloud Providers want to start enabling enterprise
    customers to run more and more of their workloads in OpenStack
    clouds - over and above the smaller-sized companies that are
    doing this today.  For this to work, the encapsulation of a
    Domain needs, I think, to be stricter - and this is where the
    name space comes into play.  I think we need to allow for a
    Domain to have its own namespace (i.e. users, roles, projects
    etc.) as an option.  I see this as a first step to allowing each
    Domain to have its own AuthZ/N service (e.g. an external LDAP
    owned and hosted by the customer who will be using the Domain).

    Implementation:

    - A simplistic version would just allow a flag to be specified on
    Domain creation that says whether this is a "private" or "shared"
    Domain.  Shared would use the current global name space (and
    would probably be the default for compatibility reasons).
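As a sketch of what that might look like on the wire, a domain-create body could carry the flag alongside the existing fields (the "namespace" attribute below is hypothetical, not part of the draft v3 spec):

# Hypothetical request body for POST /v3/domains with the proposed flag.
# "name", "description" and "enabled" are existing domain fields;
# "namespace" is only an illustration of the private/shared idea.
create_domain_request = {
    "domain": {
        "name": "customer-a",
        "description": "Customer A's private domain",
        "enabled": True,
        "namespace": "private",  # proposed: "private" or "shared" (default)
    }
}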


I like the direction of this -- need to digest implications :)

I like the idea conceptually - but let's be clear on the implications
to the end users:

Where we're starting is preserving a global name space for project
names and user names. Allowing a mix of segregated and global name
spaces imposes the burden of needing additional data to uniquely
resolve authentication and authorization.

We've been keeping to 2 key pieces of info (username, password) to get
authenticated - and then (via CLI or Horizon dashboard) you can choose
from a list of potential projects and carry on. In most practical
circumstances, any user working primarily from the CLI is already
providing 3-4 pieces of information:

* username
* password
* tenant name
* auth_url

to access and use the cloud.
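For example, with the current python-keystoneclient the CLI flow boils down to those four values (placeholder credentials and endpoint):

# The four pieces of information a CLI user already supplies today
# (values are placeholders).
from keystoneclient.v2_0 import client

keystone = client.Client(
    username="alice",
    password="secret",
    tenant_name="project-a",
    auth_url="http://keystone.example.com:5000/v2.0",
)
token = keystone.auth_token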

By allowing domains to be their own namespaces, we're adding another
element that will be absolutely required to identify the person
authenticating:
 * domain name

implying a cascade of changes to the user experience all the way down
through Horizon.
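Under the draft v3 API that extra element would appear in the authentication payload itself, roughly along these lines (the exact body is still being finalised, so treat this as an assumption):

# Sketch of a v3 password authentication body where the user must now be
# qualified by a domain (names are placeholders; format per the draft spec).
v3_auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "alice",
                    "domain": {"name": "customer-a"},  # the new element
                    "password": "secret",
                },
            },
        },
        "scope": {
            "project": {
                "name": "project-a",
                "domain": {"name": "customer-a"},
            },
        },
    }
}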


    - A more flexible approach would be to allow the specification of
    where the various sub-services of Keystone (e.g. AuthN/Z, Service
    Catalogue, Resources (i.e. Users, Projects)) are hosted.  The
    defaults would all point back to the default domain (i.e. are
    global and shared), but instead could be specified as "self"
    (i.e. the new Domain) or, in the future, some other external
    location, e.g. for a remote LDAP.
    - As an aside, this multi-name space model could also allow the
    Cloud Provider their own name space, separate from their
    customers - i.e. they will have a need to create admins who can
    just create domains and on-board customers into those domains.
     These users & roles could exist in the default domain, while all
    the customers' users/roles exist solely within their own domains.
    - One potential problem I do see is with roles.  Today, the role
    name is, if I understand it correctly, a kind of shared secret
    between other services and Keystone - e.g. it is the actual name
    of a given role, say "ProjectAdmin", that must match in, say,
    the Nova policy file and the role assignment in Keystone (please
    correct me if I have this wrong).


You're 100% correct.

    How would that work if the role names were not unique across Domains?


Not that we would want admins to ever see role IDs, or edit policy
files with role IDs, but they provide a potential solution.

The different role names would need to be accounted for in the policy
files the way they're set up today - the enforcement there is all at
the service level. There's no current provision for evaluating policy
differently based on domain. While that's possible, it sounds like a
tremendous cascade of additional complication, as the policy and
roles are all set up and managed by deployers.
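To ground that, here is a minimal sketch of the "shared secret" matching as it works today: the role string a deployer writes into a policy rule must literally match the role name assigned in Keystone (the rule and role names below are illustrative, not actual Nova defaults):

# Illustrative sketch of role-name matching: the role string in the
# policy rule and the role name in the token must be identical.
POLICY = {
    "compute:create": ["ProjectAdmin", "Member"],  # names a deployer chose
}

def enforce(action, token_roles):
    """Allow the action if any role in the token matches the policy rule."""
    allowed_roles = POLICY.get(action, [])
    return any(role in allowed_roles for role in token_roles)

print(enforce("compute:create", ["ProjectAdmin"]))       # True
print(enforce("compute:create", ["acme/ProjectAdmin"]))  # False: a domain-
# qualified role name no longer matches, which is the problem raised above.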

I think this might be an interesting addition in the future, but I want
to keep the initial implementation and roll-out of the policy
mechanisms and domains consistent and simple for the first iteration.

    What is the desired functionality for a Cloud Provider wanting to
    give their enterprise customers this level of flexibility - will
    they have dedicated Nova endpoints anyway?  Sounds too rigid.
     This might tie into another bp we are working on at IBM in terms
    of using Availability zones to allow Cloud Providers to divide up
    their compute resources in a more flexible way.
    - Finally, I wanted to raise the subject of whether we should
    make it a goal to remain compatible with the v2 API /once the
    cloud provider starts creating additional domains/.


Joe and I briefly discussed this at the summit. As a migration to v3,
we'd obviously be creating the default domain and mapping all
existing users/projects/etc. to it. I'd be fine if the v2
implementation ONLY interacted with resources in that default domain;
i.e. if you want to use domains, upgrade to a v3 client.

    As stated above, if just the default domain is being used, then
    fine.  And while I agree that, technically, the v2 API should
    still work with the above if all the other domains point back to
    the default domain for their sub-services - it feels overly
    flexible (and maybe wrong conceptually) to support v2 semantics
    across a multi-domain installation.


+1


    Interested in everyone else's view.

    Henry






_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


