Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Lance Bragstad
For those who may be following along and are not familiar with what we mean
by federated auto-provisioning, see [0].

[0]
https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
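
For illustration, a minimal sketch of what such an auto-provisioning mapping
rule can look like, written as the Python structure handed to keystone's
mapping API; the remote attribute name is an assumption for an OIDC-style
deployment, and the project and role names are placeholders:

    # Hedged sketch of a federation mapping rule that auto-provisions a
    # project and a role assignment for the federated user.
    # "OIDC-preferred_username" is an illustrative assertion attribute,
    # not a required name.
    rules = [
        {
            "local": [
                {
                    "user": {"name": "{0}"},
                    "projects": [
                        {
                            "name": "{0}_project",
                            "roles": [{"name": "member"}],
                        }
                    ],
                }
            ],
            "remote": [
                {"type": "OIDC-preferred_username"},
            ],
        }
    ]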

On Wed, Sep 26, 2018 at 9:06 AM Morgan Fainberg wrote:

> This discussion was also not about user-assigned IDs, but predictable IDs
> with auto-provisioning. We still want it to be something keystone
> controls (locally). It might be a hash of the domain ID and a value from
> the assertion (similar to the LDAP user ID generator). As long as, within
> an environment, the IDs are predictable when auto-provisioning via
> federation, we should be good. And the problem of the totally unknown ID
> until provisioning could be made less of an issue for someone working
> within a massively federated edge environment.
>
> I don't want user-set or explicit admin-set IDs.
>
> On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:
>
>> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
>> > Thanks for the summary, Ildiko. I have some questions inline.
>> >
>> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>> >
>> > 
>> >
>> >>
>> >> We agreed to prefer federation for Keystone and came up with two work
>> >> items to cover missing functionality:
>> >>
>> >> * Keystone to trust a token from an ID Provider master and when the
>> auth
>> >> method is called, perform an idempotent creation of the user, project
>> >> and role assignments according to the assertions made in the token
>> >
>> > This sounds like it is based on the customizations done at Oath, which
>> to my recollection did not use the actual federation implementation in
>> keystone due to its reliance on Athenz (I think?) as an identity manager.
>> Something similar can be accomplished in standard keystone with the mapping
>> API in keystone which can cause dynamic generation of a shadow user,
>> project and role assignments.
>> >
>> >> * Keystone should support the creation of users and projects with
>> >> predictable UUIDs (e.g. a hash of the names of the users and projects).
>> >> This greatly simplifies Image federation and telemetry gathering
>> >
>> > I was in and out of the room and don't recall this discussion exactly.
>> We have historically pushed back hard against allowing setting a project ID
>> via the API, though I can see predictable-but-not-settable as less
>> problematic. One of the use cases from the past was being able to use the
>> same token in different regions, which is problematic from a security
>> perspective. Is that the idea here? Or could someone provide more details
>> on why this is needed?
>>
>> Hi Colleen,
>>
>> I wasn't in the room for this conversation either, but I believe the
>> "use case" wanted here is mostly a convenience one. If the edge
>> deployment is composed of hundreds of small Keystone installations and
>> you have a user (e.g. an NFV MANO user) which should have visibility
>> across all of those Keystone installations, it becomes a hassle to need
>> to remember (or in the case of headless users, store some lookup of) all
>> the different tenant and user UUIDs for what is essentially the same
>> user across all of those Keystone installations.
>>
>> I'd argue that as long as it's possible to create a Keystone tenant and
>> user with a unique name within a deployment, and as long as it's
>> possible to authenticate using the tenant and user *name* (i.e. not the
>> UUID), then this isn't too big of a problem. However, I do know that a
>> bunch of scripts and external tools rely on setting the tenant and/or
>> user via the UUID values and not the names, so that might be where this
>> feature request is coming from.
>>
>> Hope that makes sense?
>>
>> Best,
>> -jay
>>


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread James Penick
Hey Colleen,

>This sounds like it is based on the customizations done at Oath, which to
my recollection did not use the actual federation implementation in
keystone due to its reliance on Athenz (I think?) as an identity manager.
Something similar can be accomplished in standard keystone with the mapping
API in keystone which can cause dynamic generation of a shadow user,
project and role assignments.

You're correct, this was more about the general design of asymmetrical
token-based authentication rather than our exact implementation with
Athenz. We didn't use the shadow users because Athenz authentication in our
implementation is done via an 'ntoken', which is Athenz's older method of
identification, so it was more straightforward for us to resurrect the
PKI driver. The new way is via mTLS, where the user can identify themselves
via a client cert. I imagine we'll need to move our implementation to use
shadow users as a part of that change.

>We have historically pushed back hard against allowing setting a project
ID via the API, though I can see predictable-but-not-settable as less
problematic.

Yup, predictable-but-not-settable is what we need. Basically, as long as the
UUID is a hash of the name, we're good. I definitely don't want to be
able to set a user ID or project ID via the API, because of the security and
operability problems that could arise. In my mind this would just be a
config setting.

>One of the use cases from the past was being able to use the same token in
different regions, which is problematic from a security perspective. Is
that the idea here? Or could someone provide more details on why this is
needed?

Well, sorta. As far as we're concerned, you can authenticate to keystone
in each region independently using your credential from the IdP. Our use
cases are more about simplifying federation of other systems, like Glance.
Say I create an image and a member list for that image. I'd like to be able
to copy that image *and* all of its metadata straight across to another
cluster and have things Just Work without needing to look up and resolve
the new UUIDs on the new cluster.
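
For illustration, a rough sketch of that copy against the Glance v2 REST API,
assuming the destination cluster accepts a caller-supplied image ID and that
member project IDs resolve identically on both sides (which is exactly what
predictable IDs would buy us); the endpoints, token handling and image ID are
placeholders:

    import requests

    SRC = "https://glance.region-a.example.com"  # placeholder source
    DST = "https://glance.region-b.example.com"  # placeholder destination
    HDRS = {"X-Auth-Token": "..."}               # token handling elided

    image_id = "..."  # ID of the image to copy

    # Fetch the image record (metadata) and payload from the source.
    meta = requests.get(f"{SRC}/v2/images/{image_id}", headers=HDRS).json()
    data = requests.get(f"{SRC}/v2/images/{image_id}/file",
                        headers=HDRS).content

    # Recreate the image on the destination with the *same* ID so that
    # existing references keep resolving.
    requests.post(f"{DST}/v2/images", headers=HDRS, json={
        "id": meta["id"],
        "name": meta["name"],
        "disk_format": meta["disk_format"],
        "container_format": meta["container_format"],
        "visibility": meta["visibility"],
    })
    requests.put(f"{DST}/v2/images/{image_id}/file", data=data,
                 headers={**HDRS, "Content-Type": "application/octet-stream"})

    # Copy the member list; this only Just Works if the member project
    # IDs are identical on both clusters.
    members = requests.get(f"{SRC}/v2/images/{image_id}/members",
                           headers=HDRS).json()
    for m in members.get("members", []):
        requests.post(f"{DST}/v2/images/{image_id}/members",
                      headers=HDRS, json={"member": m["member_id"]})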

However, deployers who wish to use Keystone as their IdP will need to use
that keystone credential to establish a credential in the keystone cluster
in that region.

-James

On Wed, Sep 26, 2018 at 2:10 AM Colleen Murphy  wrote:

> Thanks for the summary, Ildiko. I have some questions inline.
>
> On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>
> 
>
> >
> > We agreed to prefer federation for Keystone and came up with two work
> > items to cover missing functionality:
> >
> > * Keystone to trust a token from an ID Provider master and when the auth
> > method is called, perform an idempotent creation of the user, project
> > and role assignments according to the assertions made in the token
>
> This sounds like it is based on the customizations done at Oath, which to
> my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
>
> > * Keystone should support the creation of users and projects with
> > predictable UUIDs (e.g. a hash of the names of the users and projects).
> > This greatly simplifies Image federation and telemetry gathering
>
> I was in and out of the room and don't recall this discussion exactly. We
> have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that the idea here? Or could someone provide more details
> on why this is needed?
>
> Were there any volunteers to help write up specs and work on the
> implementations in keystone?
>
> 
>
> Colleen (cmurphy)
>


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Giulio Fidente
hi,

thanks for sharing this!

In TripleO we're looking at implementing, in Stein, the deployment of at
least one regional DC and N edge zones. More comments below.

On 9/25/18 11:21 AM, Ildiko Vancsa wrote:
> Hi,
>
> Hereby I would like to give you a short summary of the discussions
that happened at the PTG in the area of edge.
>
> The Edge Computing Group sessions took place on Tuesday where our main
activity was to draw an overall architecture diagram to capture the
basic setup and requirements of edge towards a set of OpenStack
services. Our main and initial focus was around Keystone and Glance, but
discussion with other project teams such as Nova, Ironic and Cinder also
happened later during the week.
>
> The edge architecture diagrams we drew are part of a so-called Minimum
Viable Product (MVP), which refers to the minimalist nature of the setup
where we didn’t try to cover all aspects but rather define a minimum set
of services and requirements to get to a functional system. This
architecture will evolve further as we collect more use cases and
requirements.
>
> To describe edge use cases at a higher level, with Mobile Edge as a
> background use case, we identified three main building blocks:
>
> * Main or Regional Datacenter (DC)
> * Edge Sites
> * Far Edge Sites or Cloudlets
>
> We examined the architecture diagram with the following user stories
in mind:
>
> * As a deployer of OpenStack I want to minimize the number of control
planes I need to manage across a large geographical region.
> * As a user of OpenStack I expect instance autoscale continues to
function in an edge site if connectivity is lost to the main datacenter.
> * As a deployer of OpenStack I want disk images to be pulled to a
cluster on demand, without needing to sync every disk image everywhere.
> * As a user of OpenStack I want to manage all of my instances in a
region (from regional DC to far edge cloudlets) via a single API endpoint.
>
> We concluded to talk about service requirements in two major categories:
>
> 1. The Edge sites are fully operational in case of a connection loss
between the Regional DC and the Edge site, which requires control plane
services running on the Edge site
> 2. Having full control over the Edge site is not critical in case of a
connection loss between the Regional DC and an Edge site, which can be
satisfied by having the control plane services running only in the
Regional DC
>
> In the first case the orchestration of the services becomes harder and
is not necessarily solved yet, while in the second case you have
centralized control but lose functionality on the Edge sites in the
event of a connection loss.
>
> We did not discuss things such as HA at the PTG and we did not go into
details on networking during the architectural discussion either.

While TripleO used to rely on Pacemaker to manage cinder-volume A/P in
the control plane, we'd like to push for cinder-volume A/A in the edge
zones and avoid deploying Pacemaker there.

The safety of cinder-volume A/A seems to depend mostly on the backend
driver, and for RBD we should be good.

> We agreed to prefer federation for Keystone and came up with two work
items to cover missing functionality:
>
> * Keystone to trust a token from an ID Provider master and when the
auth method is called, perform an idempotent creation of the user,
project and role assignments according to the assertions made in the token
> * Keystone should support the creation of users and projects with
predictable UUIDs (e.g. a hash of the names of the users and projects).
This greatly simplifies Image federation and telemetry gathering
>
> For Glance we explored image caching and spent some time discussing
the option to also cache metadata so a user can boot new instances at
the edge in case of a network connection loss which would result in
being disconnected from the registry:
>
> * As a user of Glance, I want to upload an image in the main
datacenter and boot that image in an edge datacenter, fetching the image
to the edge datacenter together with its metadata
>
> We are still in the process of documenting the discussions and drawing
the architecture diagrams and flows for Keystone and Glance.

For Glance we'd like to deploy only one glance-api in the regional DC
and configure the Glance cache in each edge zone, pointing all instances
to a shared database.

This should solve the metadata problem and also provide storage
"locality" in every edge zone.

> In addition to the above we went through Dublin PTG wiki
(https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG)
capturing requirements:
>
> * we agreed to consider the list of requirements on the wiki finalized
for now
> * agreed to move there the additional requirements listed on the Use
Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases)
wiki page
>
> For the details on the discussions with related OpenStack projects you
can check the following etherpads for notes:
>
> * Cinder:
https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018

Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Morgan Fainberg
This discussion was also not about user-assigned IDs, but predictable IDs
with auto-provisioning. We still want it to be something keystone
controls (locally). It might be a hash of the domain ID and a value from
the assertion (similar to the LDAP user ID generator). As long as, within
an environment, the IDs are predictable when auto-provisioning via
federation, we should be good. And the problem of the totally unknown ID
until provisioning could be made less of an issue for someone working
within a massively federated edge environment.

I don't want user-set or explicit admin-set IDs.
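
For illustration, a minimal sketch of that kind of generator, assuming
SHA-256 in the spirit of keystone's LDAP ID generator; the input layout and
separator are assumptions, not a settled design:

    import hashlib

    def predictable_id(domain_id: str, asserted_name: str) -> str:
        # Hedged sketch: derive a deterministic ID from the domain ID
        # plus a stable value from the federation assertion, so every
        # keystone in the environment computes the same ID for the same
        # federated user. The separator is an illustrative choice.
        return hashlib.sha256(
            "{}:{}".format(domain_id, asserted_name).encode("utf-8")
        ).hexdigest()

    # Any two keystones provisioning the same user agree on the ID:
    assert (predictable_id("edge", "alice") ==
            predictable_id("edge", "alice"))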

On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:

> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
> > Thanks for the summary, Ildiko. I have some questions inline.
> >
> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
> >
> > 
> >
> >>
> >> We agreed to prefer federation for Keystone and came up with two work
> >> items to cover missing functionality:
> >>
> >> * Keystone to trust a token from an ID Provider master and when the auth
> >> method is called, perform an idempotent creation of the user, project
> >> and role assignments according to the assertions made in the token
> >
> > This sounds like it is based on the customizations done at Oath, which
> to my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
> >
> >> * Keystone should support the creation of users and projects with
> >> predictable UUIDs (e.g. a hash of the names of the users and projects).
> >> This greatly simplifies Image federation and telemetry gathering
> >
> > I was in and out of the room and don't recall this discussion exactly.
> We have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that the idea here? Or could someone provide more details
> on why this is needed?
>
> Hi Colleen,
>
> I wasn't in the room for this conversation either, but I believe the
> "use case" wanted here is mostly a convenience one. If the edge
> deployment is composed of hundreds of small Keystone installations and
> you have a user (e.g. an NFV MANO user) which should have visibility
> across all of those Keystone installations, it becomes a hassle to need
> to remember (or in the case of headless users, store some lookup of) all
> the different tenant and user UUIDs for what is essentially the same
> user across all of those Keystone installations.
>
> I'd argue that as long as it's possible to create a Keystone tenant and
> user with a unique name within a deployment, and as long as it's
> possible to authenticate using the tenant and user *name* (i.e. not the
> UUID), then this isn't too big of a problem. However, I do know that a
> bunch of scripts and external tools rely on setting the tenant and/or
> user via the UUID values and not the names, so that might be where this
> feature request is coming from.
>
> Hope that makes sense?
>
> Best,
> -jay
>


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Jay Pipes

On 09/26/2018 05:10 AM, Colleen Murphy wrote:

Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:





We agreed to prefer federation for Keystone and came up with two work
items to cover missing functionality:

* Keystone to trust a token from an ID Provider master and when the auth
method is called, perform an idempotent creation of the user, project
and role assignments according to the assertions made in the token


This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API in keystone which 
can cause dynamic generation of a shadow user, project and role assignments.


* Keystone should support the creation of users and projects with
predictable UUIDs (e.g. a hash of the names of the users and projects).
This greatly simplifies Image federation and telemetry gathering


I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that the idea
here? Or could someone provide more details on why this is needed?


Hi Colleen,

I wasn't in the room for this conversation either, but I believe the 
"use case" wanted here is mostly a convenience one. If the edge 
deployment is composed of hundreds of small Keystone installations and 
you have a user (e.g. an NFV MANO user) which should have visibility 
across all of those Keystone installations, it becomes a hassle to need 
to remember (or in the case of headless users, store some lookup of) all 
the different tenant and user UUIDs for what is essentially the same 
user across all of those Keystone installations.


I'd argue that as long as it's possible to create a Keystone tenant and 
user with a unique name within a deployment, and as long as it's 
possible to authenticate using the tenant and user *name* (i.e. not the 
UUID), then this isn't too big of a problem. However, I do know that a 
bunch of scripts and external tools rely on setting the tenant and/or 
user via the UUID values and not the names, so that might be where this 
feature request is coming from.
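
For illustration, a minimal sketch of that name-only authentication with
keystoneauth1, which works unchanged against any keystone in the deployment
as long as the names are consistent; the endpoint and credentials are
placeholders:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Hedged sketch: authenticate using only names (no UUIDs), so the
    # same credentials can be pointed at any of the independent edge
    # keystones. All values are placeholders.
    auth = v3.Password(
        auth_url="https://edge-site-042.example.com:5000/v3",
        username="nfv-mano",
        password="...",
        user_domain_name="Default",
        project_name="mano",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())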


Hope that makes sense?

Best,
-jay



Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Colleen Murphy
Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:



> 
> We agreed to prefer federation for Keystone and came up with two work 
> items to cover missing functionality:
> 
> * Keystone to trust a token from an ID Provider master and when the auth 
> method is called, perform an idempotent creation of the user, project 
> and role assignments according to the assertions made in the token

This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API in keystone which 
can cause dynamic generation of a shadow user, project and role assignments.

> * Keystone should support the creation of users and projects with 
> predictable UUIDs (e.g. a hash of the names of the users and projects). 
> This greatly simplifies Image federation and telemetry gathering

I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that the idea 
here? Or could someone provide more details on why this is needed?

Were there any volunteers to help write up specs and work on the 
implementations in keystone?



Colleen (cmurphy)



[openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-25 Thread Ildiko Vancsa
Hi,

Hereby I would like to give you a short summary of the discussions that
happened at the PTG in the area of edge.

The Edge Computing Group sessions took place on Tuesday where our main activity 
was to draw an overall architecture diagram to capture the basic setup and 
requirements of edge towards a set of OpenStack services. Our main and initial 
focus was around Keystone and Glance, but discussion with other project teams 
such as Nova, Ironic and Cinder also happened later during the week.

The edge architecture diagrams we drew are part of a so-called Minimum Viable
Product (MVP), which refers to the minimalist nature of the setup where we
didn’t try to cover all aspects but rather define a minimum set of services and 
requirements to get to a functional system. This architecture will evolve 
further as we collect more use cases and requirements.

To describe edge use cases at a higher level, with Mobile Edge as a background
use case, we identified three main building blocks:

* Main or Regional Datacenter (DC)
* Edge Sites
* Far Edge Sites or Cloudlets

We examined the architecture diagram with the following user stories in mind:

* As a deployer of OpenStack I want to minimize the number of control planes I 
need to manage across a large geographical region.
* As a user of OpenStack I expect instance autoscale continues to function in 
an edge site if connectivity is lost to the main datacenter.
* As a deployer of OpenStack I want disk images to be pulled to a cluster on 
demand, without needing to sync every disk image everywhere.
* As a user of OpenStack I want to manage all of my instances in a region (from 
regional DC to far edge cloudlets) via a single API endpoint. 

We concluded to talk about service requirements in two major categories:

1. The Edge sites are fully operational in case of a connection loss between
the Regional DC and the Edge site, which requires control plane services
running on the Edge site
2. Having full control over the Edge site is not critical in case of a
connection loss between the Regional DC and an Edge site, which can be
satisfied by having the control plane services running only in the Regional DC

In the first case the orchestration of the services becomes harder and is not 
necessarily solved yet, while in the second case you have centralized control 
but lose functionality on the Edge sites in the event of a connection loss.

We did not discuss things such as HA at the PTG and we did not go into details 
on networking during the architectural discussion either.

We agreed to prefer federation for Keystone and came up with two work items to 
cover missing functionality:

* Keystone to trust a token from an ID Provider master and when the auth method 
is called, perform an idempotent creation of the user, project and role 
assignments according to the assertions made in the token
* Keystone should support the creation of users and projects with predictable 
UUIDs (e.g. a hash of the names of the users and projects). This greatly
simplifies Image federation and telemetry gathering

For Glance we explored image caching and spent some time discussing the option 
to also cache metadata so a user can boot new instances at the edge in case of 
a network connection loss which would result in being disconnected from the 
registry:

* As a user of Glance, I want to upload an image in the main datacenter and
boot that image in an edge datacenter, fetching the image to the edge
datacenter together with its metadata

We are still in the process of documenting the discussions and drawing the
architecture diagrams and flows for Keystone and Glance.


In addition to the above we went through Dublin PTG wiki 
(https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) 
capturing requirements:

* we agreed to consider the list of requirements on the wiki finalized for now
* agreed to move there the additional requirements listed on the Use Cases 
(https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page

For the details on the discussions with related OpenStack projects you can 
check the following etherpads for notes:

* Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
* Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture
* Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge
* Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture
* Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg
* Nova: https://etherpad.openstack.org/p/nova-ptg-stein

Notes from the StarlingX sessions: 
https://etherpad.openstack.org/p/stx-PTG-agenda


We are still working on the MVP architecture to clean it up and discuss 
comments and questions before moving it to a wiki page. Please let me know if 
you would like to get access to the document and I will share it with you.

Please let me know if you have any questions or comments on the above captured
items.

Thanks and Best Regards,
Ildikó