Re: [openstack-dev] [nova][cinder] about unified limits

2018-11-09 Thread Lance Bragstad
Sending a follow up here since there has been some movement on this
recently.

There is a nova specification up for review that goes through the work to
consume unified limits out of keystone [0]. John and Jay have also been
working through the oslo.limit integration, which is forcing us to think
about the interface. There are a few patches up that take different
approaches [1][2].

If anyone is still interested in helping out with this work, please don't
hesitate to reach out.

[0] https://review.openstack.org/#/c/602201/
[1] https://review.openstack.org/#/c/615180/
[2]
https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open

On Tue, Sep 11, 2018 at 8:10 AM Lance Bragstad  wrote:

> Extra eyes on the API would be appreciated. We're also close to the point
> where we can start incorporating oslo.limit into services, so preparing
> those changes might be useful, too.
>
> One of the outcomes from yesterday's session was that Jay and Mel (from
> nova) were going to work out some examples we could use to finish up the
> enforcement code in oslo.limit. Helping out with that or picking it up
> would certainly help move the ball forward in nova.
>
>
>
>
> On Tue, Sep 11, 2018 at 1:15 AM Jaze Lee  wrote:
>
>> I recommend li...@unitedstack.com to join in and help move this forward.
>> Maybe first we should confirm the keystone unified limits API is really
>> OK, or is there something else to do first?
>>
>> Lance Bragstad  于2018年9月8日周六 上午2:35写道:
>> >
>> > That would be great! I can break down the work a little bit to help
>> describe where we are at with different parts of the initiative. Hopefully
>> it will be useful for your colleagues in case they haven't been closely
>> following the effort.
>> >
>> > # keystone
>> >
>> > Based on the initial note in this thread, I'm sure you're aware of
>> keystone's status with respect to unified limits. But to recap, the initial
>> implementation landed in Queens and targeted flat enforcement [0]. During
>> the Rocky PTG we sat down with other services and a few operators to
>> explain the current status in keystone and if either developers or
>> operators had feedback on the API specifically. Notes were captured in
>> etherpad [1]. We spent the Rocky cycle fixing usability issues with the API
>> [2] and implementing support for a hierarchical enforcement model [3].
>> >
>> > At this point keystone is ready for services to start consuming the
>> unified limits work. The unified limits API is still marked as experimental and
>> it will likely stay that way until we have at least one project using
>> unified limits. We can use that as an opportunity to do a final flush of
>> any changes that need to be made to the API before fully supporting it. The
>> keystone team expects that to be a quick transition, as we don't want to
>> keep the API hanging in an experimental state. It's really just a
>> safeguard to make sure we have the opportunity to use it in another service
>> before fully committing to the API. Ultimately, we don't want to
>> prematurely mark the API as supported when other services aren't even using
>> it yet, and then realize it has issues that could have been fixed prior to
>> the adoption phase.
>> >
>> > # oslo.limit
>> >
>> > In parallel with the keystone work, we created a new library to aid
>> services in consuming limits. Currently, the sole purpose of oslo.limit is
>> to abstract project and project hierarchy information away from the
>> service, so that services don't have to reimplement client code to
>> understand project trees, which could arguably become complex and lead to
>> inconsistencies in u-x across services.
>> >
>> > Ideally, a service should be able to pass some relatively basic
>> information to oslo.limit and expect an answer on whether or not usage for
>> that claim is valid. For example, here is a project ID, resource name, and
>> resource quantity, tell me if this project is over it's associated limit or
>> default limit.
>> >
>> > We're currently working on implementing the enforcement bits of
>> oslo.limit, which requires making API calls to keystone in order to
>> retrieve the deployed enforcement model, limit information, and project
>> hierarchies. Then it needs to reason about those things and calculate usage
>> from the service in order to determine if the request claim is valid or
>> not. There are patches up for this work, and reviews are always welcome [4].
>> >
>> > Note that we haven't released oslo.limit yet, but once the basic
>> enforcement described above is implemented we will. Then service

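The claim-check flow described in this thread (pass a project ID, resource
name, and requested quantity; get back whether the claim is valid) can be
sketched with a toy enforcer. This is purely illustrative: the names
`Enforcer`, `enforce`, and the usage callback are assumptions, since the real
oslo.limit interface was still being worked out in the reviews above.

```python
class ProjectOverLimit(Exception):
    pass


class Enforcer:
    def __init__(self, limits, usage_callback):
        # limits: {(project_id, resource): limit}; a None project_id entry
        # acts as the registered (default) limit for that resource.
        self._limits = limits
        self._usage = usage_callback

    def _limit_for(self, project_id, resource):
        # A project override wins; otherwise fall back to the default limit.
        if (project_id, resource) in self._limits:
            return self._limits[(project_id, resource)]
        return self._limits[(None, resource)]

    def enforce(self, project_id, deltas):
        # deltas: {resource: requested quantity}. Raise if the claim would
        # push usage past the effective limit for that project.
        for resource, delta in deltas.items():
            limit = self._limit_for(project_id, resource)
            usage = self._usage(project_id, resource)
            if usage + delta > limit:
                raise ProjectOverLimit(
                    f"{project_id}: {resource} usage {usage} + {delta} "
                    f"exceeds limit {limit}")


# Example: default limit of 10 cores, with project-a overridden to 4.
limits = {(None, "cores"): 10, ("project-a", "cores"): 4}
usage = {("project-a", "cores"): 3}
enforcer = Enforcer(limits, lambda p, r: usage.get((p, r), 0))

enforcer.enforce("project-a", {"cores": 1})      # ok: 3 + 1 <= 4
try:
    enforcer.enforce("project-a", {"cores": 2})  # 3 + 2 > 4
except ProjectOverLimit as exc:
    print(exc)
```

The interface question the thread keeps circling back to is essentially what
shape `enforce()` and the usage callback should take, and which side owns the
usage calculation.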
[openstack-dev] [keystone] No meeting 13 Nov 2018

2018-11-06 Thread Lance Bragstad
Just a reminder that we won't be holding a weekly meeting for keystone next
week due to the OpenStack Summit in Berlin.

Meetings will resume on the 20th of November.

Thanks,

Lance
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Berlin Forum Sessions & Talks

2018-11-06 Thread Lance Bragstad
Hey all,

Here is what's on my radar for keystone-specific sessions and talks next
week:

*Tuesday*
- Change ownership of resources [0]
- Keystone Project Update [1]
- OpenStack Policy 101 [2]
- Keystone Project Onboarding [3]
- Gaps between OpenStack and business logic with Adjutant [4]

*Wednesday*
- Deletion of project and project resources [5]
- Enforcing Quota Consistently with Unified Limits [6]

*Thursday*
- Keystone as an Identity Provider Proxy [7]
- Keystone Operator Feedback [8]

If you know about a keystone-related session that I've missed, please feel
free to follow up. Links to the forum session etherpads are available from
the main wiki [9].


[0]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22785/change-of-ownership-of-resources
[1]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22728/keystone-project-updates
[2]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21977/openstack-policy-101
[3]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22727/keystone-project-onboarding
[4]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22184/bridging-the-gaps-between-openstack-and-business-logic-with-adjutant
[5]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22784/deletion-of-project-and-project-resources
[6]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22557/enforcing-quota-consistently-with-unified-limits
[7]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22791/keystone-as-an-identity-provider-proxy
[8]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22792/keystone-operator-feedback
[9] https://wiki.openstack.org/wiki/Forum/Berlin2018#Thursday.2C_November_15
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Lance Bragstad
On Wed, Oct 24, 2018 at 2:49 PM Jay Pipes  wrote:

> On 10/24/2018 02:57 PM, Matt Riedemann wrote:
> > On 10/24/2018 10:10 AM, Jay Pipes wrote:
> >> I'd like to propose deprecating this API and getting rid of this
> >> functionality since it conflicts with the new Keystone /limits
> >> endpoint, is highly coupled with RAX's turnstile middleware and I
> >> can't seem to find anyone who has ever used it. Deprecating this API
> >> and functionality would make the transition to a saner quota
> >> management system much easier and more straightforward.
> >
> > I was trying to do this before it was cool:
> >
> > https://review.openstack.org/#/c/411035/
> >
> > I think it was the Pike PTG in ATL where people said, "meh, let's just
> > wait for unified limits from keystone and let this rot on the vine".
> >
> > I'd be happy to restore and update that spec.
>
> ++
>
> I think partly things have stalled out because maybe each side (keystone
> + nova) think the other is working on something but isn't?
>

I have a Post-it on my monitor to follow up with what we talked about at
the PTG.

AFAIK, the next steps were to use the examples we went through and apply
them to nova [0] using oslo.limit. We were hoping this would do two things.
First, it would expose any remaining gaps we have in oslo.limit that need
to get closed before other services start using the library. Second, we
could iterate on the example in gerrit as a nova review, making it
easier to merge when it's working.

Is that still the case, and if so, how can I help?

[0] https://gist.github.com/lbragstad/69d28dca8adfa689c00b272d6db8bde7

>
> I'm currently working on cleaning up the quota system and would be happy
> to deprecate the os-quota-classes API along with the patch series that
> does that cleanup.
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Lance Bragstad
On Wed, Oct 24, 2018 at 2:49 PM Jay Pipes  wrote:

> On 10/24/2018 02:57 PM, Matt Riedemann wrote:
> > On 10/24/2018 10:10 AM, Jay Pipes wrote:
> >> I'd like to propose deprecating this API and getting rid of this
> >> functionality since it conflicts with the new Keystone /limits
> >> endpoint, is highly coupled with RAX's turnstile middleware and I
> >> can't seem to find anyone who has ever used it. Deprecating this API
> >> and functionality would make the transition to a saner quota
> >> management system much easier and more straightforward.
> >
> > I was trying to do this before it was cool:
> >
> > https://review.openstack.org/#/c/411035/
> >
> > I think it was the Pike PTG in ATL where people said, "meh, let's just
> > wait for unified limits from keystone and let this rot on the vine".
> >
> > I'd be happy to restore and update that spec.
>
> ++
>
> I think partly things have stalled out because maybe each side (keystone
> + nova) think the other is working on something but isn't?
>

I have a Post-it on my monitor to follow up with what we talked about at
the PTG.

AFAIK, the next steps were to use the examples we went through and apply
them to nova [0] using oslo.limit. We were hoping this would do two things.
First, it would expose any remaining gaps we have in oslo.limit that need
to get closed before other services start using the library. Second, we
could iterate on the example in gerrit as a nova review, making it
easier to merge when it's working.

Is that still the case, and if so, how can I help?

[0] https://gist.github.com/lbragstad/69d28dca8adfa689c00b272d6db8bde7

>
> I'm currently working on cleaning up the quota system and would be happy
> to deprecate the os-quota-classes API along with the patch series that
> does that cleanup.
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Pike: Keystone setup problem (in Docker container...)

2018-10-23 Thread Lance Bragstad
Add the ML back into the thread.

On Tue, Oct 23, 2018 at 5:13 AM Lance Bragstad  wrote:

>
> On Tue, Oct 9, 2018 at 9:49 AM Matthias Leopold <
> matthias.leop...@meduniwien.ac.at> wrote:
>
>> Hi,
>>
>> I'm trying to setup Cinder as a standalone service with Docker using the
>> blockbox system (contrib/blockbox in the Cinder source distribution). I
>> was inspired by this manual:
>> https://thenewstack.io/deploying-cinder-stand-alone-storage-service/.
>>
>> This works quite well with Cinder’s noauth option as described above.
>> Now I want/have to add Keystone to the mix. I built the Keystone image
>> and added a custom init script to initialize Keystone when fired up and
>> a certain environment is set. For this I followed instructions from
>> https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html
>> .
>>
>> This works to the point where "keystone-manage bootstrap" is called.
>> This fails with:
>>
>> CRITICAL keystone [req-45247f41-0e4f-4cc7-8bb8-60c3793489b9 - - - - -]
>> Unhandled error: TypeError: unpackb() got an unexpected keyword argument
>> 'raw'
>>
>
> This feels like a dependency issue. Are you able to share more of the
> trace? The method in question, unpackb(), is part of msgpack, a
> library that keystone uses to serialize token payloads before encrypting
> them.
>
> It could be that your version of msgpack isn't up-to-date.
>
>
>>
>> Can anybody tell me what's wrong?
>>
>> Of course my setup is rather special so I'll mention some more details:
>> Docker host system: CentOS 7
>> Docker version: 18.06.1-ce, build e68fc7a
>> Keystone branch: stable/pike
>> Platform (for docker images): centos:7
>>
>> I additionally rolled the python2-pyasn1 package into the Keystone
>> image, but that didn't help. The "keystone" database in the "mariadb"
>> container is initialized and accessible from the "keystone" container; I
>> checked that.
>>
>> I know this is a rather exotic case, but maybe someone recognizes the
>> obvious problem. I'm not an OpenStack expert (want to use Cinder for
>> oVirt).
>>
>> thx
>> Matthias
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
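Regarding the msgpack suspicion above: the `raw` keyword argument to
`unpackb()` appeared around msgpack 0.5.0 (an assumption worth verifying
against the msgpack changelog), so a dependency pinned below that will raise
exactly this TypeError. A quick version gate, as a sketch:

```python
# The 'raw' keyword was added to msgpack's unpackb() around msgpack 0.5.0
# (an assumption; check the msgpack changelog for the exact release).
def supports_raw_kwarg(version, minimum=(0, 5, 0)):
    """Return True if this msgpack version should accept unpackb(raw=...)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= minimum


# To find the installed version inside the container:
#   python -c "import msgpack; print(msgpack.version)"
print(supports_raw_kwarg("0.4.8"))  # False -> upgrade needed
print(supports_raw_kwarg("0.5.6"))  # True
```

If the check comes back False, upgrading msgpack in the image should clear
the bootstrap error.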
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-10-16 Thread Lance Bragstad
It happened. Documentation is hot off the press and ready for you to read
[0]. As always, feel free to raise concerns, comments, or questions any
time.

I appreciate everyone's help in nailing this down.

[0]
https://docs.openstack.org/oslo.policy/latest/user/usage.html#naming-policies

On Sat, Oct 13, 2018 at 6:07 AM Ghanshyam Mann 
wrote:

>   On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad <
> lbrags...@gmail.com> wrote 
>  > Sending a follow up here quick.
>  > The reviewers actively participating in [0] are nearing a conclusion.
> Ultimately, the convention is going to be:
>  >
>  
> <service-type>:[<component>:]<resource>[:<subresource>]:<action>[:<subaction>]
>  > Details about what that actually means can be found in the review [0].
> Each piece is denoted as being required or optional, along with examples. I
> think this gives us a pretty good starting place, and the syntax is
> flexible enough to support almost every policy naming convention we've
> stumbled across.
>  > Now is the time if you have any final input or feedback. Thanks for
> sticking with the discussion.
>
> Thanks Lance for working on this. Current version lgtm. I would like to
> see some operators' feedback also, if this standard policy name format is
> clear and easily understandable.
>
> -gmann
>
>  > Lance
>  > [0] https://review.openstack.org/#/c/606214/
>  >
>  > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad 
> wrote:
>  >
>  > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann 
> wrote:
>  >   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad <
> lbrags...@gmail.com> wrote 
>  >   >
>  >   > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
>  >   > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  >   >   wrote:
>  >   >  >
>  >   >  > Ideally I would like to see it in the form of least specific to
> most specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
>  >   >  >
>  >   >  > I propose we consider
>  >   >  >
>  >   >  > <service-type>:<resource>:<action>[:<subaction>]
>  >   >  >
>  >   >  > Example for keystone (note, action names below are strictly
> examples I am fine with whatever form those actions take):
>  >   >  > identity:projects:create
>  >   >  > identity:projects:delete
>  >   >  > identity:projects:list
>  >   >  > identity:projects:get
>  >   >  >
>  >   >  > It keeps things simple and consistent when you're looking
> through overrides / defaults.
>  >   >  > --Morgan
>  >   >  +1 -- I think the ordering, if `resource` comes before
>  >   >  `action|subaction`, will be cleaner.
>  >   >
>  >   > ++
>  >   > These are excellent points. I especially like being able to omit
> the convention about plurality. Furthermore, I'd like to add that I think
> we should make the resource singular (e.g., project instead or projects).
> For example:
>  >   > compute:server:list
>  >   >
> compute:server:update
> compute:server:create
> compute:server:delete
> compute:server:action:reboot
> compute:server:action:confirm_resize (or confirm-resize)
>  >
>  >  Do we need the "action" word there? I think the action name itself should
> convey the operation. IMO the notation below without the "action" word looks
> clear enough. What do you say?
>  >
>  >  compute:server:reboot
>  >  compute:server:confirm_resize
>  >
>  > I agree. I simplified this in the current version up for review.
>  >  -gmann
>  >
>  >   >
>  >   > Otherwise, someone might mistake compute:servers:get as "list".
> This is ultra-nit-picky, but something I thought of when seeing the usage
> of "get_all" in policy names in favor of "list."
>  >   > In summary, the new convention based on the most recent feedback
> should be:
>  >   > <service-type>:<resource>:<action>[:<subaction>]
>  >   > Rules:
>  >   > - service-type is always defined in the service types authority
>  >   > - resources are always singular
>  >   > Thanks to all for sticking through this tedious discussion. I
> appreciate it.
>  >   >  /R
>  >   >
>  >   >  Harry
>  >   >  >
>  >   >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad <
> lbrags...@gmail.com> wrote:
>  >   >  >>
>  >   >  >> Bumping this thread again and proposing two conventions based
> on the discussion here. I propose we decide on one of the two following
> conventions:
>  >   >  >>
>  >   >

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-16 Thread Lance Bragstad
It happened. Documentation is hot off the press and ready for you to read
[0]. As always, feel free to raise concerns, comments, or questions any
time.

I appreciate everyone's help in nailing this down.

[0]
https://docs.openstack.org/oslo.policy/latest/user/usage.html#naming-policies

On Sat, Oct 13, 2018 at 6:07 AM Ghanshyam Mann 
wrote:

>   On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad <
> lbrags...@gmail.com> wrote 
>  > Sending a follow up here quick.
>  > The reviewers actively participating in [0] are nearing a conclusion.
> Ultimately, the convention is going to be:
>  >
>  
> <service-type>:[<component>:]<resource>[:<subresource>]:<action>[:<subaction>]
>  > Details about what that actually means can be found in the review [0].
> Each piece is denoted as being required or optional, along with examples. I
> think this gives us a pretty good starting place, and the syntax is
> flexible enough to support almost every policy naming convention we've
> stumbled across.
>  > Now is the time if you have any final input or feedback. Thanks for
> sticking with the discussion.
>
> Thanks Lance for working on this. Current version lgtm. I would like to
> see some operators' feedback also, if this standard policy name format is
> clear and easily understandable.
>
> -gmann
>
>  > Lance
>  > [0] https://review.openstack.org/#/c/606214/
>  >
>  > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad 
> wrote:
>  >
>  > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann 
> wrote:
>  >   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad <
> lbrags...@gmail.com> wrote 
>  >   >
>  >   > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
>  >   > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  >   >   wrote:
>  >   >  >
>  >   >  > Ideally I would like to see it in the form of least specific to
> most specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
>  >   >  >
>  >   >  > I propose we consider
>  >   >  >
>  >   >  > <service-type>:<resource>:<action>[:<subaction>]
>  >   >  >
>  >   >  > Example for keystone (note, action names below are strictly
> examples I am fine with whatever form those actions take):
>  >   >  > identity:projects:create
>  >   >  > identity:projects:delete
>  >   >  > identity:projects:list
>  >   >  > identity:projects:get
>  >   >  >
>  >   >  > It keeps things simple and consistent when you're looking
> through overrides / defaults.
>  >   >  > --Morgan
>  >   >  +1 -- I think the ordering, if `resource` comes before
>  >   >  `action|subaction`, will be cleaner.
>  >   >
>  >   > ++
>  >   > These are excellent points. I especially like being able to omit
> the convention about plurality. Furthermore, I'd like to add that I think
> we should make the resource singular (e.g., project instead of projects).
> For example:
>  >   > compute:server:list
>  >   >
> compute:server:update
> compute:server:create
> compute:server:delete
> compute:server:action:reboot
> compute:server:action:confirm_resize (or confirm-resize)
>  >
>  >  Do we need the "action" word there? I think the action name itself should
> convey the operation. IMO the notation below without the "action" word looks
> clear enough. What do you say?
>  >
>  >  compute:server:reboot
>  >  compute:server:confirm_resize
>  >
>  > I agree. I simplified this in the current version up for review.
>  >  -gmann
>  >
>  >   >
>  >   > Otherwise, someone might mistake compute:servers:get as "list".
> This is ultra-nit-picky, but something I thought of when seeing the usage
> of "get_all" in policy names in favor of "list."
>  >   > In summary, the new convention based on the most recent feedback
> should be:
>  >   > <service-type>:<resource>:<action>[:<subaction>]
>  >   > Rules:
>  >   > - service-type is always defined in the service types authority
>  >   > - resources are always singular
>  >   > Thanks to all for sticking through this tedious discussion. I
> appreciate it.
>  >   >  /R
>  >   >
>  >   >  Harry
>  >   >  >
>  >   >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad <
> lbrags...@gmail.com> wrote:
>  >   >  >>
>  >   >  >> Bumping this thread again and proposing two conventions based
> on the discussion here. I propose we decide on one of the two following
> conventions:
>  >   >  >>
>  >   >

Re: [openstack-dev] [glance][upgrade-checkers] Question about glance rocky upgrade release note

2018-10-15 Thread Lance Bragstad
I haven't implemented any checks, but I did take a shot at laying down the
scaffolding for implementing upgrade checks in glance [0].

Anyone who is more familiar with glance should be able to build off of that
commit by implementing specific checks in glance/cmd/status.py

[0] https://review.openstack.org/#/c/610661/
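For reference, the core of the config check discussed later in this thread
could be as small as the sketch below. The function name and message are
hypothetical; real checks would hang off the scaffolding in
glance/cmd/status.py mentioned above.

```python
import configparser
import tempfile


def check_image_import_option(conf_path):
    """Return a warning string if the removed enable_image_import is set.

    A Queens glance-api.conf with enable_image_import = False disabled
    image import on purpose; the option was removed in Rocky, where the
    import methods are enabled by default, so the operator may need to
    adjust their configuration.
    """
    parser = configparser.ConfigParser()
    parser.read(conf_path)
    value = parser.get("DEFAULT", "enable_image_import", fallback=None)
    if value is None:
        return None  # option absent: nothing to do
    return ("enable_image_import = %s found; this option was removed in "
            "Rocky and may require adjusting import configuration" % value)


# Demo against a Queens-style config snippet.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[DEFAULT]\nenable_image_import = False\n")
print(check_image_import_option(f.name))
```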

On Mon, Oct 15, 2018 at 10:49 AM Brian Rosmaita 
wrote:

> On 9/24/18 3:13 PM, Matt Riedemann wrote:
> > On 9/24/2018 2:06 PM, Matt Riedemann wrote:
> >> Are there more specific docs about how to configure the 'image import'
> >> feature so that I can be sure I'm careful? In other words, are there
> >> specific things a "glance-status upgrade check" check could look at
> >> and say, "your image import configuration is broken, here are details
> >> on how you should do this"
> Apologies for this delayed reply.
> > I guess this answers the question about docs:
> >
> >
> https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html
>
> Yes, you found the correct docs.  They could probably use a revision to
> eliminate some of the references to Pike and Queens, but I think the
> content is accurate with respect to proper configuration of image import.
> > Would a basic upgrade check be such that if glance-api.conf contains
> > enable_image_import=False, you're going to have issues since that option
> > is removed in Rocky?
>
> I completely missed this question when I saw this email a few weeks ago.
>
> Yes, if a Queens glance-api.conf has enable_image_import=False, then it
> was disabled on purpose since the default in Queens was True.  Given the
> Rocky defaults for import-related config (namely, all import_methods are
> enabled), the operator may need to make some kind of adjustment.
>
> As a side point, although the web-download import method is enabled by
> default in Rocky, it has whitelist/blacklist configurability to restrict
> what kind of URIs end-users may access.  By default, end users are only
> able to access URIs using the http or https scheme on the standard ports.
>
> Thanks for working on the upgrade-checker goal for Glance!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-15 Thread Lance Bragstad
On Mon, Oct 15, 2018 at 8:52 AM Ed Leafe  wrote:

> On Oct 15, 2018, at 7:40 AM, Chris Dent  wrote:
> >
> > I'd like some input from the community on how we'd like this to go.
>
> I would say it depends on the long-term plans for paste. Are we planning
> on weaning ourselves off of paste, and simply need to maintain it until
> that can be completed, or are we planning on encouraging its use?
>
>
Keystone started doing this last release and we're just finishing it up
now. The removal of keystone's v2.0 API and our hand-rolled API dispatching
ended up being the perfect storm for us to say "let's just remove paste
entirely and migrate to something supported".

It helped that we stacked a couple of long-standing work items behind the
paste removal, but it was a ton of work [0]. I think Morgan was going to
put together a summary of how we approached the removal. If the long-term
goal is to help projects move away from Paste, then we can try and share
some of the knowledge we have.

[0] https://twitter.com/MdrnStm/status/1050519620724056065


>
> -- Ed Leafe
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-10-12 Thread Lance Bragstad
Sending a follow up here quick.

The reviewers actively participating in [0] are nearing a conclusion.
Ultimately, the convention is going to be:

  <service-type>:[<component>:]<resource>[:<subresource>]:<action>[:<subaction>]

Details about what that actually means can be found in the review [0]. Each
piece is denoted as being required or optional, along with examples. I
think this gives us a pretty good starting place, and the syntax is
flexible enough to support almost every policy naming convention we've
stumbled across.

Now is the time if you have any final input or feedback. Thanks for
sticking with the discussion.

Lance

[0] https://review.openstack.org/#/c/606214/
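As a rough illustration of the shape this convention produces (not the
authoritative definition, which lives in the review [0]), a name has three
mandatory pieces plus up to three optional ones, so a sketch of a validator
might accept three to six lowercase, colon-separated segments:

```python
import re

# Segments per the proposed convention,
#   <service-type>:[<component>:]<resource>[:<subresource>]:<action>[:<subaction>]
# This regex is a simplification for illustration: it checks the segment
# count and character set, not which optional slot a segment fills.
_SEGMENT = r"[a-z0-9_-]+"
_POLICY_NAME = re.compile(rf"^{_SEGMENT}(?::{_SEGMENT}){{2,5}}$")


def is_valid_policy_name(name):
    return bool(_POLICY_NAME.match(name))


print(is_valid_policy_name("identity:projects:create"))              # True
print(is_valid_policy_name("compute:server:action:confirm_resize"))  # True
print(is_valid_policy_name("identity:projects"))                     # False (no action)
```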


On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad  wrote:

>
> On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann 
> wrote:
>
>>   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad <
>> lbrags...@gmail.com> wrote 
>>  >
>>  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
>> wrote:
>>  > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>>  >   wrote:
>>  >  >
>>  >  > Ideally I would like to see it in the form of least specific to
>> most specific. But more importantly in a way that there is no additional
>> delimiters between the service type and the resource. Finally, I do not
>> like the change of plurality depending on action type.
>>  >  >
>>  >  > I propose we consider
>>  >  >
>>  >  > <service-type>:<resource>:<action>[:<subaction>]
>>  >  >
>>  >  > Example for keystone (note, action names below are strictly
>> examples I am fine with whatever form those actions take):
>>  >  > identity:projects:create
>>  >  > identity:projects:delete
>>  >  > identity:projects:list
>>  >  > identity:projects:get
>>  >  >
>>  >  > It keeps things simple and consistent when you're looking through
>> overrides / defaults.
>>  >  > --Morgan
>>  >  +1 -- I think the ordering, if `resource` comes before
>>  >  `action|subaction`, will be cleaner.
>>  >
>>  > ++
>>  > These are excellent points. I especially like being able to omit the
>> convention about plurality. Furthermore, I'd like to add that I think we
> should make the resource singular (e.g., project instead of projects). For
>> example:
>>  > compute:server:list
>>  >
>> compute:server:update
>> compute:server:create
>> compute:server:delete
>> compute:server:action:reboot
>> compute:server:action:confirm_resize (or confirm-resize)
>>
>> Do we need the "action" word there? I think the action name itself should
>> convey the operation. IMO the notation below without the "action" word looks
>> clear enough. What do you say?
>>
>> compute:server:reboot
>> compute:server:confirm_resize
>>
>
> I agree. I simplified this in the current version up for review.
>
>
>>
>> -gmann
>>
>>  >
>>  > Otherwise, someone might mistake compute:servers:get as "list". This
>> is ultra-nit-picky, but something I thought of when seeing the usage of
>> "get_all" in policy names in favor of "list."
>>  > In summary, the new convention based on the most recent feedback
>> should be:
>>  > <service-type>:<resource>:<action>[:<subaction>]
>>  > Rules:
>>  > - service-type is always defined in the service types authority
>>  > - resources are always singular
>>  > Thanks to all for sticking through this tedious discussion. I
>> appreciate it.
>>  >  /R
>>  >
>>  >  Harry
>>  >  >
>>  >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
>> wrote:
>>  >  >>
>>  >  >> Bumping this thread again and proposing two conventions based on
>> the discussion here. I propose we decide on one of the two following
>> conventions:
>>  >  >>
>>  >  >> <service-type>:<resource>:<action>
>>  >  >>
>>  >  >> or
>>  >  >>
>>  >  >> <service-type>:<action>_<resource>
>>  >  >>
>>  >  >> Where <service-type> is the corresponding service type of the
>> project [0], and <action> is either create, get, list, update, or delete. I
>> think decoupling the method from the policy name should aid in consistency,
>> regardless of the underlying implementation. The HTTP method specifics can
>> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>>  >  >>
>>  >  >> I think the plurality of the resource should default to what makes
>> sense for the operation being carried out (e.g., list:foobars,
>> create:foobar).
>>  >  >>
>>  >  >> I don't mind the first one because it's clear about what the
>> delimi

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-12 Thread Lance Bragstad
Sending a follow up here quick.

The reviewers actively participating in [0] are nearing a conclusion.
Ultimately, the convention is going to be:

  <service-type>:[<component>:]<resource>[:<subresource>]:<action>[:<subaction>]

Details about what that actually means can be found in the review [0]. Each
piece is denoted as being required or optional, along with examples. I
think this gives us a pretty good starting place, and the syntax is
flexible enough to support almost every policy naming convention we've
stumbled across.

Now is the time if you have any final input or feedback. Thanks for
sticking with the discussion.

Lance

[0] https://review.openstack.org/#/c/606214/
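As a purely illustrative sketch (not part of oslo.policy or the review above), a name following the `<service-type>:<resource>:<action>[:<sub-action>]` shape discussed in this thread could be checked like this; the segment spelling rules are assumptions for illustration, not part of the convention under review:

```python
import re

# Hypothetical checker for the naming shape discussed in this thread:
#   <service-type>:<resource>:<action>[:<sub-action>]
POLICY_NAME = re.compile(
    r"^[a-z][a-z0-9-]*"          # service-type, e.g. "compute", "identity"
    r":[a-z][a-z0-9_-]*"         # singular resource, e.g. "server"
    r":[a-z][a-z0-9_-]*"         # action, e.g. "create", "reboot"
    r"(?::[a-z][a-z0-9_-]*)?$"   # optional sub-action
)

def is_conventional(name: str) -> bool:
    """Return True if the policy name matches the proposed shape."""
    return POLICY_NAME.match(name) is not None

# Names from the thread pass; a legacy nova-style name does not.
assert is_conventional("identity:project:create")
assert is_conventional("compute:server:confirm_resize")
assert not is_conventional("os_compute_api:servers:index")
```

The optional fourth segment accommodates the `compute:server:action:reboot` style examples given earlier in the thread.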


On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad  wrote:

>
> On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann 
> wrote:
>
>>   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad <
>> lbrags...@gmail.com> wrote 
>>  >
>>  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
>> wrote:
>>  > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>>  >   wrote:
>>  >  >
>>  >  > Ideally I would like to see it in the form of least specific to
>> most specific. But more importantly in a way that there is no additional
>> delimiters between the service type and the resource. Finally, I do not
>> like the change of plurality depending on action type.
>>  >  >
>>  >  > I propose we consider
>>  >  >
>>  >  > <service-type>:<resource>:<action>[:<sub-action>]
>>  >  >
>>  >  > Example for keystone (note, action names below are strictly
>> examples I am fine with whatever form those actions take):
>>  >  > identity:projects:create
>>  >  > identity:projects:delete
>>  >  > identity:projects:list
>>  >  > identity:projects:get
>>  >  >
>>  >  > It keeps things simple and consistent when you're looking through
>> overrides / defaults.
>>  >  > --Morgan
>>  >  +1 -- I think the ordering will be cleaner if `resource` comes
>>  >  before `action|subaction`.
>>  >
>>  > ++
>>  > These are excellent points. I especially like being able to omit the
>> convention about plurality. Furthermore, I'd like to add that I think we
>> should make the resource singular (e.g., project instead of projects). For
>> example:
>>  > compute:server:list
>>  >
>> compute:server:update
>> compute:server:create
>> compute:server:delete
>> compute:server:action:reboot
>> compute:server:action:confirm_resize (or confirm-resize)
>>
>> Do we need the "action" word there? I think the action name itself should
>> convey the operation. IMO the notation below, without the "action" word,
>> looks clear enough. What do you say?
>>
>> compute:server:reboot
>> compute:server:confirm_resize
>>
>
> I agree. I simplified this in the current version up for review.
>
>
>>
>> -gmann
>>
>>  >
>>  > Otherwise, someone might mistake compute:servers:get as "list". This
>> is ultra-nit-picky, but something I thought of when seeing the usage of
>> "get_all" in policy names in favor of "list."
>>  > In summary, the new convention based on the most recent feedback
>> should be:
>>  > <service-type>:<resource>:<action>[:<sub-action>]
>>  > Rules:
>>  > - service-type is always defined in the service types authority
>>  > - resources are always singular
>>  > Thanks to all for sticking through this tedious discussion. I
>> appreciate it.
>>  >  /R
>>  >
>>  >  Harry
>>  >  >
>>  >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
>> wrote:
>>  >  >>
>>  >  >> Bumping this thread again and proposing two conventions based on
>> the discussion here. I propose we decide on one of the two following
>> conventions:
>>  >  >>
>>  >  >> <service-type>:<action>:<resource>
>>  >  >>
>>  >  >> or
>>  >  >>
>>  >  >> <service-type>:<action>_<resource>
>>  >  >>
>>  >  >> Where <service-type> is the corresponding service type of the
>> project [0], and <action> is either create, get, list, update, or delete. I
>> think decoupling the method from the policy name should aid in consistency,
>> regardless of the underlying implementation. The HTTP method specifics can
>> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>>  >  >>
>>  >  >> I think the plurality of the resource should default to what makes
>> sense for the operation being carried out (e.g., list:foobars,
>> create:foobar).
>>  >  >>
>>  >  >> I don't mind the first one because it's clear about what the
>> delimi

Re: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0

2018-10-09 Thread Lance Bragstad
On Tue, Oct 9, 2018 at 10:56 AM Doug Hellmann  wrote:

> Matthew Thode  writes:
>
> > On 18-10-09 11:12:30, Doug Hellmann wrote:
> >> Matthew Thode  writes:
> >>
> >> > several projects have had problems with the new release, some have
> ways
> >> > of working around it, and some do not.  I'm sending this just to raise
> >> > the issue and allow a place to discuss solutions.
> >> >
> >> > Currently there is a review proposed to blacklist 9.0.0, but if this
> is
> >> > going to still be an issue somehow in further releases we may need
> >> > another solution.
> >> >
> >> > https://review.openstack.org/#/c/608835/
> >> >
> >> > --
> >> > Matthew Thode (prometheanfire)
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> Do you have links to the failure logs or bug reports or something? If I
> >> wanted to help I wouldn't even know where to start.
> >>
> >
> >
> http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz
>
> These failures look like we should add a proper API to oslo.messaging to
> set the notification and rpc backends for testing. The configuration
> options are *not* part of the API of the library.
>
> There is already an oslo_messaging.conffixture module with a fixture
> class, but it looks like it defaults to rabbit. Maybe someone wants to
> propose a patch to make that a parameter to the constructor?
>
> >
> http://logs.openstack.org/21/607521/2/check/cross-glance-py35/e2161d7/testr_results.html.gz
>
> These failures should be fixed by releasing the patch that Mehdi
> provided that ensures there is a valid default transport configured.
>
> >
> http://logs.openstack.org/21/607521/2/check/cross-keystone-py35/908a1c2/testr_results.html.gz
>
> Lance has already described these as mocking implementation details of
> the library. I expect we'll need someone with keystone experience to
> work out what the best solution is to do there.
>

So - I think it's apparent there are two things to do to fix this for
keystone, which could be true for other projects as well.

To recap, keystone has tests to assert the plumbing to send a notification
was called, or not called, depending on configuration options in keystone
(we allow operators to opt out of noisy notifications, like authenticate).

As noted earlier, we shouldn't be making these assertions using an internal
method of oslo.messaging. I have a patch up to refactor that to use the
public API instead [0]. Even with that fix [0], the tests mentioned by Matt
still fail because there isn't a sane default. I have a separate patch up
to make keystone's tests work by supplying the default introduced in
version 9.0.1 [1], overriding the configuration option for transport_url.
This got a bit hairy in a circular-dependency kind of way because
get_notification_transport() [2] is what registers the default options,
which is broken. I have a patch to keystone [3] showing how I worked around
this, which might not be needed if we allow the constructor to accept an
override for transport_url.

[0] https://review.openstack.org/#/c/609072/
[1] https://review.openstack.org/#/c/608196/3/oslo_messaging/transport.py
[2]
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n167
[3] https://review.openstack.org/#/c/609106/
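The constructor parameter Doug suggested could look something like the following stdlib-only sketch. This is not oslo.messaging's actual ConfFixture API; the class name, the `fake://` default, and the dict-based config are all illustrative assumptions:

```python
class ConfFixtureSketch:
    """Illustrative test fixture: the transport to use is a constructor
    argument with a safe in-memory default, instead of being hard-coded
    to a rabbit URL inside the fixture."""

    def __init__(self, conf, transport_url="fake://"):
        self.conf = conf
        self.transport_url = transport_url
        self._original = None

    def setUp(self):
        # Save whatever the test environment had, then override it.
        self._original = self.conf.get("transport_url")
        self.conf["transport_url"] = self.transport_url

    def cleanUp(self):
        # Restore the saved value so tests stay isolated.
        self.conf["transport_url"] = self._original

# Usage: tests that need a different backend just pass it in.
conf = {"transport_url": "rabbit://user:pass@broker//"}
fx = ConfFixtureSketch(conf)  # or ConfFixtureSketch(conf, "kafka://")
fx.setUp()
assert conf["transport_url"] == "fake://"
fx.cleanUp()
assert conf["transport_url"] == "rabbit://user:pass@broker//"
```

Making the default an in-memory transport avoids the circular registration problem described above, since a test never has to call the real transport setup just to get a usable default.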


>
> >
> > --
> > Matthew Thode (prometheanfire)
> >


Re: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0

2018-10-09 Thread Lance Bragstad
On Tue, Oct 9, 2018 at 10:31 AM Ben Nemec  wrote:

>
>
> On 10/9/18 9:06 AM, Lance Bragstad wrote:
> > Keystone is failing because it's missing a fix from oslo.messaging [0].
> > That said, keystone is also relying on an internal implementation detail
> > in oslo.messaging by mocking it in tests [1]. The notification work has
> > been around in keystone for a *long* time, but it's apparent that we
> > should revisit these tests to make sure we aren't testing something that
> > is already tested by oslo.messaging if we're mocking internal
> > implementation details of a library.
>
> This is actually the same problem Cinder and Glance had, it's just being
> hidden because there is an exception handler in Keystone that buried the
> original exception message in log output. 9.0.1 will get Keystone
> working too.
>
> But mocking library internals is still naughty and you should stop that.
> :-P
>

Agreed. I have a note to investigate and see if I can rip those bits out or
rewrite them.


>
> >
> > Regardless, blacklisting version 9.0.0 will work for keystone, but we
> > can work around it another way by either rewriting the tests to not care
> > about oslo.messaging specifics, or removing them if they're obsolete.
> >
> > [0] https://review.openstack.org/#/c/608196/
> > [1]
> >
> https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/common/test_notifications.py#n1343
> >
> > On Mon, Oct 8, 2018 at 10:59 PM Matthew Thode <prometheanf...@gentoo.org> wrote:
> >
> > several projects have had problems with the new release, some have
> ways
> > of working around it, and some do not.  I'm sending this just to
> raise
> > the issue and allow a place to discuss solutions.
> >
> > Currently there is a review proposed to blacklist 9.0.0, but if this
> is
> > going to still be an issue somehow in further releases we may need
> > another solution.
> >
> > https://review.openstack.org/#/c/608835/
> >
> > --
> > Matthew Thode (prometheanfire)
> >


Re: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0

2018-10-09 Thread Lance Bragstad
Keystone is failing because it's missing a fix from oslo.messaging [0].
That said, keystone is also relying on an internal implementation detail in
oslo.messaging by mocking it in tests [1]. The notification work has been
around in keystone for a *long* time, but it's apparent that we should
revisit these tests to make sure we aren't testing something that is
already tested by oslo.messaging if we're mocking internal implementation
details of a library.

Regardless, blacklisting version 9.0.0 will work for keystone, but we can
work around it another way by either rewriting the tests to not care about
oslo.messaging specifics, or removing them if they're obsolete.

[0] https://review.openstack.org/#/c/608196/
[1]
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/common/test_notifications.py#n1343
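To illustrate the difference in a self-contained way (the classes below are hypothetical stand-ins, not keystone or oslo.messaging code), compare patching a library-internal method with asserting through the public seam the application controls:

```python
import unittest
from unittest import mock

class Notifier:
    """Stand-in for a library: only ``notify`` is public API;
    ``_send`` is an internal detail that may change between releases."""

    def notify(self, event, payload):
        self._send(event, payload)

    def _send(self, event, payload):
        pass  # imagine this talks to a message bus

class Service:
    """Stand-in for application code that emits notifications."""

    def __init__(self, notifier):
        self.notifier = notifier

    def create_thing(self):
        self.notifier.notify("thing.created", {"id": 1})

class TestNotifications(unittest.TestCase):
    def test_fragile(self):
        # Fragile: patches the library's *internal* method.
        notifier = Notifier()
        with mock.patch.object(notifier, "_send") as send:
            Service(notifier).create_thing()
            send.assert_called_once_with("thing.created", {"id": 1})

    def test_robust(self):
        # Robust: replace the whole collaborator at the seam the
        # application owns, asserting only against the public API.
        notifier = mock.create_autospec(Notifier, instance=True)
        Service(notifier).create_thing()
        notifier.notify.assert_called_once_with("thing.created", {"id": 1})
```

Both tests pass today, but if the library renames or re-signatures `_send`, `test_fragile` breaks even though the application still behaves correctly; `test_robust` depends only on the documented `notify()` contract.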

On Mon, Oct 8, 2018 at 10:59 PM Matthew Thode 
wrote:

> several projects have had problems with the new release, some have ways
> of working around it, and some do not.  I'm sending this just to raise
> the issue and allow a place to discuss solutions.
>
> Currently there is a review proposed to blacklist 9.0.0, but if this is
> going to still be an issue somehow in further releases we may need
> another solution.
>
> https://review.openstack.org/#/c/608835/
>
> --
> Matthew Thode (prometheanfire)


Re: [openstack-dev] [tc] assigning new liaisons to projects

2018-10-08 Thread Lance Bragstad
On Mon, Oct 8, 2018 at 9:27 AM Doug Hellmann  wrote:

> TC members,
>
> Since we are starting a new term, and have several new members, we need
> to decide how we want to rotate the liaisons attached to each our
> project teams, SIGs, and working groups [1].
>
> Last term we went through a period of volunteer sign-up and then I
> randomly assigned folks to slots to fill out the roster evenly. During
> the retrospective we talked a bit about how to ensure we had an
> objective perspective for each team by not having PTLs sign up for their
> own teams, but I don't think we settled on that as a hard rule.
>
> I think the easiest and fairest (to new members) way to manage the list
> will be to wipe it and follow the same process we did last time. If you
> agree, I will update the page this week and we can start collecting
> volunteers over the next week or so.
>

+1

From the perspective of someone new, it'll be nice to go through all the
motions.


>
> Doug
>
> [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker
>


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-08 Thread Lance Bragstad
On Mon, Oct 8, 2018 at 9:08 AM Doug Hellmann  wrote:

> Based on the conversation in the other branch of this thread, I have
> filed [1] to start monthly meetings on November 1 at 1400 UTC. It may
> take a while before that actually shows up on the calendar, because it
> required adding a feature to yaml2ical [2].
>
> We talked about using email to add items to the agenda, but I realized
> that's going to complicate the coordination between chair and vice
> chair, so I would like for us to use the wiki [3] to suggest agenda
> items. We will still rely on email to the openstack-dev or
> openstack-discuss list to set the formal agenda before the actual
> meeting. Let me know if you foresee any issues with that plan.
>
>
++ I think the wiki is a good alternative to using email. Those times also
work for me.


> Doug
>
> [1] https://review.openstack.org/608682
> [2] https://review.openstack.org/608680
> [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
>


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-10-08 Thread Lance Bragstad
On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann 
wrote:

>   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad <
> lbrags...@gmail.com> wrote 
>  >
>  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
>  > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  >   wrote:
>  >  >
>  >  > Ideally I would like to see it in the form of least specific to most
> specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
>  >  >
>  >  > I propose we consider
>  >  >
>  >  > <service-type>:<resource>:<action>[:<sub-action>]
>  >  >
>  >  > Example for keystone (note, action names below are strictly examples
> I am fine with whatever form those actions take):
>  >  > identity:projects:create
>  >  > identity:projects:delete
>  >  > identity:projects:list
>  >  > identity:projects:get
>  >  >
>  >  > It keeps things simple and consistent when you're looking through
> overrides / defaults.
>  >  > --Morgan
>  >  +1 -- I think the ordering will be cleaner if `resource` comes
>  >  before `action|subaction`.
>  >
>  > ++
>  > These are excellent points. I especially like being able to omit the
> convention about plurality. Furthermore, I'd like to add that I think we
> should make the resource singular (e.g., project instead of projects). For
> example:
>  > compute:server:list
>  >
> compute:server:update
> compute:server:create
> compute:server:delete
> compute:server:action:reboot
> compute:server:action:confirm_resize (or confirm-resize)
>
> Do we need the "action" word there? I think the action name itself should
> convey the operation. IMO the notation below, without the "action" word,
> looks clear enough. What do you say?
>
> compute:server:reboot
> compute:server:confirm_resize
>

I agree. I simplified this in the current version up for review.


>
> -gmann
>
>  >
>  > Otherwise, someone might mistake compute:servers:get as "list". This
> is ultra-nit-picky, but something I thought of when seeing the usage of
> "get_all" in policy names in favor of "list."
>  > In summary, the new convention based on the most recent feedback should
> be:
>  > <service-type>:<resource>:<action>[:<sub-action>]
>  > Rules:
>  > - service-type is always defined in the service types authority
>  > - resources are always singular
>  > Thanks to all for sticking through this tedious discussion. I
> appreciate it.
>  >  /R
>  >
>  >  Harry
>  >  >
>  >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
> wrote:
>  >  >>
>  >  >> Bumping this thread again and proposing two conventions based on
> the discussion here. I propose we decide on one of the two following
> conventions:
>  >  >>
>  >  >> <service-type>:<action>:<resource>
>  >  >>
>  >  >> or
>  >  >>
>  >  >> <service-type>:<action>_<resource>
>  >  >>
>  >  >> Where <service-type> is the corresponding service type of the
> project [0], and <action> is either create, get, list, update, or delete. I
> think decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>  >  >>
>  >  >> I think the plurality of the resource should default to what makes
> sense for the operation being carried out (e.g., list:foobars,
> create:foobar).
>  >  >>
>  >  >> I don't mind the first one because it's clear about what the
> delimiter is and it doesn't look weird when projects have something like:
>  >  >>
>  >  >> :::
>  >  >>
>  >  >> If folks are ok with this, I can start working on some
> documentation that explains the motivation for this. Afterward, we can
> figure out how we want to track this work.
>  >  >>
>  >  >> What color do you want the shed to be?
>  >  >>
>  >  >> [0] https://service-types.openstack.org/service-types.json
>  >  >> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>  >  >>
>  >  >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
>  >  >>>
>  >  >>>
>  >  >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann <
> gm...@ghanshyammann.com> wrote:
>  >  >>>>
>  >  >>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
> j...@johngarbutt.com> wrote 
>

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-08 Thread Lance Bragstad
On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann 
wrote:

>   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad <
> lbrags...@gmail.com> wrote 
>  >
>  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
>  > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  >   wrote:
>  >  >
>  >  > Ideally I would like to see it in the form of least specific to most
> specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
>  >  >
>  >  > I propose we consider
>  >  >
>  >  > <service-type>:<resource>:<action>[:<sub-action>]
>  >  >
>  >  > Example for keystone (note, action names below are strictly examples
> I am fine with whatever form those actions take):
>  >  > identity:projects:create
>  >  > identity:projects:delete
>  >  > identity:projects:list
>  >  > identity:projects:get
>  >  >
>  >  > It keeps things simple and consistent when you're looking through
> overrides / defaults.
>  >  > --Morgan
>  >  +1 -- I think the ordering will be cleaner if `resource` comes
>  >  before `action|subaction`.
>  >
>  > ++
>  > These are excellent points. I especially like being able to omit the
> convention about plurality. Furthermore, I'd like to add that I think we
> should make the resource singular (e.g., project instead of projects). For
> example:
>  > compute:server:list
>  >
> compute:server:update
> compute:server:create
> compute:server:delete
> compute:server:action:reboot
> compute:server:action:confirm_resize (or confirm-resize)
>
> Do we need the "action" word there? I think the action name itself should
> convey the operation. IMO the notation below, without the "action" word,
> looks clear enough. What do you say?
>
> compute:server:reboot
> compute:server:confirm_resize
>

I agree. I simplified this in the current version up for review.


>
> -gmann
>
>  >
>  > Otherwise, someone might mistake compute:servers:get as "list". This
> is ultra-nit-picky, but something I thought of when seeing the usage of
> "get_all" in policy names in favor of "list."
>  > In summary, the new convention based on the most recent feedback should
> be:
>  > <service-type>:<resource>:<action>[:<sub-action>]
>  > Rules:
>  > - service-type is always defined in the service types authority
>  > - resources are always singular
>  > Thanks to all for sticking through this tedious discussion. I
> appreciate it.
>  >  /R
>  >
>  >  Harry
>  >  >
>  >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
> wrote:
>  >  >>
>  >  >> Bumping this thread again and proposing two conventions based on
> the discussion here. I propose we decide on one of the two following
> conventions:
>  >  >>
>  >  >> <service-type>:<action>:<resource>
>  >  >>
>  >  >> or
>  >  >>
>  >  >> <service-type>:<action>_<resource>
>  >  >>
>  >  >> Where <service-type> is the corresponding service type of the
> project [0], and <action> is either create, get, list, update, or delete. I
> think decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>  >  >>
>  >  >> I think the plurality of the resource should default to what makes
> sense for the operation being carried out (e.g., list:foobars,
> create:foobar).
>  >  >>
>  >  >> I don't mind the first one because it's clear about what the
> delimiter is and it doesn't look weird when projects have something like:
>  >  >>
>  >  >> :::
>  >  >>
>  >  >> If folks are ok with this, I can start working on some
> documentation that explains the motivation for this. Afterward, we can
> figure out how we want to track this work.
>  >  >>
>  >  >> What color do you want the shed to be?
>  >  >>
>  >  >> [0] https://service-types.openstack.org/service-types.json
>  >  >> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>  >  >>
>  >  >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
>  >  >>>
>  >  >>>
>  >  >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann <
> gm...@ghanshyammann.com> wrote:
>  >  >>>>
>  >  >>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
> j...@johngarbutt.com> wrote 
>

Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Alright - I've worked up the majority of what we have in this thread and
proposed a documentation patch for oslo.policy [0].

I think we're at the point where we can finish the rest of this discussion
in gerrit if folks are ok with that.

[0] https://review.openstack.org/#/c/606214/

On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis  wrote:

> On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
> > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
> >
> > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
> > >  wrote:
> > > >
> > > > Ideally I would like to see it in the form of least specific to most
> > > specific. But more importantly in a way that there is no additional
> > > delimiters between the service type and the resource. Finally, I do not
> > > like the change of plurality depending on action type.
> > > >
> > > > I propose we consider
> > > >
> > > > <service-type>:<resource>:<action>[:<sub-action>]
> > > >
> > > > Example for keystone (note, action names below are strictly examples
> I
> > > am fine with whatever form those actions take):
> > > > identity:projects:create
> > > > identity:projects:delete
> > > > identity:projects:list
> > > > identity:projects:get
> > > >
> > > > It keeps things simple and consistent when you're looking through
> > > overrides / defaults.
> > > > --Morgan
> > > +1 -- I think the ordering will be cleaner if `resource` comes
> > > before `action|subaction`.
> > >
> >
>
> Great idea. This is looking better and better.
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Alright - I've worked up the majority of what we have in this thread and
proposed a documentation patch for oslo.policy [0].

I think we're at the point where we can finish the rest of this discussion
in gerrit if folks are ok with that.

[0] https://review.openstack.org/#/c/606214/

On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis  wrote:

> On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
> > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
> >
> > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
> > >  wrote:
> > > >
> > > > Ideally I would like to see it in the form of least specific to most
> > > specific. But more importantly in a way that there is no additional
> > > delimiters between the service type and the resource. Finally, I do not
> > > like the change of plurality depending on action type.
> > > >
> > > > I propose we consider
> > > >
> > > > <service-type>:<resource>:<action>[:<sub-action>]
> > > >
> > > > Example for keystone (note, action names below are strictly examples
> I
> > > am fine with whatever form those actions take):
> > > > identity:projects:create
> > > > identity:projects:delete
> > > > identity:projects:list
> > > > identity:projects:get
> > > >
> > > > It keeps things simple and consistent when you're looking through
> > > overrides / defaults.
> > > > --Morgan
> > > +1 -- I think the ordering will be cleaner if `resource` comes
> > > before `action|subaction`.
> > >
> >
>
> Great idea. This is looking better and better.
>


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:

> On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  wrote:
> >
> > Ideally I would like to see it in the form of least specific to most
> specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
> >
> > I propose we consider
> >
> > <service-type>:<resource>:<action>[:<sub-action>]
> >
> > Example for keystone (note, action names below are strictly examples I
> am fine with whatever form those actions take):
> > identity:projects:create
> > identity:projects:delete
> > identity:projects:list
> > identity:projects:get
> >
> > It keeps things simple and consistent when you're looking through
> overrides / defaults.
> > --Morgan
> +1 -- I think the ordering will be cleaner if `resource` comes
> before `action|subaction`.
>

++

These are excellent points. I especially like being able to omit the
convention about plurality. Furthermore, I'd like to add that I think we
should make the resource singular (e.g., project instead of projects). For
example:

compute:server:list
compute:server:update
compute:server:create
compute:server:delete
compute:server:action:reboot
compute:server:action:confirm_resize (or confirm-resize)

Otherwise, someone might mistake compute:servers:get as "list". This is
ultra-nit-picky, but something I thought of when seeing the usage of
"get_all" in policy names in favor of "list."

In summary, the new convention based on the most recent feedback should be:

*<service-type>:<resource>:<action>[:<sub-action>]*

Rules:

   - service-type is always defined in the service types authority
   - resources are always singular
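As a purely illustrative sketch of how these rules compose (the helper function below is an assumption for illustration, not real nova or oslo.policy code):

```python
def policy_name(service_type, resource, action, sub_action=None):
    """Compose <service-type>:<resource>:<action>[:<sub-action>] with a
    singular resource, per the convention summarized above."""
    parts = [service_type, resource, action]
    if sub_action is not None:
        parts.append(sub_action)
    return ":".join(parts)

# The example names from earlier in the thread fall out directly.
assert policy_name("compute", "server", "list") == "compute:server:list"
assert policy_name("identity", "project", "create") == "identity:project:create"
assert policy_name("compute", "server", "action", "reboot") == \
    "compute:server:action:reboot"
```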

Thanks to all for sticking through this tedious discussion. I appreciate it.


>
> /R
>
> Harry
> >
> > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
> wrote:
> >>
> >> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
> >>
> >> <service-type>:<action>:<resource>
> >>
> >> or
> >>
> >> <service-type>:<action>_<resource>
> >>
> >> Where <service-type> is the corresponding service type of the project
> [0], and <action> is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
> >>
> >> I think the plurality of the resource should default to what makes
> sense for the operation being carried out (e.g., list:foobars,
> create:foobar).
> >>
> >> I don't mind the first one because it's clear about what the delimiter
> is and it doesn't look weird when projects have something like:
> >>
> >> :::
> >>
> >> If folks are ok with this, I can start working on some documentation
> that explains the motivation for this. Afterward, we can figure out how we
> want to track this work.
> >>
> >> What color do you want the shed to be?
> >>
> >> [0] https://service-types.openstack.org/service-types.json
> >> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
> >>
> >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
> >>>
> >>>
> >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann <
> gm...@ghanshyammann.com> wrote:
> >>>>
> >>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
> j...@johngarbutt.com> wrote 
> >>>>  > tl;dr: +1 consistent names
> >>>>  > I would make the names mirror the API... because the Operator
> setting them knows the API, not the code. Ignore the crazy names in
> Nova, I certainly hate them.
> >>>>
> >>>> Big +1 on consistent naming  which will help operator as well as
> developer to maintain those.
> >>>>
> >>>>  >
> >>>>  > Lance Bragstad  wrote:
> >>>>  > > I'm curious if anyone has context on the "os-" part of the
> format?
> >>>>  >
> >>>>  > My memory of the Nova policy mess...
> >>>>  > * Nova's policy rules traditionally followed the patterns of the code
> >>>>  > ** Yes, horrible, but it happened.
> >>>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
> >>>>  > * API used to expand with extensions, so the policy name is often based on extensions
> >>>>  > ** note most of the extension code has now gone, includi

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:

> On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  wrote:
> >
> > Ideally I would like to see it in the form of least specific to most
> specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
> >
> > I propose we consider
> >
> > <service-type>:<resource>:<action>[:<subaction>]
> >
> > Example for keystone (note, action names below are strictly examples I
> am fine with whatever form those actions take):
> > identity:projects:create
> > identity:projects:delete
> > identity:projects:list
> > identity:projects:get
> >
> > It keeps things simple and consistent when you're looking through
> overrides / defaults.
> > --Morgan
> +1 -- I think the ordering, with `resource` coming before
> `action|subaction`, will be cleaner.
>

++

These are excellent points. I especially like being able to drop the
convention about plurality. Furthermore, I'd like to add that I think we
should make the resource singular (e.g., project instead of projects). For
example:

compute:server:list
compute:server:update
compute:server:create
compute:server:delete
compute:server:action:reboot
compute:server:action:confirm_resize (or confirm-resize)

Otherwise, someone might mistake compute:servers:get as "list". This is
ultra-nit-picky, but something I thought of when seeing the usage of
"get_all" in policy names in favor of "list."

In summary, the new convention based on the most recent feedback should be:

*<service-type>:<resource>:<action>[:<subaction>]*

Rules:

   - service-type is always defined in the service types authority
   - resources are always singular

Thanks to all for sticking through this tedious discussion. I appreciate it.
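To make the summarized convention concrete, here is a small illustrative helper -- an assumption for demonstration only, not part of oslo.policy or any project's actual code -- that builds names of the form service-type:resource:action, with an optional sub-action:

```python
# Illustrative helper for the proposed policy naming convention.
# The action vocabulary below is an example set, not an authoritative list.
ACTIONS = {'create', 'get', 'list', 'update', 'delete', 'action'}


def build_policy_name(service_type, resource, action, sub_action=None):
    """Return a policy name such as 'compute:server:action:reboot'."""
    if action not in ACTIONS:
        raise ValueError('unknown action: %s' % action)
    parts = [service_type, resource, action]
    if sub_action is not None:
        parts.append(sub_action)
    return ':'.join(parts)


print(build_policy_name('compute', 'server', 'list'))               # compute:server:list
print(build_policy_name('compute', 'server', 'action', 'reboot'))   # compute:server:action:reboot
```

Real rules would still be registered through oslo.policy (e.g., with DocumentedRuleDefault, as mentioned earlier in the thread) so the HTTP method and path stay documented alongside the policy name.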


>
> /R
>
> Harry
> >
> > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
> wrote:
> >>
> >> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
> >>
> >> ::
> >>
> >> or
> >>
> >> :_
> >>
> >> Where  is the corresponding service type of the project
> [0], and  is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
> >>
> >> I think the plurality of the resource should default to what makes
> sense for the operation being carried out (e.g., list:foobars,
> create:foobar).
> >>
> >> I don't mind the first one because it's clear about what the delimiter
> is and it doesn't look weird when projects have something like:
> >>
> >> :::
> >>
> >> If folks are ok with this, I can start working on some documentation
> that explains the motivation for this. Afterward, we can figure out how we
> want to track this work.
> >>
> >> What color do you want the shed to be?
> >>
> >> [0] https://service-types.openstack.org/service-types.json
> >> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
> >>
> >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
> >>>
> >>>
> >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann <
> gm...@ghanshyammann.com> wrote:
> >>>>
> >>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
> j...@johngarbutt.com> wrote 
> >>>>  > tl;dr: +1 consistent names
> >>>>  > I would make the names mirror the API... because the Operator
> setting them knows the API, not the code. Ignore the crazy names in Nova, I
> certainly hate them.
> >>>>
> >>>> Big +1 on consistent naming, which will help operators as well as
> developers to maintain those.
> >>>>
> >>>>  >
> >>>>  > Lance Bragstad  wrote:
> >>>>  > > I'm curious if anyone has context on the "os-" part of the
> format?
> >>>>  >
> >>>>  > My memory of the Nova policy mess...
> >>>>  > * Nova's policy rules traditionally followed the patterns of the code
> >>>>  > ** Yes, horrible, but it happened.
> >>>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
> >>>>  > * API used to expand with extensions, so the policy name is often based on extensions
> >>>>  > ** note most of the extension code has now gone, includi

Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Adding the operator list back in.

On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad  wrote:

> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
>
> *::*
>
> or
>
> *:_*
>
> Where  is the corresponding service type of the project [0],
> and  is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>
> I think the plurality of the resource should default to what makes sense
> for the operation being carried out (e.g., list:foobars, create:foobar).
>
> I don't mind the first one because it's clear about what the delimiter is
> and it doesn't look weird when projects have something like:
>
> :::
>
> If folks are ok with this, I can start working on some documentation that
> explains the motivation for this. Afterward, we can figure out how we want
> to track this work.
>
> What color do you want the shed to be?
>
> [0] https://service-types.openstack.org/service-types.json
> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>
> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
>
>>
>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
>> wrote:
>>
>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
>>> j...@johngarbutt.com> wrote 
>>>  > tl;dr: +1 consistent names
>>>  > I would make the names mirror the API... because the Operator setting
>>> them knows the API, not the code. Ignore the crazy names in Nova, I certainly
>>> hate them.
>>>
>>> Big +1 on consistent naming, which will help operators as well as
>>> developers to maintain those.
>>>
>>>  >
>>>  > Lance Bragstad  wrote:
>>>  > > I'm curious if anyone has context on the "os-" part of the format?
>>>  >
>>>  > My memory of the Nova policy mess...
>>>  > * Nova's policy rules traditionally followed the patterns of the code
>>>  > ** Yes, horrible, but it happened.
>>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>>  > * API used to expand with extensions, so the policy name is often based on extensions
>>>  > ** note most of the extension code has now gone, including lots of related policies
>>>  > * Policy in code was focused on getting us to a place where we could rename policy
>>>  > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
>>>  > Lance Bragstad  wrote:
>>>  > Thoughts on using create, list, update, and delete as opposed to
>>> post, get, put, patch, and delete in the naming convention?
>>>  > I could go either way as I think about "list servers" in the API. But
>>> my preference is for the URL stub and POST, GET, etc.
>>>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad 
>>> wrote: If we consider dropping "os", should we entertain dropping "api",
>>> too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple
>>> service types (e.g. "compute" or "loadbalancer").
>>>  > +1 The API is known as "compute" in api-ref, so the policy should be
>>> for "compute", etc.
>>>
>>> Agree on mapping the policy name with api-ref as much as possible. Other
>>> than policy name having 'os-', we have 'os-' in resource name also in nova
>>> API url like /os-agents, /os-aggregates etc (almost every resource except
>>> servers , flavors).  As we cannot get rid of those from API url, we need to
>>> keep the same in policy naming too? or we can have policy name like
>>> compute:agents:create/post but that mismatch from api-ref where agents
>>> resource url is os-agents.
>>>
>>
>> Good question. I think this depends on how the service does policy
>> enforcement.
>>
>> I know we did something like this in keystone, which required policy
>> names and method names to be the same:
>>
>>   "identity:list_users": "..."
>>
>> Because the initial implementation of policy enforcement used a decorator
>> like this:
>>
>>   from keystone import controller
>>
>>   @controller.protected
>>   def list_users(self):
>>   ...
>>
>> Having the poli


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Bumping this thread again and proposing two conventions based on the
discussion here. I propose we decide on one of the two following
conventions:

*::*

or

*:_*

Where  is the corresponding service type of the project [0],
and  is either create, get, list, update, or delete. I think
decoupling the method from the policy name should aid in consistency,
regardless of the underlying implementation. The HTTP method specifics can
still be relayed using oslo.policy's DocumentedRuleDefault object [1].

I think the plurality of the resource should default to what makes sense
for the operation being carried out (e.g., list:foobars, create:foobar).

I don't mind the first one because it's clear about what the delimiter is
and it doesn't look weird when projects have something like:

:::

If folks are ok with this, I can start working on some documentation that
explains the motivation for this. Afterward, we can figure out how we want
to track this work.

What color do you want the shed to be?

[0] https://service-types.openstack.org/service-types.json
[1]
https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule

On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  wrote:

>
> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
> wrote:
>
>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
>> j...@johngarbutt.com> wrote 
>>  > tl;dr: +1 consistent names
>>  > I would make the names mirror the API... because the Operator setting
>> them knows the API, not the code. Ignore the crazy names in Nova, I certainly
>> hate them.
>>
>> Big +1 on consistent naming, which will help operators as well as
>> developers to maintain those.
>>
>>  >
>>  > Lance Bragstad  wrote:
>>  > > I'm curious if anyone has context on the "os-" part of the format?
>>  >
>>  > My memory of the Nova policy mess...
>>  > * Nova's policy rules traditionally followed the patterns of the code
>>  > ** Yes, horrible, but it happened.
>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>  > * API used to expand with extensions, so the policy name is often based on extensions
>>  > ** note most of the extension code has now gone, including lots of related policies
>>  > * Policy in code was focused on getting us to a place where we could rename policy
>>  > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
>>  > Lance Bragstad  wrote:
>>  > Thoughts on using create, list, update, and delete as opposed to post,
>> get, put, patch, and delete in the naming convention?
>>  > I could go either way as I think about "list servers" in the API. But
>> my preference is for the URL stub and POST, GET, etc.
>>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad 
>> wrote: If we consider dropping "os", should we entertain dropping "api",
>> too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple
>> service types (e.g. "compute" or "loadbalancer").
>>  > +1 The API is known as "compute" in api-ref, so the policy should be
>> for "compute", etc.
>>
>> Agree on mapping the policy name with api-ref as much as possible. Other
>> than policy name having 'os-', we have 'os-' in resource name also in nova
>> API url like /os-agents, /os-aggregates etc (almost every resource except
>> servers , flavors).  As we cannot get rid of those from API url, we need to
>> keep the same in policy naming too? or we can have policy name like
>> compute:agents:create/post but that mismatch from api-ref where agents
>> resource url is os-agents.
>>
>
> Good question. I think this depends on how the service does policy
> enforcement.
>
> I know we did something like this in keystone, which required policy names
> and method names to be the same:
>
>   "identity:list_users": "..."
>
> Because the initial implementation of policy enforcement used a decorator
> like this:
>
>   from keystone import controller
>
>   @controller.protected
>   def list_users(self):
>   ...
>
> Having the policy name the same as the method name made it easier for the
> decorator implementation to resolve the policy needed to protect the API
> because it just looked at the name of the wrapped method. The advantage was
> that it was easy to implement new APIs because you only needed to add a
> policy, implement the method, and make sure you decorate the implementation.
>
> While this worked, we are moving away from it entirely. The decorator
> implementation was ridiculously complicated. Only a handfu
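The decorator pattern described in the quoted discussion might be sketched as follows. This is a deliberately minimal illustration -- an assumption, not keystone's actual implementation (the real decorator also handled token validation, dependency injection, and more, which is part of why it grew so complicated):

```python
import functools

# Toy policy registry; a real deployment loads these via oslo.policy.
POLICIES = {'identity:list_users': 'role:reader'}


def protected(func):
    """Resolve the policy rule from the wrapped method's name."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # list_users -> "identity:list_users"
        policy_name = 'identity:%s' % func.__name__
        if policy_name not in POLICIES:
            raise RuntimeError('no policy registered for %s' % policy_name)
        # A real enforcer would evaluate the check string against the
        # request context here; this sketch only resolves the rule by name.
        return func(self, *args, **kwargs)
    return wrapper


class UserController(object):
    @protected
    def list_users(self):
        return ['alice', 'bob']


print(UserController().list_users())  # ['alice', 'bob']
```

The convenience is visible: adding an API only requires registering a policy whose name matches the method and applying the decorator -- which is also exactly what couples policy names to code instead of to the API.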

Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Lance Bragstad
Ack - thanks for the clarification, Tim.

On Thu, Sep 27, 2018 at 12:10 PM Tim Bell  wrote:

>
>
> Lance,
>
>
>
> The comment regarding ‘readers’ is more to explain that the distinction
> between ‘admin’ and ‘user’ commands is gradually reducing, where OSC has
> been prioritising ‘user’ commands.
>
>
>
> As an example, we give the CERN security team view-only access to many
> parts of the cloud. This allows them to perform their investigations
> independently.  Thus, many commands which would be, by default, admin only
> are also available to roles such as the ‘readers’ (e.g. list, show, … of
> internals or projects which they are not in the members list)
>
>
>
> I don’t think there is any implications for Keystone (and the readers role
> is a nice improvement to replace the previous manual policy definitions)
> but more of a question of which subcommands we should aim to support in OSC.
>
>
>
> The *-manage commands such as nova-manage, I would consider, out of scope
> for OSC. Only admins would be migrating between versions or DB schemas.
>
>
>
> Tim
>
>
>
> *From: *Lance Bragstad 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, 27 September 2018 at 15:30
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [goals][tc][ptl][uc] starting goal
> selection for T series
>
>
>
>
>
> On Wed, Sep 26, 2018 at 1:56 PM Tim Bell  wrote:
>
>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving
> legacy python-*client CLIs to python-openstackclient" from the etherpad and
> propose this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write an
> extensive end user facing documentation which explains how to use the
> OpenStack along with CERN specific features (such as workflows for
> requesting projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is
> inconsistent. In some cases, we find projects which are not covered by the
> unified OpenStack client (e.g. Manila). In other cases, there are subsets
> of the function which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be
> defined)
> - Many administrator actions would also benefit from integration (reader
> roles are end users too so list and show need to be covered too)
> - Users should be able to use a single openrc for all interactions with
> the cloud (e.g. not switch between password for some CLIs and Kerberos for
> OSC)
>
>
>
> Sorry to back up the conversation a bit, but does reader role require work
> in the clients? Last release we incorporated three roles by default during
> keystone's installation process [0]. Is the definition in the specification
> what you mean by reader role, or am I on a different page?
>
>
>
> [0]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles
>
>
>
> The end user perception of a solution will be greatly enhanced by a single
> command line tool with consistent syntax and authentication framework.
>
> It may be a multi-release goal but it would really benefit the cloud
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev ,
> openstack-operators ,
> openstack-sigs 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for
> T series
>
> It's time to start thinking about community-wide goals for the T
> series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently
> improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that be

Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Lance Bragstad
On Wed, Sep 26, 2018 at 1:56 PM Tim Bell  wrote:

>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving
> legacy python-*client CLIs to python-openstackclient" from the etherpad and
> propose this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write an
> extensive end user facing documentation which explains how to use the
> OpenStack along with CERN specific features (such as workflows for
> requesting projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is
> inconsistent. In some cases, we find projects which are not covered by the
> unified OpenStack client (e.g. Manila). In other cases, there are subsets
> of the function which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be
> defined)
> - Many administrator actions would also benefit from integration (reader
> roles are end users too so list and show need to be covered too)
> - Users should be able to use a single openrc for all interactions with
> the cloud (e.g. not switch between password for some CLIs and Kerberos for
> OSC)
>
>
Sorry to back up the conversation a bit, but does reader role require work
in the clients? Last release we incorporated three roles by default during
keystone's installation process [0]. Is the definition in the specification
what you mean by reader role, or am I on a different page?

[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles


> The end user perception of a solution will be greatly enhanced by a single
> command line tool with consistent syntax and authentication framework.
>
> It may be a multi-release goal but it would really benefit the cloud
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev ,
> openstack-operators ,
> openstack-sigs 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for
> T series
>
> It's time to start thinking about community-wide goals for the T
> series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently
> improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that before
> the summit it is described in the tracking etherpad [1] and that you
> have started a mailing list thread on the openstack-dev list about the
> proposal so that everyone in the forum session [2] has an opportunity
> to
> consider the details.  The forum session is only one step in the
> selection process. See [3] for more details.
>
> Doug
>
> [1] https://etherpad.openstack.org/p/community-goals
> [2]
> https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
> [3] https://governance.openstack.org/tc/goals/index.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-27 Thread Lance Bragstad
Using the domain name + group name pairing also allows for things like:

JSON: {"group_name": "C", "domain_name": "X"}
JSON: {"group_name": "C", "domain_name": "Y"}

to showcase how we solve the ambiguity in group names by namespacing them
with domains.
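One way such a pairing could travel inside a single SAML xs:string AttributeValue, along the lines of the JSON-plus-base64 idea discussed later in this thread, is a simple encode/decode round trip. This is a sketch under that assumption, not merged keystone behavior:

```python
import base64
import json


def encode_group(group_name, domain_name):
    """Serialize a (group, domain) pair so it fits in an xs:string."""
    payload = json.dumps({'group_name': group_name, 'domain_name': domain_name})
    return base64.b64encode(payload.encode('utf-8')).decode('ascii')


def decode_group(token):
    """Reverse the encoding on the service-provider side before mapping."""
    return json.loads(base64.b64decode(token).decode('utf-8'))


token = encode_group('C', 'X')
print(decode_group(token))  # {'group_name': 'C', 'domain_name': 'X'}
```

Two groups named "C" in domains "X" and "Y" now produce distinct attribute values, which is the ambiguity the pairing is meant to remove.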

On Thu, Sep 27, 2018 at 3:11 AM Colleen Murphy  wrote:

>
>
> On Thu, Sep 27, 2018, at 5:09 AM, vishakha agarwal wrote:
> > > From : Colleen Murphy 
> > > To : 
> > > Date : Tue, 25 Sep 2018 18:33:30 +0900
> > > Subject : Re: [openstack-dev] [keystone] Domain-namespaced user
> attributes in SAML assertions from Keystone IdPs
> > >  Forwarded message 
> > >  > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote:
> > >  > > On 9/24/18 8:00 AM, Colleen Murphy wrote:
> > >  > > > This is in regard to https://launchpad.net/bugs/1641625 and
> the proposed patch https://review.openstack.org/588211 for it. Thanks
> Vishakha for getting the ball rolling.
> > >  > > >
> > >  > > > tl;dr: Keystone as an IdP should support sending
> non-strings/lists-of-strings as user attribute values, specifically lists
> of keystone groups, here's how that might happen.
> > >  > > >
> > >  > > > Problem statement:
> > >  > > >
> > >  > > > When keystone is set up as a service provider with an external
> non-keystone identity provider, it is common to configure the mapping rules
> to accept a list of group names from the IdP and map them to some property
> of a local keystone user, usually also a keystone group name. When keystone
> acts as the IdP, it's not currently possible to send a group name as a user
> property in the assertion. There are a few problems:
> > >  > > >
> > >  > > >  1. We haven't added any openstack_groups key in the
> creation of the SAML assertion (
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164
> ).
> > >  > > >  2. If we did, this would not be enough. Unlike other IdPs,
> in keystone there can be multiple groups with the same name, namespaced by
> domain. So it's not enough for the SAML AttributeStatement to contain a
> semi-colon-separated list of group names, since a user could theoretically
> be a member of two or more groups with the same name.
> > >  > > > * Why can't we just send group IDs, which are unique?
> Because two different keystones are not going to have independent groups
> with the same UUID, so we cannot possibly map an ID of a group from
> keystone A to the ID of a different group in keystone B. We could map the
> ID of the group in in A to the name of a group in B but then operators need
> to create groups with UUIDs as names which is a little awkward for both the
> operator and the user who now is a member of groups with nondescriptive
> names.
> > >  > > >  3. If we then were able to encode a complex type like a
> group dict in a SAML assertion, we'd have to deal with it on the service
> provider side by being able to parse such an environment variable from the
> Apache headers.
> > >  > > >  4. The current mapping rules engine uses basic python
> string formatting to translate remote key-value pairs to local rules. We
> would need to change the mapping API to work with values more complex than
> strings and lists of strings.
> > >  > > >
> > >  > > > Possible solution:
> > >  > > >
> > >  > > > Vishakha's patch (https://review.openstack.org/588211) starts
> to solve (1) but it doesn't go far enough to solve (2-4). What we talked
> about at the PTG was:
> > >  > > >
> > >  > > >  2. Encode the group+domain as a string, for example by
> using the dict string repr or a string representation of some custom XML
> and maybe base64 encoding it.
> > >  > > >  * It's not totally clear whether the AttributeValue
> class of the pysaml2 library supports any data types outside of the
> xmlns:xs namespace or whether nested XML is an option, so encoding the
> whole thing as an xs:string seems like the simplest solution.
> > >  > > >  3. The SP will have to be aware that openstack_groups is a
> special key that needs the encoding reversed.
> > >  > > >  * I wrote down "MultiDict" in my notes but I don't
> recall exactly what format the environment variable would take that would
> make a MultiDict make sense here, in any case I think encoding the whole
> thing as a string eliminates the need for this.
> > >  > > >  4. We didn't talk about the mapping API, but here's what I
> think. If we were just talking about group names, the mapping API today
> would work like this (slight oversimplification for brevity):
> > >  > > >
> > >  > > > Given a list of openstack_groups like ["A", "B", "C"], it would
> work like this:
> > >  > > >
> > >  > > > [
> > >  > > >{
> > >  > > >  "local":
> > >  > > >  [
> > >  > > >{
> > >  > > >  "group":
> > >  > > >  {
> > >  > > >"name": "{0}",
> > >  > > >"domain":
> > >  > > >{
> > >  > > >  "name": 
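The quoted mapping rule is cut off above. For reference, here is a heavily simplified stand-in -- an assumption, not keystone's real mapping engine, and the "Default" domain name is illustrative -- for how such a rule expands a remote group list into local group references via basic string formatting:

```python
import copy

# Illustrative local-rule template: each remote group name substitutes into
# "{0}", with the target domain fixed by the rule author.
RULE_TEMPLATE = {'group': {'name': '{0}', 'domain': {'name': 'Default'}}}


def apply_rule(remote_groups):
    """Expand the template once per remote group name."""
    mapped = []
    for name in remote_groups:
        rule = copy.deepcopy(RULE_TEMPLATE)
        rule['group']['name'] = rule['group']['name'].format(name)
        mapped.append(rule)
    return mapped


mapped = apply_rule(['A', 'B', 'C'])
print([m['group']['name'] for m in mapped])  # ['A', 'B', 'C']
```

This string-formatting model is exactly what point (4) above identifies as a limitation: it works for strings and lists of strings, but not for richer values like group-plus-domain dicts.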

Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Lance Bragstad
For those who may be following along and are not familiar with it, here is
what we mean by federated auto-provisioning [0].

[0]
https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
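The "predictable IDs" idea discussed below can be illustrated with a deterministic, name-based UUID. This is only a sketch; the namespace and hash inputs are assumptions for demonstration, not an agreed design:

```python
import uuid

# Any fixed namespace shared by the cooperating deployments would do;
# NAMESPACE_URL is used here purely as a placeholder.
NAMESPACE = uuid.NAMESPACE_URL


def predictable_user_id(domain_id, assertion_value):
    """Derive a deterministic user ID from the domain ID and an
    attribute from the federation assertion, so every edge keystone
    auto-provisioning the same user computes the same ID."""
    return uuid.uuid5(NAMESPACE, '%s:%s' % (domain_id, assertion_value)).hex
```

Because uuid5 is a name-based (SHA-1) UUID, the ID stays keystone-controlled and reproducible without ever being user-assigned.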

On Wed, Sep 26, 2018 at 9:06 AM Morgan Fainberg 
wrote:

> This discussion was also not about user assigned IDs, but predictable IDs
> with the auto provisioning. We still want it to be something keystone
> controls (locally). It might be a hash of the domain ID and a value from the
> assertion (similar to the LDAP user ID generator). As long as, within an environment,
> the IDs are predictable when auto provisioning via federation, we should be
> good. And the problem of the totally unknown ID until provisioning could be
> made less of an issue for someone working within a massively federated edge
> environment.
>
> I don't want user/explicit admin set IDs.
>
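To make the idea concrete, here is a minimal sketch of the kind of deterministic ID generation Morgan describes. The function name, the sha256 choice, and the 32-character truncation are assumptions for illustration (loosely mirroring keystone's sha256-based LDAP user ID generator), not keystone's actual implementation:

```python
import hashlib

def generate_predictable_id(domain_id: str, unique_value: str) -> str:
    """Derive a deterministic, UUID-length ID from a domain ID plus a
    value taken from the federation assertion (e.g. the username)."""
    digest = hashlib.sha256()
    digest.update(domain_id.encode("utf-8"))
    digest.update(unique_value.encode("utf-8"))
    # Truncate to 32 hex chars so it resembles a keystone UUID-style ID.
    return digest.hexdigest()[:32]

# Every keystone in the environment derives the same ID for the same inputs,
# so the ID is predictable before auto-provisioning ever runs.
id_a = generate_predictable_id("default", "alice@idp.example.com")
id_b = generate_predictable_id("default", "alice@idp.example.com")
assert id_a == id_b
assert generate_predictable_id("domain-2", "alice@idp.example.com") != id_a
```

Because the ID is derived locally from the assertion, keystone still controls it; nothing is user-settable, which matches the constraint Morgan states above.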
> On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:
>
>> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
>> > Thanks for the summary, Ildiko. I have some questions inline.
>> >
>> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>> >
>> > 
>> >
>> >>
>> >> We agreed to prefer federation for Keystone and came up with two work
>> >> items to cover missing functionality:
>> >>
>> >> * Keystone to trust a token from an ID Provider master and when the
>> auth
>> >> method is called, perform an idempotent creation of the user, project
>> >> and role assignments according to the assertions made in the token
>> >
>> > This sounds like it is based on the customizations done at Oath, which
>> to my recollection did not use the actual federation implementation in
>> keystone due to its reliance on Athenz (I think?) as an identity manager.
>> Something similar can be accomplished in standard keystone with the mapping
>> API in keystone which can cause dynamic generation of a shadow user,
>> project and role assignments.
>> >
>> >> * Keystone should support the creation of users and projects with
>> >> predictable UUIDs (eg.: hash of the name of the users and projects).
>> >> This greatly simplifies Image federation and telemetry gathering
>> >
>> > I was in and out of the room and don't recall this discussion exactly.
>> We have historically pushed back hard against allowing setting a project ID
>> via the API, though I can see predictable-but-not-settable as less
>> problematic. One of the use cases from the past was being able to use the
>> same token in different regions, which is problematic from a security
> perspective. Is that the idea here? Or could someone provide more details
>> on why this is needed?
>>
>> Hi Colleen,
>>
>> I wasn't in the room for this conversation either, but I believe the
>> "use case" wanted here is mostly a convenience one. If the edge
>> deployment is composed of hundreds of small Keystone installations and
>> you have a user (e.g. an NFV MANO user) which should have visibility
>> across all of those Keystone installations, it becomes a hassle to need
>> to remember (or in the case of headless users, store some lookup of) all
>> the different tenant and user UUIDs for what is essentially the same
>> user across all of those Keystone installations.
>>
>> I'd argue that as long as it's possible to create a Keystone tenant and
>> user with a unique name within a deployment, and as long as it's
>> possible to authenticate using the tenant and user *name* (i.e. not the
>> UUID), then this isn't too big of a problem. However, I do know that a
>> bunch of scripts and external tools rely on setting the tenant and/or
>> user via the UUID values and not the names, so that might be where this
>> feature request is coming from.
>>
>> Hope that makes sense?
>>
>> Best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-24 Thread Lance Bragstad
On Mon, Sep 24, 2018 at 9:31 AM Colleen Murphy  wrote:

> On Mon, Sep 24, 2018, at 4:16 PM, Lance Bragstad wrote:
> > On Mon, Sep 24, 2018 at 7:00 AM Colleen Murphy 
> wrote:
> >
> > > This is in regard to https://launchpad.net/bugs/1641625 and the
> proposed
> > > patch https://review.openstack.org/588211 for it. Thanks Vishakha for
> > > getting the ball rolling.
> > >
> > > tl;dr: Keystone as an IdP should support sending
> > > non-strings/lists-of-strings as user attribute values, specifically
> lists
> > > of keystone groups, here's how that might happen.
> > >
> > > Problem statement:
> > >
> > > When keystone is set up as a service provider with an external
> > > non-keystone identity provider, it is common to configure the mapping
> rules
> > > to accept a list of group names from the IdP and map them to some
> property
> > > of a local keystone user, usually also a keystone group name. When
> keystone
> > > acts as the IdP, it's not currently possible to send a group name as a
> user
> > > property in the assertion. There are a few problems:
> > >
> > > 1. We haven't added any openstack_groups key in the creation of the
> > > SAML assertion (
> > >
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164
> > > ).
> > > 2. If we did, this would not be enough. Unlike other IdPs, in
> keystone
> > > there can be multiple groups with the same name, namespaced by domain.
> So
> > > it's not enough for the SAML AttributeStatement to contain a
> > > semi-colon-separated list of group names, since a user could
> theoretically
> > > be a member of two or more groups with the same name.
> > >* Why can't we just send group IDs, which are unique? Because two
> > > different keystones are not going to have independent groups with the
> same
> > > UUID, so we cannot possibly map an ID of a group from keystone A to
> the ID
> > > of a different group in keystone B. We could map the ID of the group
> in in
> > > A to the name of a group in B but then operators need to create groups
> with
> > > UUIDs as names which is a little awkward for both the operator and the
> user
> > > who now is a member of groups with nondescriptive names.
> > > 3. If we then were able to encode a complex type like a group dict
> in
> > > a SAML assertion, we'd have to deal with it on the service provider
> side by
> > > being able to parse such an environment variable from the Apache
> headers.
> > > 4. The current mapping rules engine uses basic python string
> > > formatting to translate remote key-value pairs to local rules. We would
> > > need to change the mapping API to work with values more complex than
> > > strings and lists of strings.
> > >
> > > Possible solution:
> > >
> > > Vishakha's patch (https://review.openstack.org/588211) starts to solve
> > > (1) but it doesn't go far enough to solve (2-4). What we talked about
> at
> > > the PTG was:
> > >
> > > 2. Encode the group+domain as a string, for example by using the
> dict
> > > string repr or a string representation of some custom XML and maybe
> base64
> > > encoding it.
> > > * It's not totally clear whether the AttributeValue class of
> the
> > > pysaml2 library supports any data types outside of the xmlns:xs
> namespace
> > > or whether nested XML is an option, so encoding the whole thing as an
> > > xs:string seems like the simplest solution.
> > >
> >
> > Encoding this makes sense. We can formally support different SAML data
> > types in the future if a better solution comes along. We would have to
> make
> > the service provider deal with both types of encoding, but we could
> > eventually consolidate, and users shouldn't know the difference. Right?
>
> The only way this would make a difference to the user is if they need to
> debug a request by actually looking at the response to this request[1]. If
> we were to base64-encode the string that immediately obfuscates what the
> actual value is. I'm not really sure if we need to base64-encode it or just
> serialize it some other way.
>

Oh - yeah that makes sense. In your opinion, does that prevent us from
adopting another way of solving the problem if we find a better data type?


>
> [1]
> https://developer.openstack.org/api-ref/identity/v3-ext/index.html#id404
> >
> >
> > > 3. The S

Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-24 Thread Lance Bragstad
On Mon, Sep 24, 2018 at 7:00 AM Colleen Murphy  wrote:

> This is in regard to https://launchpad.net/bugs/1641625 and the proposed
> patch https://review.openstack.org/588211 for it. Thanks Vishakha for
> getting the ball rolling.
>
> tl;dr: Keystone as an IdP should support sending
> non-strings/lists-of-strings as user attribute values, specifically lists
> of keystone groups, here's how that might happen.
>
> Problem statement:
>
> When keystone is set up as a service provider with an external
> non-keystone identity provider, it is common to configure the mapping rules
> to accept a list of group names from the IdP and map them to some property
> of a local keystone user, usually also a keystone group name. When keystone
> acts as the IdP, it's not currently possible to send a group name as a user
> property in the assertion. There are a few problems:
>
> 1. We haven't added any openstack_groups key in the creation of the
> SAML assertion (
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164
> ).
> 2. If we did, this would not be enough. Unlike other IdPs, in keystone
> there can be multiple groups with the same name, namespaced by domain. So
> it's not enough for the SAML AttributeStatement to contain a
> semi-colon-separated list of group names, since a user could theoretically
> be a member of two or more groups with the same name.
>* Why can't we just send group IDs, which are unique? Because two
> different keystones are not going to have independent groups with the same
> UUID, so we cannot possibly map an ID of a group from keystone A to the ID
> of a different group in keystone B. We could map the ID of the group in in
> A to the name of a group in B but then operators need to create groups with
> UUIDs as names which is a little awkward for both the operator and the user
> who now is a member of groups with nondescriptive names.
> 3. If we then were able to encode a complex type like a group dict in
> a SAML assertion, we'd have to deal with it on the service provider side by
> being able to parse such an environment variable from the Apache headers.
> 4. The current mapping rules engine uses basic python string
> formatting to translate remote key-value pairs to local rules. We would
> need to change the mapping API to work with values more complex than
> strings and lists of strings.
>
> Possible solution:
>
> Vishakha's patch (https://review.openstack.org/588211) starts to solve
> (1) but it doesn't go far enough to solve (2-4). What we talked about at
> the PTG was:
>
> 2. Encode the group+domain as a string, for example by using the dict
> string repr or a string representation of some custom XML and maybe base64
> encoding it.
> * It's not totally clear whether the AttributeValue class of the
> pysaml2 library supports any data types outside of the xmlns:xs namespace
> or whether nested XML is an option, so encoding the whole thing as an
> xs:string seems like the simplest solution.
>

Encoding this makes sense. We can formally support different SAML data
types in the future if a better solution comes along. We would have to make
the service provider deal with both types of encoding, but we could
eventually consolidate, and users shouldn't know the difference. Right?
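A minimal sketch of the encode/reverse round trip discussed here, assuming JSON as the serialization (the thread leaves open whether base64 is strictly needed, since it obfuscates the value when debugging a SAML response):

```python
import base64
import json

def encode_groups(groups):
    """IdP side: serialize the domain-namespaced group list into a single
    value that fits in an xs:string AttributeValue (JSON, then base64)."""
    return base64.b64encode(json.dumps(groups).encode("utf-8")).decode("ascii")

def decode_groups(value):
    """SP side: reverse the encoding before handing the list of group
    dicts to the mapping engine."""
    return json.loads(base64.b64decode(value.encode("ascii")).decode("utf-8"))

groups = [
    {"name": "A", "domain_name": "Default"},
    {"name": "A", "domain_name": "domainB"},  # same name, different domain
]
encoded = encode_groups(groups)
assert decode_groups(encoded) == groups
```

The round trip preserves the domain namespace, so two groups named "A" in different domains stay distinguishable, which is the core requirement in point (2).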


> 3. The SP will have to be aware that openstack_groups is a special key
> that needs the encoding reversed.
> * I wrote down "MultiDict" in my notes but I don't recall exactly
> what format the environment variable would take that would make a MultiDict
> make sense here, in any case I think encoding the whole thing as a string
> eliminates the need for this.
> 4. We didn't talk about the mapping API, but here's what I think. If
> we were just talking about group names, the mapping API today would work
> like this (slight oversimplification for brevity):
>
> Given a list of openstack_groups like ["A", "B", "C"], it would work like
> this:
>
> [
>   {
> "local":
> [
>   {
> "group":
> {
>   "name": "{0}",
>   "domain":
>   {
> "name": "federated_domain"
>   }
> }
>   }
> ], "remote":
> [
>   {
> "type": "openstack_groups"
>   }
> ]
>   }
> ]
> (paste in case the spacing makes this unreadable:
> http://paste.openstack.org/show/730623/ )
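The "{0}" substitution in the mapping above is plain Python string formatting, and the same mechanism can reach object attributes, which is what a "{0.name}"-style rule would rely on. A small sketch (the FederatedGroup class is hypothetical, standing in for whatever object the decoded AttributeValue would be converted to):

```python
class FederatedGroup:
    """Stand-in for a decoded group attribute from the SAML assertion."""
    def __init__(self, name, domain_name):
        self.name = name
        self.domain_name = domain_name

g = FederatedGroup("A", "domainB")

# Today's engine substitutes plain strings from the remote attribute list:
assert "{0}".format("A") == "A"

# str.format can also do attribute lookups on an object, so a mapping rule
# like {0.name} / {0.domain_name} needs no new templating machinery:
assert "{0.name}".format(g) == "A"
assert "{0.domain_name}".format(g) == "domainB"
```

In other words, extending the mapping API to richer values is mostly a question of what object the service provider builds after decoding, not of the format syntax itself.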
>
> But now, we no longer have a list of strings but something more like
> [{"name": "A", "domain_name": "Default"}, {"name": "B", "domain_name":
> "Default"}, {"name": "A", "domain_name": "domainB"}]. Since {0} isn't a
> string, this example doesn't really work. Instead, let's assume that in
> step (3) we converted the decoded AttributeValue text to an object. Then
> the mapping could look more like this:
>
> [
>   {
> "local":
> [
>   {
> "group":
> {
>   "name": "{0.name}",
>   "domain":
>   {
> "name": "{0.domain_name}"
> 

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-21 Thread Lance Bragstad
On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
wrote:

>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
> j...@johngarbutt.com> wrote 
>  > tl;dr: +1 consistent names
>  > I would make the names mirror the API... because the Operator setting
> them knows the API, not the code. Ignore the crazy names in Nova; I
> certainly hate them.
>
> Big +1 on consistent naming  which will help operator as well as developer
> to maintain those.
>
>  >
>  > Lance Bragstad  wrote:
>  > > I'm curious if anyone has context on the "os-" part of the format?
>  >
>  > My memory of the Nova policy mess...
>  > * Nova's policy rules traditionally followed the patterns of the code
>  > ** Yes, horrible, but it happened.
>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>  > * API used to expand with extensions, so the policy name is often based
>  > on extensions
>  > ** note most of the extension code has now gone, including lots of
>  > related policies
>  > * Policy in code was focused on getting us to a place where we could
>  > rename policy
>  > ** Whoop whoop by the way, it feels like we are really close to
>  > something sensible now!
>  > Lance Bragstad  wrote:
>  > Thoughts on using create, list, update, and delete as opposed to post,
> get, put, patch, and delete in the naming convention?
>  > I could go either way as I think about "list servers" in the API. But my
> preference is for the URL stub and POST, GET, etc.
>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad 
> wrote: If we consider dropping "os", should we entertain dropping "api",
> too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple
> service types (e.g. "compute" or "loadbalancer").
>  > +1. The API is known as "compute" in api-ref, so the policy should be for
> "compute", etc.
>
> Agree on mapping the policy name to api-ref as much as possible. Other
> than policy names having 'os-', we also have 'os-' in resource names in nova
> API URLs, like /os-agents, /os-aggregates, etc. (almost every resource
> except servers and flavors). As we cannot get rid of those in the API URLs,
> do we need to keep the same in policy naming too? Or we could have a policy
> name like compute:agents:create/post, but that mismatches api-ref, where the
> agents resource URL is os-agents.
>

Good question. I think this depends on how the service does policy
enforcement.

I know we did something like this in keystone, which required policy names
and method names to be the same:

  "identity:list_users": "..."

Because the initial implementation of policy enforcement used a decorator
like this:

  from keystone import controller

  @controller.protected()
  def list_users(self):
      ...

Having the policy name the same as the method name made it easier for the
decorator implementation to resolve the policy needed to protect the API
because it just looked at the name of the wrapped method. The advantage was
that it was easy to implement new APIs because you only needed to add a
policy, implement the method, and make sure you decorate the implementation.

While this worked, we are moving away from it entirely. The decorator
implementation was ridiculously complicated. Only a handful of keystone
developers understood it. With the addition of system-scope, it would have
only become more convoluted. It also encouraged a copy-paste pattern
(e.g., so long as I wrap my method with this decorator, things should just
work, right?). Instead, we're calling enforcement within the
controller implementation to ensure things are easier to understand. It
requires developers to be cognizant of how different token types affect the
resources within an API. That said, coupling the policy name to the method
name is no longer a requirement for keystone.

Hopefully, that helps explain why we needed them to match.
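A toy sketch of the difference may help. The registry, function, and role names below are all hypothetical, not keystone's actual enforcement code; the point is only that passing the policy name explicitly at the call site decouples it from the method name:

```python
# Explicit enforcement: the policy name is an argument at the call site,
# so it no longer has to match the name of the wrapped method.
POLICY_REGISTRY = {
    "identity:list_users": lambda creds: "reader" in creds["roles"],
}

def enforce(action, creds):
    """Raise if the credentials fail the named policy check."""
    if not POLICY_REGISTRY[action](creds):
        raise PermissionError(action)

def list_users(creds):
    # API method name and policy name are now decoupled; renaming either
    # one no longer silently breaks enforcement.
    enforce("identity:list_users", creds)
    return ["user-a", "user-b"]

assert list_users({"roles": ["reader"]}) == ["user-a", "user-b"]
```

With the decorator approach, by contrast, the policy name had to be derivable from the wrapped method's name, which is exactly the coupling being dropped.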


>
> Also we have action APIs (I know from nova, not sure about other services)
> like POST /servers/{server_id}/action {addSecurityGroup}, and their current
> policy names are all inconsistent. A few include their resource name, like
> "os_compute_api:os-flavor-access:add_tenant_access", a few have 'action' in
> the policy name, like "os_compute_api:os-admin-actions:reset_state", and a
> few use the action name directly, like "os_compute_api:os-console-output".
>

Since the actions API relies on the request body and uses a single HTTP
method, does it make sense to have the HTTP method in the policy name? It
feels redundant, and we might be able to establish a convention that's more
meaningful for things like action APIs. It looks like cinder has a similar
pattern [0].

[0]
https://developer.openstack.org/api-ref/block-storage/v3/index.html#

Re: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)

2018-09-20 Thread Lance Bragstad
On Thu, Sep 20, 2018 at 7:19 PM Sean McGinnis  wrote:

> On Thu, Sep 20, 2018 at 03:46:43PM -0600, Doug Hellmann wrote:
> > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +:
> > > tl;dr: The openstack, openstack-dev, openstack-sigs and
> > > openstack-operators mailing lists (to which this is being sent) will
> > > be replaced by a new openstack-disc...@lists.openstack.org mailing
> > > list.
> >
> > Since last week there was some discussion of including the openstack-tc
> > mailing list among these lists to eliminate confusion caused by the fact
> > that the list is not configured to accept messages from all subscribers
> > (it's meant to be used for us to make sure TC members see meeting
> > announcements).
> >
> > I'm inclined to include it and either use a direct mailing or the
> > [tc] tag on the new discuss list to reach TC members, but I would
> > like to hear feedback from TC members and other interested parties
> > before calling that decision made. Please let me know what you think.
> >
> > Doug
> >
>
> This makes sense to me. I would rather have any discussions where everyone
> is
> likely to see them than to continue with the current separation.
>

+1


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] noop role in openstack

2018-09-20 Thread Lance Bragstad
On Thu, Sep 20, 2018 at 12:22 AM Adrian Turjak 
wrote:

> For Adam's benefit continuing this a bit in email:
>
> regarding the noop role:
>
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-20.log.html#t2018-09-20T04:13:43
>
> The first benefit of such a role (in the given policy scenario) is that
> you can now give a user explicit scope on a project (but they can't do
> anything) and then use that role for Swift ACLs with full knowledge they
> can't do anything other than auth, scope to the project, and then
> whatever the ACLs let them do. An example use case being: "a user that
> can ONLY talk to a specific container and NOTHING else in OpenStack or
> Swift" which is really useful if you want to use a single project for a
> lot of websites, or backups, or etc.
>
> Or in my MFA case, a role I can use when wanting a user to still be able
> to auth and setup their MFA, but not actually touch any resources until
> they have MFA setup at which point you give them back their real member
> role.
>
> It all relies on leaving no policy rules 'empty' unless those rules (and
> their API) really are safe for a noop role. And by empty I don't mean
> empty, really I mean "any role on a project". Because that's painful to
> then work with.
>
> With the default policies in Nova (and most other projects), you can't
> actually make proper use of Swift ACLs, because having any role on a
> project gives you access to all the resources. Like say:
> https://github.com/openstack/nova/blob/master/nova/policies/base.py#L31
>
> ^ that rule implies, if you are scoped to the project, don't care about
> the role, you can do anything to the resources. That doesn't work for
> anything role specific. Such rules would need to be:
> "is_admin:True or (role:member and project_id:%(project_id)s)"
>
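The difference between the two rule shapes can be sketched with plain Python stand-ins (these functions only mimic how oslo.policy would evaluate the two check strings; they are illustrations, not the real evaluator):

```python
def check_any_role(creds, target):
    """Mimics a rule like "is_admin:True or project_id:%(project_id)s":
    any role scoped to the project passes."""
    return creds["is_admin"] or creds["project_id"] == target["project_id"]

def check_member_role(creds, target):
    """Mimics "is_admin:True or (role:member and project_id:%(project_id)s)":
    the member role is required explicitly."""
    return creds["is_admin"] or (
        "member" in creds["roles"] and creds["project_id"] == target["project_id"]
    )

noop_user = {"is_admin": False, "roles": ["noop"], "project_id": "p1"}
target = {"project_id": "p1"}

# Under the common "any role on the project" default, the noop user can
# still reach the resource...
assert check_any_role(noop_user, target)
# ...but an explicit role requirement actually locks them out, which is
# what makes a noop role useful for Swift ACLs and MFA-setup flows.
assert not check_member_role(noop_user, target)
```

This is the whole argument in miniature: the noop role only means something if the default rules stop treating "any role on the project" as sufficient.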
> If we stop with this assumption that "any role" on a project works,
> suddenly policy becomes more powerful and the roles are actually useful
> beyond admin vs not admin. System scope will help, but then we'll still
> only have system scope, admin on a project, and not admin on a project,
> which still makes the role mostly pointless.
>

Kind of. System-scope is only half the equation for fixing RBAC because it
gives developers an RBAC target that isn't project-scoped, which they can use
to protect APIs. When you combine that with default roles (admin,
member, and reader) [0] then you can start building a matrix, per se.

[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html


>
> We as a community need to stop with this assumption (that "any role" on
> a project works), because it hurts us in regards to actually useful
> RBAC. Yes deployers can edit the policy to avoid the any role on a
> project issue (we have), but it's a huge amount of work to figure out
> that we could all work together and fix upstream.
>

As I'm sure you know, even rolling custom policy files might not be enough.
Despite an override, there are APIs that still check for 'admin' roles.


>
> Part of that work is actually happening. With the default roles that
> Keystone is defining, and system scope. We can then start updating all
> the project default policies to actually require those roles explicitly,
> but that effort, needs us to get everyone on board...
>

That's the idea. We're trying to build that out in keystone now so that
other projects have a template to follow.


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-19 Thread Lance Bragstad
johnsom (from octavia) had a good idea, which was to use the service types
that are defined already [0].

I like this for three reasons, specifically. First, it's already a known
convention for services that we can just reuse. Second, it includes a
spacing convention (e.g. load-balancer vs load_balancer). Third,
it's relatively short since it doesn't include "os" or "api".

So long as there isn't any objection to that, we can start figuring out how
we want to do the method and resource parts. I pulled some policies into a
place where I could try and query them for specific patterns and existing
usage [1]. With the representation that I have (nova, neutron, glance,
cinder, keystone, mistral, and octavia):

- *create* is favored over post (105 occurrences to 7)
- *list* is favored over get_all (74 occurrences to 28)
- *update* is favored over put/patch (91 occurrences to 10)

From this perspective, using the HTTP method might be slightly redundant
for projects using the DocumentedRuleDefault object from oslo.policy since
it contains the URL and method for invoking the policy. It also might
differ depending on the service implementing the API (some might use put
instead of patch to update a resource). Conversely, using the HTTP method
in the policy name itself doesn't require use of DocumentedRuleDefault,
although its usage is still recommended.

Thoughts on using create, list, update, and delete as opposed to post, get,
put, patch, and delete in the naming convention?

[0] https://service-types.openstack.org/service-types.json
[1]
https://gist.github.com/lbragstad/5000b46f27342589701371c88262c35b#file-policy-names-yaml
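The convention being converged on here can be sketched as a simple name builder. The verb table below reflects the survey result above (create/list/update favored over the raw HTTP methods); the function and its signature are illustrative, not part of any library:

```python
# Map (HTTP method, is-collection) pairs to the verb style the survey
# found dominant across nova, neutron, glance, cinder, keystone, etc.
HTTP_TO_VERB = {
    ("POST", False): "create",
    ("GET", True): "list",
    ("GET", False): "get",
    ("PUT", False): "update",
    ("PATCH", False): "update",
    ("DELETE", False): "delete",
}

def policy_name(service_type, resource, http_method, on_collection=False):
    """Build a policy name as <service-type>:<resource>:<verb>, using the
    official service types (e.g. "load-balancer", not "os_load-balancer_api")."""
    verb = HTTP_TO_VERB[(http_method, on_collection)]
    return f"{service_type}:{resource}:{verb}"

assert policy_name("load-balancer", "loadbalancer", "POST") == "load-balancer:loadbalancer:create"
assert policy_name("compute", "servers", "GET", on_collection=True) == "compute:servers:list"
```

Note the verb table also absorbs the PUT-vs-PATCH inconsistency mentioned above: both map to "update", so the policy name stays stable even if a service changes which method it uses.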

On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  wrote:

> If we consider dropping "os", should we entertain dropping "api", too? Do
> we have a good reason to keep "api"?
>
> I wouldn't be opposed to simple service types (e.g "compute" or
> "loadbalancer").
>
> On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg 
> wrote:
>
>> I am generally opposed to needlessly prefixing things with "os".
>>
>> I would advocate to drop it.
>>
>>
>> On Fri, Sep 14, 2018, 20:17 Lance Bragstad  wrote:
>>
>>> Ok - yeah, I'm not sure what the history behind that is either...
>>>
>>> I'm mainly curious if that's something we can/should keep or if we are
>>> opposed to dropping 'os' and 'api' from the convention (e.g.
>>> load-balancer:loadbalancer:post as opposed to
>>> os_load-balancer_api:loadbalancer:post) and just sticking with the
>>> service-type?
>>>
>>> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson 
>>> wrote:
>>>
>>>> I don't know for sure, but I assume it is short for "OpenStack" and
>>>> prefixing OpenStack policies vs. third party plugin policies for
>>>> documentation purposes.
>>>>
>>>> I am guilty of borrowing this from existing code examples[0].
>>>>
>>>> [0]
>>>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>>>>
>>>> Michael
>>>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
>>>> wrote:
>>>> >
>>>> >
>>>> >
>>>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
>>>> wrote:
>>>> >>
>>>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>>>> >> which maps to the "os-<service>-api:<resource>:<method>" format.
>>>> >
>>>> >
>>>> > Thanks for explaining the justification, Michael.
>>>> >
>>>> > I'm curious if anyone has context on the "os-" part of the format?
>>>> I've seen that pattern in a couple different projects. Does anyone know
>>>> about its origin? Was it something we converted to our policy names because
>>>> of API names/paths?
>>>> >
>>>> >>
>>>> >>
>>>> >> I selected it as it uses the service-type[1], references the API
>>>> >> resource, and then the method. So it maps well to the API
>>>> reference[2]
>>>> >> for the service.
>>>> >>
>>>> >> [0]
>>>> https://docs.openstack.org/octavia/latest/configuration/policy.html
>>>> >> [1] https://service-types.openstack.org/
>>>> >> [2]
>>>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>>>> >>
>>>> >> Michael
>>>> >> On Wed, Sep 12, 201


Re: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals

2018-09-18 Thread Lance Bragstad
On Tue, Sep 18, 2018 at 12:17 PM Doug Hellmann 
wrote:

> Excerpts from Lance Bragstad's message of 2018-09-18 11:56:22 -0500:
> > On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann 
> > wrote:
> >
> > > Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600:
> > > > Hi,
> > > >
> > > > Based upon the discussion we had at the TC session in the afternoon,
> > > > I'm starting to draft a patch to add a long-term goal mechanism into
> > > > governance. It is by no means a complete solution at the moment (I
> > > > still have not thought through the execution method to ensure the
> > > > outcome), but feel free to provide your feedback at
> > > > https://review.openstack.org/#/c/602799/ .
> > > >
> > > > --
> > > > Zhipeng (Howard) Huang
> > >
> > > [I commented on the patch, but I'll also reply here for anyone not
> > > following the review.]
> > >
> > > I'm glad to see the increased interest in goals. Before we change
> > > the existing process, though, I would prefer to see engagement with
> > > the current process. We can start by having SIGs and WGs update the
> > > etherpad where we track goal proposals
> > > (https://etherpad.openstack.org/p/community-goals) and then we can
> > > see if we actually need to manage goals across multiple release
> > > cycles as a single unit.
> > >
> >
> > Depending on the official outcome of this resolution, I was going to try
> > and use the granular RBAC work to test out this process.
> >
> > I can still do that, or I can hold off if appropriate.
>
> The Python 3 transition has been going on for 5-6 years now, and
> started before we had even the current goals process in place. I
> think it's completely possible for us to do work that takes a long
> time without making the goals process more complex.  Let's try to
> keep the process lightweight, and make incremental changes to it
> based on real shortcomings (adding champions is one example of a
> tweak that made a significant improvement).
>
> It may be easy to continue to prioritize a follow-up part of a
> multi-part goal we have already started, but I would rather we don't
> *require* that in case we have some other significant work that we
> have to rally folks to complete (I'm thinking of things like
> addressing security issues, some new technical challenge that comes
> up, or other community needs that we don't foresee at the start of
> a multi-part goal). We designed the current process to encourage
> those sorts of conversations to happen on a regular basis, after
> all, so I'm very happy to see interest in using it. But let's try
> to use what we have before we assume it's broken.
>

That's fair.


>
> I think you could (and should) start by describing the stages you
> anticipate for the RBAC stuff, and then we can see which parts need
> to be done before we adopt a goal, which part are goals, and whether
> enough momentum picks up that we don't need to make later parts
> formal goals.
>

Do you have a particular medium in mind?


>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals

2018-09-18 Thread Lance Bragstad
On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann 
wrote:

> Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600:
> > Hi,
> >
> > Based upon the discussion we had at the TC session in the afternoon, I'm
> > starting to draft a patch to add a long-term goal mechanism into
> > governance. It is by no means a complete solution at the moment (I still
> > have not thought through the execution method to ensure the outcome), but
> > feel free to provide your feedback at https://review.openstack.org/#/c/602799/ .
> >
> > --
> > Zhipeng (Howard) Huang
>
> [I commented on the patch, but I'll also reply here for anyone not
> following the review.]
>
> I'm glad to see the increased interest in goals. Before we change
> the existing process, though, I would prefer to see engagement with
> the current process. We can start by having SIGs and WGs update the
> etherpad where we track goal proposals
> (https://etherpad.openstack.org/p/community-goals) and then we can
> see if we actually need to manage goals across multiple release
> cycles as a single unit.
>

Depending on the official outcome of this resolution, I was going to try
and use the granular RBAC work to test out this process.

I can still do that, or I can hold off if appropriate.


>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-17 Thread Lance Bragstad
On Mon, Sep 17, 2018 at 1:42 PM Mohammed Naser  wrote:

> Hi,
>
> On that note, is there any way to get an 'invite' onto those channels?
>
> Any information about the foundation side of things about the
> 'official' channels?
>

I actually have a question about this as well. During the TC discussion
last Friday there was representation from the Foundation in the room. I
thought I remembered someone (annabelleB?) saying there were known issues
(technical or otherwise) regarding the official channels spun up by the
Foundation.

Does anyone know what issues were being referred to here?


>
> Thanks,
> Mohammed
> On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba  wrote:
> >
> > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza 
> wrote:
> > >
> > >
> > >
> > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley  a
> écrit :
> > >>
> > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote:
> > >> [...]
> > >> > - What is the problem joining Wechat will solve (keeping in mind the
> > >> > language barrier)?
> > >>
> > >> As I understand it, the suggestion is that mere presence of project
> > >> leadership in venues where this emerging subset of our community
> > >> gathers would provide a strong signal that we support them and care
> > >> about their experience with the software.
> > >>
> > >> > - Isn't this problem already solved for other languages with
> > >> > existing initiatives like local ambassadors and i18n team? Why
> > >> > aren't these relevant?
> > >> [...]
> > >>
> > >> It seems like there are at least a couple of factors at play here:
> > >> first the significant number of users and contributors within
> > >> mainland China compared to other regions (analysis suggests there
> > >> were nearly as many contributors to the Rocky release from China as
> > >> the USA), but second there may be facets of Chinese culture which
> > >> make this sort of demonstrative presence a much stronger signal than
> > >> it would be in other cultures.
> > >>
> > >> > - Pardon my ignorance here, what is the problem with email? (I
> > >> > understand some chat systems might be blocked, I thought emails
> > >> > would be fine, and the lowest common denominator).
> > >>
> > >> Someone in the TC room (forgive me, I don't recall who now, maybe
> > >> Rico?) asserted that Chinese contributors generally only read the
> > >> first message in any given thread (perhaps just looking for possible
> > >> announcements?) and that if they _do_ attempt to read through some
> > >> of the longer threads they don't participate in them because the
> > >> discussion is presumed to be over and decisions final by the time
> > >> they "reach the end" (I guess not realizing that it's perfectly fine
> > >> to reply to a month-old discussion and try to help alter course on
> > >> things if you have an actual concern?).
> > >>
> > >
> > > While I understand the technical issues that could arise from using
> > > IRC in China, I still don't get why opening the gates and making WeChat
> > > yet another official channel would prevent our community from fragmenting.
> > >
> > > Truly the usage of IRC is certainly questionable, but if we have
> > > multiple ways to discuss, I just doubt we could keep ourselves from
> > > siloing into groups based on our personal preferences.
> > > Either we consider the new channels as being only for southbound
> > > communication, or we envisage the possibility, as a community, of
> > > migrating from IRC to elsewhere (I'm particularly not a fan of the
> > > latter, so I would challenge it, but I can understand the reasons).
> > >
> > > -Sylvain
> > >
> >
> > Objectively, I don't see a way to endorse something other than IRC
> > without some form of collective presence on more than just Wechat to
> > keep the message intact. IRC is the official messaging platform, for
> > whatever that's worth these days. However, at present, it makes less
> > and less sense to explicitly eschew other outlets in its favor. From a
> > Chef OpenStack perspective, the common medium is, perhaps not
> > surprisingly, code review. Everything else evolved over time to be
> > southbound paths to the code, including most of the conversation
> > taking place there as opposed to IRC.
> >
> > The continuation of this thread only confirms that there is already
> > fragmentation in the community, and that people on each side of the
> > void genuinely want to close that gap. At this point, the thing to do
> > is prevent further fragmentation of the intent. It is, however, far
> > easier to bikeshed over the platform of choice.
> >
> > At present, it seems a collective presence is forming ad hoc,
> > regardless of any such resolution. With some additional coordination
> > and planning, I think that there could be something that could scale
> > beyond one or two outlets.
> >
> > Best,
> > Samuel
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> 

[openstack-dev] [keystone] Rocky Retrospective

2018-09-17 Thread Lance Bragstad
This is typically something we do in-person during the PTG, but due to
weather and travel approval we didn't have great representation last week.

That said, let's try to do an asynchronous retrospective to gather feedback
regarding the last cycle. Afterwards, we can try to meet to go through
specific things, if needed. I've created a doodle to see if we can get a
time lined up [0]. The retrospective board [1] is available and waiting for
your feedback! The board should be public, but if you need access to add
cards, just ping me.

I'll collect results from the doodle on Friday and see what times work.

Thanks,

Lance

[0] https://doodle.com/poll/5vkztz9sumkbzp4h
[1] https://trello.com/b/af8vmDPs/keystone-rocky-retrospective
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-16 Thread Lance Bragstad
If we consider dropping "os", should we entertain dropping "api", too? Do
we have a good reason to keep "api"?

I wouldn't be opposed to simple service types (e.g., "compute" or
"loadbalancer").

On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg 
wrote:

> I am generally opposed to needlessly prefixing things with "os".
>
> I would advocate to drop it.
>
>
> On Fri, Sep 14, 2018, 20:17 Lance Bragstad  wrote:
>
>> Ok - yeah, I'm not sure what the history behind that is either...
>>
>> I'm mainly curious if that's something we can/should keep or if we are
>> opposed to dropping 'os' and 'api' from the convention (e.g.
>> load-balancer:loadbalancer:post as opposed to
>> os_load-balancer_api:loadbalancer:post) and just sticking with the
>> service-type?
>>
>> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson 
>> wrote:
>>
>>> I don't know for sure, but I assume it is short for "OpenStack" and
>>> prefixing OpenStack policies vs. third party plugin policies for
>>> documentation purposes.
>>>
>>> I am guilty of borrowing this from existing code examples[0].
>>>
>>> [0]
>>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>>>
>>> Michael
>>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
>>> wrote:
>>> >
>>> >
>>> >
>>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
>>> wrote:
>>> >>
>>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>>> >> which maps to the "os-<service>-api:<resource>:<method>" format.
>>> >
>>> >
>>> > Thanks for explaining the justification, Michael.
>>> >
>>> > I'm curious if anyone has context on the "os-" part of the format?
>>> I've seen that pattern in a couple different projects. Does anyone know
>>> about its origin? Was it something we converted to our policy names because
>>> of API names/paths?
>>> >
>>> >>
>>> >>
>>> >> I selected it as it uses the service-type[1], references the API
>>> >> resource, and then the method. So it maps well to the API reference[2]
>>> >> for the service.
>>> >>
>>> >> [0]
>>> https://docs.openstack.org/octavia/latest/configuration/policy.html
>>> >> [1] https://service-types.openstack.org/
>>> >> [2]
>>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>>> >>
>>> >> Michael
>>> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
>>> >> >
>>> >> > So +1
>>> >> >
>>> >> >
>>> >> >
>>> >> > Tim
>>> >> >
>>> >> >
>>> >> >
>>> >> > From: Lance Bragstad 
>>> >> > Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> >> > Date: Wednesday, 12 September 2018 at 20:43
>>> >> > To: "OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-...@lists.openstack.org>, OpenStack Operators <
>>> openstack-operators@lists.openstack.org>
>>> >> > Subject: [openstack-dev] [all] Consistent policy names
>>> >> >
>>> >> >
>>> >> >
>>> >> > The topic of having consistent policy names has popped up a few
>>> times this week. Ultimately, if we are to move forward with this, we'll
>>> need a convention. To help with that a little bit I started an etherpad [0]
>>> that includes links to policy references, basic conventions *within* that
>>> service, and some examples of each. I got through quite a few projects this
>>> morning, but there are still a couple left.
>>> >> >
>>> >> >
>>> >> >
>>> >> > The idea is to look at what we do today and see what conventions we
>>> can come up with to move towards, which should also help us determine how
>>> much each convention is going to impact services (e.g. picking a convention
>>> that will cause 70% of services to rename policies).
>>> >> >
>>> >> >
>>> >> >
>>> >> > Please have a look and we can discuss conventions in thi

Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
Ok - yeah, I'm not sure what the history behind that is either...

I'm mainly curious if that's something we can/should keep or if we are
opposed to dropping 'os' and 'api' from the convention (e.g.
load-balancer:loadbalancer:post as opposed to
os_load-balancer_api:loadbalancer:post) and just sticking with the
service-type?
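
To make the convention concrete, a tiny illustrative sketch (the function
and the dict keys are hypothetical, not part of any OpenStack library) that
splits a name following the proposed "<service-type>:<resource>:<method>"
layout into the pieces Michael describes below:

```python
def split_policy_name(name):
    # Sketch: break a policy name such as
    # "load-balancer:loadbalancer:post" into the service-type, the API
    # resource, and the HTTP method, per the convention under discussion.
    service_type, resource, method = name.split(':')
    return {'service_type': service_type,
            'resource': resource,
            'method': method}
```

This is part of the appeal of the shorter form: each segment maps directly
to the service-type registry and the API reference.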

On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson  wrote:

> I don't know for sure, but I assume it is short for "OpenStack" and
> prefixing OpenStack policies vs. third party plugin policies for
> documentation purposes.
>
> I am guilty of borrowing this from existing code examples[0].
>
> [0]
> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>
> Michael
> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
> wrote:
> >
> >
> >
> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
> wrote:
> >>
> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> >> which maps to the "os-<service>-api:<resource>:<method>" format.
> >
> >
> > Thanks for explaining the justification, Michael.
> >
> > I'm curious if anyone has context on the "os-" part of the format? I've
> seen that pattern in a couple different projects. Does anyone know about
> its origin? Was it something we converted to our policy names because of
> API names/paths?
> >
> >>
> >>
> >> I selected it as it uses the service-type[1], references the API
> >> resource, and then the method. So it maps well to the API reference[2]
> >> for the service.
> >>
> >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> >> [1] https://service-types.openstack.org/
> >> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
> >>
> >> Michael
> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >> >
> >> > So +1
> >> >
> >> >
> >> >
> >> > Tim
> >> >
> >> >
> >> >
> >> > From: Lance Bragstad 
> >> > Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> >> > Date: Wednesday, 12 September 2018 at 20:43
> >> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-...@lists.openstack.org>, OpenStack Operators <
> openstack-operators@lists.openstack.org>
> >> > Subject: [openstack-dev] [all] Consistent policy names
> >> >
> >> >
> >> >
> >> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >> >
> >> >
> >> >
> >> > The idea is to look at what we do today and see what conventions we
> can come up with to move towards, which should also help us determine how
> much each convention is going to impact services (e.g. picking a convention
> that will cause 70% of services to rename policies).
> >> >
> >> >
> >> >
> >> > Please have a look and we can discuss conventions in this thread. If
> we come to agreement, I'll start working on some documentation in
> oslo.policy so that it's somewhat official before we start renaming
> policies.
> >> >
> >> >
> >> >
> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >> >
> >> > ___
> >> > OpenStack-operators mailing list
> >> > OpenStack-operators@lists.openstack.org
> >> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson  wrote:

> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> which maps to the "os-<service>-api:<resource>:<method>" format.
>

Thanks for explaining the justification, Michael.

I'm curious if anyone has context on the "os-" part of the format? I've
seen that pattern in a couple different projects. Does anyone know about
its origin? Was it something we converted to our policy names because of
API names/paths?


>
> I selected it as it uses the service-type[1], references the API
> resource, and then the method. So it maps well to the API reference[2]
> for the service.
>
> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> [1] https://service-types.openstack.org/
> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>
> Michael
> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >
> > So +1
> >
> >
> >
> > Tim
> >
> >
> >
> > From: Lance Bragstad 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> > Date: Wednesday, 12 September 2018 at 20:43
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-...@lists.openstack.org>, OpenStack Operators <
> openstack-operators@lists.openstack.org>
> > Subject: [openstack-dev] [all] Consistent policy names
> >
> >
> >
> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >
> >
> >
> > The idea is to look at what we do today and see what conventions we can
> come up with to move towards, which should also help us determine how much
> each convention is going to impact services (e.g. picking a convention that
> will cause 70% of services to rename policies).
> >
> >
> >
> > Please have a look and we can discuss conventions in this thread. If we
> come to agreement, I'll start working on some documentation in oslo.policy
> so that it's somewhat official before we start renaming policies.
> >
> >
> >
> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson  wrote:

> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> which maps to the "os--api::" format.
>

Thanks for explaining the justification, Michael.

I'm curious if anyone has context on the "os-" part of the format? I've
seen that pattern in a couple different projects. Does anyone know about
its origin? Was it something we converted to our policy names because of
API names/paths?


>
> I selected it as it uses the service-type[1], references the API
> resource, and then the method. So it maps well to the API reference[2]
> for the service.
>
> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> [1] https://service-types.openstack.org/
> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>
> Michael
> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >
> > So +1
> >
> >
> >
> > Tim
> >
> >
> >
> > From: Lance Bragstad 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> > Date: Wednesday, 12 September 2018 at 20:43
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, OpenStack Operators <
> openstack-operat...@lists.openstack.org>
> > Subject: [openstack-dev] [all] Consistent policy names
> >
> >
> >
> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >
> >
> >
> > The idea is to look at what we do today and see what conventions we can
> come up with to move towards, which should also help us determine how much
> each convention is going to impact services (e.g. picking a convention that
> will cause 70% of services to rename policies).
> >
> >
> >
> > Please have a look and we can discuss conventions in this thread. If we
> come to agreement, I'll start working on some documentation in oslo.policy
> > so that it's somewhat official before we start renaming policies.
> >
> >
> >
> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials)

2018-09-12 Thread Lance Bragstad
On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley  wrote:

> On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote:
> [...]
> > So I encourage all elected TC members to work directly with the
> > various SIGs to figure out their top issue and then work on
> > managing those deliverables across the community because the TC is
> > particularly well suited to do so given the elected position.
> [...]
>
> I almost agree with you. I think the OpenStack TC members should be
> actively engaged in recruiting and enabling interested people in the
> community to do those things, but I don't think such work should be
> solely the domain of the TC and would hate to give the impression
> that you must be on the TC to have such an impact.
>

I agree that relaying that type of impression would be negative, but I'm
not sure this specifically would do that. I think we've been good about
letting people step up to drive initiatives without being in an elected
position [0].

IMHO, the point Matt is making here is more about ensuring we have people
to do what we've agreed upon, as a community, as being mission critical.
Enablement is imperative, but no matter how good we are at it, sometimes we
really just need hands to do the work.

[0] Of the six goals agreed upon since we've implemented champions in
Queens, five of them have been championed by non-TC members (Chandan
championed two, in back-to-back releases).


> --
> Jeremy Stanley
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [all] Consistent policy names

2018-09-12 Thread Lance Bragstad
The topic of having consistent policy names has popped up a few times this
week. Ultimately, if we are to move forward with this, we'll need a
convention. To help with that a little bit I started an etherpad [0] that
includes links to policy references, basic conventions *within* that
service, and some examples of each. I got through quite a few projects this
morning, but there are still a couple left.

The idea is to look at what we do today and see what conventions we can
come up with to move towards, which should also help us determine how much
each convention is going to impact services (e.g. picking a convention that
will cause 70% of services to rename policies).

Please have a look and we can discuss conventions in this thread. If we
come to agreement, I'll start working on some documentation in oslo.policy
so that it's somewhat official before we start renaming policies.

[0] https://etherpad.openstack.org/p/consistent-policy-names
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [all] Consistent policy names

2018-09-12 Thread Lance Bragstad
The topic of having consistent policy names has popped up a few times this
week. Ultimately, if we are to move forward with this, we'll need a
convention. To help with that a little bit I started an etherpad [0] that
includes links to policy references, basic conventions *within* that
service, and some examples of each. I got through quite a few projects this
morning, but there are still a couple left.

The idea is to look at what we do today and see what conventions we can
come up with to move towards, which should also help us determine how much
each convention is going to impact services (e.g. picking a convention that
will cause 70% of services to rename policies).

Please have a look and we can discuss conventions in this thread. If we
come to agreement, I'll start working on some documentation in oslo.policy
so that it's somewhat official before we start renaming policies.

[0] https://etherpad.openstack.org/p/consistent-policy-names
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] about unified limits

2018-09-11 Thread Lance Bragstad
Extra eyes on the API would be appreciated. We're also close to the point
where we can start incorporating oslo.limit into services, so preparing
those changes might be useful, too.

One of the outcomes from yesterday's session was that Jay and Mel (from
nova) were going to work out some examples we could use to finish up the
enforcement code in oslo.limit. Helping out with that or picking it up
would certainly help move the ball forward in nova.




On Tue, Sep 11, 2018 at 1:15 AM Jaze Lee  wrote:

> I recommend li...@unitedstack.com to join in and help move this work
> forward. Maybe first we should make sure the keystone unified limits API
> is really OK, or work on something else?
>
> On Sat, Sep 8, 2018 at 2:35 AM, Lance Bragstad wrote:
> >
> > That would be great! I can break down the work a little bit to help
> describe where we are at with different parts of the initiative. Hopefully
> it will be useful for your colleagues in case they haven't been closely
> following the effort.
> >
> > # keystone
> >
> > Based on the initial note in this thread, I'm sure you're aware of
> keystone's status with respect to unified limits. But to recap, the initial
> implementation landed in Queens and targeted flat enforcement [0]. During
> the Rocky PTG we sat down with other services and a few operators to
> explain the current status in keystone and if either developers or
> operators had feedback on the API specifically. Notes were captured in
> etherpad [1]. We spent the Rocky cycle fixing usability issues with the API
> [2] and implementing support for a hierarchical enforcement model [3].
> >
> > At this point keystone is ready for services to start consuming the
> unified limits work. The unified limits API is still marked as experimental
> it will likely stay that way until we have at least one project using
> unified limits. We can use that as an opportunity to do a final flush of
> any changes that need to be made to the API before fully supporting it. The
> keystone team expects that to be a quick transition, as we don't want to
> keep the API hanging in an experimental state. It's really just a
> safeguard to make sure we have the opportunity to use it in another service
> before fully committing to the API. Ultimately, we don't want to
> prematurely mark the API as supported when other services aren't even using
> it yet, and then realize it has issues that could have been fixed prior to
> the adoption phase.
> >
> > # oslo.limit
> >
> > In parallel with the keystone work, we created a new library to aid
> services in consuming limits. Currently, the sole purpose of oslo.limit is
> to abstract project and project hierarchy information away from the
> service, so that services don't have to reimplement client code to
> understand project trees, which could arguably become complex and lead to
> inconsistencies in UX across services.
> >
> > Ideally, a service should be able to pass some relatively basic
> information to oslo.limit and expect an answer on whether or not usage for
> that claim is valid. For example, here is a project ID, resource name, and
> resource quantity, tell me if this project is over its associated limit or
> default limit.
> >
> > We're currently working on implementing the enforcement bits of
> oslo.limit, which requires making API calls to keystone in order to
> retrieve the deployed enforcement model, limit information, and project
> hierarchies. Then it needs to reason about those things and calculate usage
> from the service in order to determine if the request claim is valid or
> not. There are patches up for this work, and reviews are always welcome [4].
> >
> > Note that we haven't released oslo.limit yet, but once the basic
> enforcement described above is implemented we will. Then services can
> officially pull it into their code as a dependency and we can work out
> remaining bugs in both keystone and oslo.limit. Once we're confident in
> both the API and the library, we'll bump oslo.limit to version 1.0 at the
> same time we graduate the unified limits API from "experimental" to
> "supported". Note that oslo libraries <1.0 are considered experimental,
> which fits nicely with the unified limit API being experimental as we shake
> out usability issues in both pieces of software.
> >
> > # services
> >
> > Finally, we'll be in a position to start integrating oslo.limit into
> services. I imagine this to be a coordinated effort between keystone, oslo,
> and service developers. I do have a patch up that adds a conceptual
> overview for developers consuming oslo.limit [5], which renders into [6].
> >
> > To be honest, this is going to be a very large piece of work and it's
> going to require 
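The claim check described in that message — hand oslo.limit a project ID, a resource name, and a requested quantity, and get back whether the claim fits within the project's limit or the registered default — can be sketched as below. This is a hypothetical illustration of flat enforcement only; the names and signatures are made up and are not the oslo.limit API:

```python
# Hypothetical flat-enforcement sketch -- not the real oslo.limit API.
# Registered (default) limits apply unless a project has an override.
REGISTERED_LIMITS = {"cores": 20}             # service-wide defaults
PROJECT_LIMITS = {("acme", "cores"): 10}      # per-project overrides
CURRENT_USAGE = {("acme", "cores"): 8}        # usage reported by the service

def check_claim(project_id, resource, requested):
    """Return True if usage + requested stays within the effective limit."""
    limit = PROJECT_LIMITS.get((project_id, resource),
                               REGISTERED_LIMITS[resource])
    usage = CURRENT_USAGE.get((project_id, resource), 0)
    return usage + requested <= limit

print(check_claim("acme", "cores", 2))   # True: 8 + 2 <= 10
print(check_claim("acme", "cores", 3))   # False: 8 + 3 > 10
```

Roughly speaking, a hierarchical model adds a tree walk on top of this, with a parent project's limit also constraining usage across its children.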

Re: [openstack-dev] [election][tc] Opinion about 'PTL' tooling

2018-09-10 Thread Lance Bragstad
I agree in that it's dependent on what metrics you think accurately
showcase project health. Is it the number of contributions? The number of
unique contributors? Diversity across participating organizations?
Completion ratios of blueprints or committed fixes over bugs opened? I
imagine different projects will have different opinions on this, but it
would be interesting to know what those opinions are.

I think if you can reasonably justify a metric as an accurate
representation of health, then it makes sense to try and automate it.

This jogged my memory and it might not be a valid metric of health, but I
liked the idea after I heard another project doing it (I think it was
swift). If you could recognize contributions (loosely defined here to be
reviews, patches, bug triage) for an individual and if you noticed those
contributions dropping off after a period of time, then you (as a
maintainer or PTL of a project) could reach out to the individual directly.
This assumes the reason isn't obvious and feels like it is more meant to
track lost contributors.
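That kind of check is straightforward to automate once contribution counts per person per period are available; a minimal sketch (the data shape and threshold here are made up):

```python
# Hypothetical data: contributions (reviews + patches + bug triage) per
# contributor per month, newest month last.
history = {
    "alice": [12, 10, 11, 9],
    "bob": [8, 7, 1, 0],     # clearly tailing off
}

def dropped_off(counts, window=2, factor=0.25):
    """Flag a contributor whose recent average fell below a fraction
    of their earlier average."""
    earlier, recent = counts[:-window], counts[-window:]
    if not earlier or sum(earlier) == 0:
        return False
    earlier_avg = sum(earlier) / len(earlier)
    recent_avg = sum(recent) / len(recent)
    return recent_avg < factor * earlier_avg

flagged = [name for name, counts in history.items() if dropped_off(counts)]
print(flagged)  # ['bob']
```

A maintainer could run something like this over gerrit/launchpad data and reach out to anyone flagged, instead of discovering the drop-off months later.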

On Mon, Sep 10, 2018 at 3:27 PM Samuel Cassiba  wrote:

> On Mon, Sep 10, 2018 at 6:07 AM, Jeremy Stanley  wrote:
> > On 2018-09-10 06:38:11 -0600 (-0600), Mohammed Naser wrote:
> >> I think something we should take into consideration is *what* you
> >> consider health, because the way we've gone about health checks is
> >> not something that can become a toolkit; it was more question
> >> asking, etc.
> > [...]
> >
> > I was going to follow up with something similar. It's not as if the
> > TC has a toolkit of any sort at this point to come up with the
> > information we're assembling in the health tracker either. It's
> > built up from interviewing PTLs, reading meeting logs, looking at
> > the changes which merge to teams' various deliverable repositories,
> > asking around as to whether they've missed important deadlines such
> > as release milestones (depending on what release models they
> > follow) or PTL nominations, looking over cycle goals to see how far
> > along they are, and so on. Extremely time-consuming which is why
> > it's taken us most of a release cycle and we still haven't finished
> > a first pass.
> >
> > Assembling some of this information might be automatable if we make
> > adjustments to how the data/processes on which it's based are
> > maintained, but at this point we're not even sure which ones are
> > problem indicators at all and are just trying to provide the
> > clearest picture we can. If we come up with a detailed checklist and
> > some of the checks on that list can be automated in some way, that
> > seems like a good thing. However, the original data should be
> > publicly accessible so I don't see why it needs to be members of the
> > technical committee who write the software to collect that.
> > --
> > Jeremy Stanley
> >
>
> Things like tracking project health I see like organizing a trash
> pickup at the local park, or off the side of a road: dirty,
> unglamorous work. The results can be immediately visible to not only
> those doing the work, but passers-by. Eliminating the human factor in
> deeply human-driven interactions can have ramifications immediately
> noticed.
>
> As distributed as things exist today, reducing the conversation to a
> few methods or people can damage intent, without humans talking to
> humans in a more direct manner.
>
> Best,
> Samuel Cassiba (scas)
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone Team Update - Week of 3 September 2018

2018-09-07 Thread Lance Bragstad
# Keystone Team Update - Week of 3 September 2018

## News

This week was mainly focused on the python3 community goal and ultimately
cleaning up a bunch of issues with stable branches that were uncovered in
those reviews. Next week is the PTG, which the group is preparing for in
addition to brainstorming Stein forum topics [0][1].

[0]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134362.html
[1] https://etherpad.openstack.org/p/BER-keystone-forum-sessions

## User Feedback

The foundation provided us with the latest feedback from our users [0]. A
sanitized version of that data has been shared publicly [1] for you to
check out prior to the PTG. We have time set aside on Wednesday to review
the feedback and discuss any adjustments we want to make to the survey
questions.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134434.html
[1]
https://docs.google.com/spreadsheets/d/1wz-GOoFODGWrFuGqVWDunEWsuhC_lvRJLrfUybTj69Q/edit?usp=sharing

## PTG Planning

As I'm sure you're aware, the PTG is next week. The schedule is relatively
firm at this point [0], but please raise any conflicts with other sessions
if you see any.

[0] https://etherpad.openstack.org/p/keystone-stein-ptg

## Open Specs

Search query: https://bit.ly/2Pi6dGj

A new specification was proposed this week to enable limit support for
domains [0]. This is going to be a main focus next week as we discuss
unified limits. Please have a look if you're interested in that particular
discussion.

[0] https://review.openstack.org/#/c/599491/

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 26 changes this week, most of which were for the python3
community goal [0].

We did notice a high number of stable branch failures for keystoneauth,
keystonemiddleware, and python-keystoneclient. This was discussed on the
ML[1][2].

[0] https://governance.openstack.org/tc/goals/stein/python3-first.html
[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134391.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134454.html

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 58 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots.

[0]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1776504

## Bugs

This week we opened 9 new bugs, closed 1, and fixed 3.

Bugs opened (9)

   - Bug #1790148 (keystone:Low) opened by FreudianSlip
   https://bugs.launchpad.net/keystone/+bug/1790148


   - Bug #1790428 (keystone:Undecided) opened by Eric Miller
   https://bugs.launchpad.net/keystone/+bug/1790428


   - Bug #179 (keystone:Undecided) opened by Paul Peereboom
   https://bugs.launchpad.net/keystone/+bug/179


   - Bug #1780164 (keystoneauth:Undecided) opened by mchlumsky
   https://bugs.launchpad.net/keystoneauth/+bug/1780164


   - Bug #1790423 (python-keystoneclient:Undecided) opened by ChenWu
   https://bugs.launchpad.net/python-keystoneclient/+bug/1790423


   - Bug #1790931 (oslo.limit:Medium) opened by Lance Bragstad
   https://bugs.launchpad.net/oslo.limit/+bug/1790931


   - Bug #1790954 (oslo.limit:Medium) opened by Lance Bragstad
   https://bugs.launchpad.net/oslo.limit/+bug/1790954


   - Bug #1790894 (oslo.limit:Low) opened by Lance Bragstad
   https://bugs.launchpad.net/oslo.limit/+bug/1790894


   - Bug #1790935 (oslo.limit:Low) opened by Lance Bragstad
   https://bugs.launchpad.net/oslo.limit/+bug/1790935


Bugs closed (1)

   - Bug #1790423 (python-keystoneclient:Undecided)
   https://bugs.launchpad.net/python-keystoneclient/+bug/1790423


Bugs fixed (3)

   - Bug #1777671 (keystone:Medium) fixed by Vishakha Agarwal
   https://bugs.launchpad.net/keystone/+bug/1777671


   - Bug #1790148 (keystone:Low) fixed by Chason Chan
   https://bugs.launchpad.net/keystone/+bug/1790148


   - Bug #1789351 (keystonemiddleware:Undecided) fixed by wangxiyuan
   https://bugs.launchpad.net/keystonemiddleware/+bug/1789351


## Milestone Outlook

We have a lot of work to do to shape the release between now and milestone
1, which will be October 26th. Focusing on specifications and early feature
development is appreciated.

https://releases.openstack.org/stein/schedule.html

## Shout-outs

Thanks to Ben, Doug, and Tony for helping us make sense of the
tox_install.sh and pip stable branch mess! We should be past the last layer
of the onion with respect to the python3 stable patches.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad:
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org

Re: [openstack-dev] [nova][cinder] about unified limits

2018-09-07 Thread Lance Bragstad
5/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html

On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee  wrote:

> On Thu, Sep 6, 2018 at 10:01 PM, Lance Bragstad wrote:
> >
> > I wish there was a better answer for this question, but currently there
> are only a handful of us working on the initiative. If you, or someone you
> know, is interested in getting involved, I'll happily help onboard people.
>
> Well, I can recommend some of my colleagues to work on this. I wish that
> in Stein, all services can use unified limits to do the quota job.
>
> >
> > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee  wrote:
> >>
> >> In Stein, only one service?
> >> Are there methods to move this along faster?
> >> On Wed, Sep 5, 2018 at 9:29 PM, Lance Bragstad wrote:
> >> >
> >> > Not yet. Keystone worked through a bunch of usability improvements
> with the unified limits API last release and created the oslo.limit
> library. We have a patch or two left to land in oslo.limit before projects
> can really start using unified limits [0].
> >> >
> >> > We're hoping to get this working with at least one resource in
> another service (nova, cinder, etc...) in Stein.
> >> >
> >> > [0]
> https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init
> >> >
> >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee  wrote:
> >> >>
> >> >> Hello,
> >> >> Do nova and cinder use keystone's unified limits API to do the
> quota job?
> >> >> If not, is there a plan to do this?
> >> >> Thanks a lot.
> >> >>
> >> >> --
> >> >> 谦谦君子
> >> >>
> >> >>
> >>
> >>
> >>
> >> --
> >> 谦谦君子
> >>
> >>
> >
> >
>
>
>
> --
> 谦谦君子
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal

2018-09-07 Thread Lance Bragstad
Thanks for all the help, everyone. Here is the status of each repository
and branch with respect to the python3 goal, and which reviews are needed in
order to get things squared away. Note that the linked python3 review is
just the one to port the zuul job definitions, and not all patches
generated for the goal. This is because the first patch was triggering the
failure - likely due to the branch being broken by tox_install.sh or new
pip versions among other things. The summary below is a list of things
needed to get the tests passing up to that point, at which point we should
be in a good state to pursue python3 issues if there are any.

Branches in red and bold are in need of reviews, all of which should be
setup to pass tests. If not then they should be dependent on patches to
make them pass.

*keystonemiddleware*
 - master: https://review.openstack.org/#/c/597659/
 - *stable/rocky*: https://review.openstack.org/#/c/597694/
 - *stable/queens*: https://review.openstack.org/#/c/597688/
 - *stable/pike*: https://review.openstack.org/#/c/597682/
 - *stable/ocata*: https://review.openstack.org/#/c/597677/

*keystoneauth*
 - master: https://review.openstack.org/#/c/597655/
 - *stable/rocky*: https://review.openstack.org/#/c/597693/
 - *stable/queens*: https://review.openstack.org/#/c/600564/ needed by
https://review.openstack.org/#/c/597687/
 - *stable/pike*: https://review.openstack.org/#/c/597681/
 - *stable/ocata*: https://review.openstack.org/#/c/598346/ needed by
https://review.openstack.org/#/c/597676/

*python-keystoneclient*
 - master: https://review.openstack.org/#/c/597671/
 - *stable/rocky*: https://review.openstack.org/#/c/597696/
 - *stable/queens*: https://review.openstack.org/#/c/597691/
 - *stable/pike*: https://review.openstack.org/#/c/597685/
 - *stable/ocata*: https://review.openstack.org/#/c/597679/

Hopefully this helps organize things a bit. I was losing my mind
maintaining a mental map.

Let me know if you see anything odd about the above. Otherwise feel free to
give those a review.

Thanks,

Lance

On Fri, Sep 7, 2018 at 2:39 AM Tony Breeds  wrote:

> On Thu, Sep 06, 2018 at 03:01:01PM -0500, Lance Bragstad wrote:
> > I'm noticing some odd cases with respect to the python 3 community goal
> > [0]. So far my findings are specific to keystone repositories, but I can
> > imagine this affecting other projects.
> >
> > Doug generated the python 3 reviews for keystone repositories, including
> > the ones for stable branches. We noticed some issues with the ones
> proposed
> > to stable (keystoneauth, python-keystoneclient) and master
> > (keystonemiddleware). For example, python-keystoneclient's stable/pike
> [1]
> > and stable/ocata [2] branches are both failing with something like [3]:
> >
> > ERROR: You must give at least one requirement to install (see "pip help
> > install")
>
> I've updated 1 and 2 to do the same thing that lots of other repos do
> and just exit 0 in this case.  1 and 2 now have a +1 from zuul.
>
> 
>
> > I've attempted to remove tox_install.sh using several approaches with
> > keystonemiddleware master [7]. None of which passed both unit tests and
> the
> > requirements check.
>
> Doug pointed out the fix here, which I added.  It passed most of the
> gate but failed in an unrelated neutron test so I've rechecked it.
>
> Yours Tony.
>
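For context, the exit-0 fix referred to above amounts to a guard so that an empty requirement list becomes a no-op instead of a pip error. A rough sketch of the idea (not the exact script from the repositories):

```shell
# Illustrative guard in the spirit of tools/tox_install.sh: when the
# caller passes no requirements, pip would error out ("You must give at
# least one requirement to install"), so short-circuit and succeed
# instead of invoking pip.
install_or_skip() {
    if [ "$#" -eq 0 ]; then
        echo "no-op"
        return 0
    fi
    echo "pip install $*"   # stand-in for the real pip invocation
}

install_or_skip            # prints: no-op
install_or_skip requests   # prints: pip install requests
```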
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] 2018 User Survey Results

2018-09-07 Thread Lance Bragstad
The foundation just gave me a copy of the latest feedback from our users. I
wanted to share this with the group so people have time to digest it prior
to the PTG next week [0].

Here is the total count based on each response:

Federated identity enhancements had *184* responses

Performance improvements had *144* responses

Scaling out to multiple regions had *136* responses

Enhancing policy had *92* responses

Per domain configuration had *79* responses


Next Wednesday I have a time slot set aside to go through the results as a
group. Otherwise we can use the time to refine the questions we present in
the survey, since they haven't changed in years (I think Steve put the ones
we have today in place).


The script I used to count each occurrence is available [1] in case you
recently received survey results and want to parse them in a similar
fashion.


[0]
https://docs.google.com/spreadsheets/d/1wz-GOoFODGWrFuGqVWDunEWsuhC_lvRJLrfUybTj69Q/edit?usp=sharing

[1] https://gist.github.com/lbragstad/a812df72494ffbbbc8c742f4d90333d5
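For reference, the counting itself is just a tally of each response option across the survey rows; a minimal stand-alone sketch (the sample rows below are made up — the real data lives in the sanitized spreadsheet):

```python
from collections import Counter

# Made-up sample rows; each row is one respondent's selected option.
responses = [
    "Federated identity enhancements",
    "Performance improvements",
    "Federated identity enhancements",
    "Scaling out to multiple regions",
]

counts = Counter(responses)
for option, total in counts.most_common():
    print(f"{option}: {total}")
# "Federated identity enhancements" tops this sample with 2
```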
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal

2018-09-06 Thread Lance Bragstad
I'm noticing some odd cases with respect to the python 3 community goal
[0]. So far my findings are specific to keystone repositories, but I can
imagine this affecting other projects.

Doug generated the python 3 reviews for keystone repositories, including
the ones for stable branches. We noticed some issues with the ones proposed
to stable (keystoneauth, python-keystoneclient) and master
(keystonemiddleware). For example, python-keystoneclient's stable/pike [1]
and stable/ocata [2] branches are both failing with something like [3]:

ERROR: You must give at least one requirement to install (see "pip help
install")

Both of those branches still use tox_install.sh [4][5]. Master,
stable/rocky, and stable/queens do not, which passed fine. It was suggested
that we backport patches to the failing branches that remove tox_install.sh
(similar to [6]). I've attempted to do this for python-keystoneclient,
keystonemiddleware, and keystoneauth.

The keystonemiddleware patches specifically are hitting a weird case, where
they either fail tests due to issues installing keystonemiddleware itself,
or pass tests and fail the requirements check. I'm guessing (because I
don't really fully understand the whole issue yet) this is because
keystonemiddleware has an optional dependency for tests and somehow the
installation process worked with tox_install.sh and doesn't work with the
new way we do things with pip and zuul.

I've attempted to remove tox_install.sh using several approaches with
keystonemiddleware master [7]. None of which passed both unit tests and the
requirements check.

I'm wondering if anyone has a definitive summary or context on
tox_install.sh and removing it cleanly for cases like keystonemiddleware.
Additionally, is anyone else noticing issues like this with their stable
branches?

[0] https://governance.openstack.org/tc/goals/stein/python3-first.html
[1] https://review.openstack.org/#/c/597685/
[2] https://review.openstack.org/#/c/597679/
[3]
http://logs.openstack.org/85/597685/1/check/build-openstack-sphinx-docs/4f817dd/job-output.txt.gz#_2018-08-29_20_49_17_877448
[4]
https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/pike
[5]
https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/ocata
[6] https://review.openstack.org/#/c/524828/3
[7] https://review.openstack.org/#/c/599003/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] No meeting or office hours September 11th

2018-09-06 Thread Lance Bragstad
I wanted to send out a reminder that we won't be having formal office hours
or a team meeting next week due to the PTG. Both will resume on the 18th of
September.

Thanks,

Lance
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Stein Forum Brainstorming

2018-09-06 Thread Lance Bragstad
I can't believe it's already time to start thinking about forum topics, but
it's upon us [0]!

I've created an etherpad for us to brainstorm ideas that we want to bring
to the forum in Germany [1]. I also linked it to the wiki [2].

Please feel free to throw out ideas. We can go through them as a group
before the submission phase starts if people wish.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134336.html
[1] https://etherpad.openstack.org/p/BER-keystone-forum-sessions
[2]
https://wiki.openstack.org/wiki/Forum/Berlin2018#Etherpads_from_Teams_and_Working_Groups


Re: [openstack-dev] [nova][cinder] about unified limits

2018-09-06 Thread Lance Bragstad
I wish there was a better answer for this question, but currently there are
only a handful of us working on the initiative. If you, or someone you
know, is interested in getting involved, I'll happily help onboard people.

On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee  wrote:

> Only one service in Stein?
> Are there ways to move this along faster?
> Lance Bragstad  于2018年9月5日周三 下午9:29写道:
> >
> > Not yet. Keystone worked through a bunch of usability improvements with
> the unified limits API last release and created the oslo.limit library. We
> have a patch or two left to land in oslo.limit before projects can really
> start using unified limits [0].
> >
> > We're hoping to get this working with at least one resource in another
> service (nova, cinder, etc...) in Stein.
> >
> > [0]
> https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init
> >
> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee  wrote:
> >>
> >> Hello,
> >> Do nova and cinder use keystone's unified limits API to manage
> >> quotas?
> >> If not, is there a plan to do this?
> >> Thanks a lot.
> >>
> >> --
> >> 谦谦君子
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> 谦谦君子
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova][cinder] about unified limits

2018-09-05 Thread Lance Bragstad
Not yet. Keystone worked through a bunch of usability improvements with the
unified limits API last release and created the oslo.limit library. We have
a patch or two left to land in oslo.limit before projects can really start
using unified limits [0].

We're hoping to get this working with at least one resource in another
service (nova, cinder, etc...) in Stein.

[0]
https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init
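
Since people keep asking what consuming this would look like, here is a
rough, hypothetical sketch of the claim-style enforcement pattern we've been
discussing for oslo.limit. All of the names below (Enforcer, enforce, the
usage callback) are placeholders and not the final library API; the point is
just the flow: look up the registered limit, count current usage via a
service-supplied callback, and reject a claim that would exceed the limit.

```python
# Hypothetical sketch of flat limit enforcement, in the spirit of the
# oslo.limit discussions. None of these names are the real library API.

class ProjectOverLimit(Exception):
    """Raised when a claim would push a project past its limit."""


class Enforcer:
    def __init__(self, limits, usage_callback):
        # limits: mapping of resource name -> allowed count, standing in
        # for registered limits that would normally come from keystone.
        # usage_callback: callable(project_id, resource) -> current usage,
        # supplied by the service (e.g. nova counting instances).
        self.limits = limits
        self.usage_callback = usage_callback

    def enforce(self, project_id, deltas):
        # deltas: mapping of resource name -> how many units this request
        # wants to claim (e.g. {'instances': 2}).
        for resource, delta in deltas.items():
            limit = self.limits.get(resource, 0)
            used = self.usage_callback(project_id, resource)
            if used + delta > limit:
                raise ProjectOverLimit(
                    f"{resource}: {used} used + {delta} requested > {limit}")


# Example: a project with 8 instances against a limit of 10.
usage = {("proj-a", "instances"): 8}
enforcer = Enforcer({"instances": 10},
                    lambda proj, res: usage.get((proj, res), 0))
enforcer.enforce("proj-a", {"instances": 2})   # exactly at the limit: allowed
try:
    enforcer.enforce("proj-a", {"instances": 3})
except ProjectOverLimit:
    pass  # 8 + 3 > 10: rejected
```

The open question in the reviews is exactly where the usage callback lives
and what the enforcement context looks like, which is why the interface
above should be read as a sketch only.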

On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee  wrote:

> Hello,
> Do nova and cinder use keystone's unified limits API to manage
> quotas?
> If not, is there a plan to do this?
> Thanks a lot.
>
> --
> 谦谦君子
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [election][tc] TC nomination

2018-09-04 Thread Lance Bragstad
Hi all,


I'd like to submit my candidacy to be a member of the OpenStack Technical
Committee.


My involvement with OpenStack began during the Diablo release. Since then
I've participated in various parts of the community, in both upstream and
downstream roles. Today I mainly focus on authorization and identity
management.


As your elected member of the Technical Committee, I plan to continue
advocating for cross-project initiatives and easing cross-project
collaboration wherever possible.


One area where I'm heavily invested in this type of work is improving
OpenStack's authorization system. For example, I've championed a community
goal [0], which eases policy maintenance and upgrades for operators. I've
also contributed to the improvement of oslo libraries, making it easier for
other services to change policies and consume authorization attributes. I
believe isolating policy from service-specific logic is crucial in letting
developers securely implement system-level and project-level APIs. Finally,
I worked to revive a thread from 2015 [1] that allows us to deliver better
support for default roles out-of-the-box [2]. This will reduce the custom
policies found in most deployments, enabling better interoperability between
clouds and pushing OpenStack to be more self-service than it is today. There
is still more work to do, but all of this makes API protection easier to
implement while giving more functionality and security to end-users and
operators.


Based upon the few examples shared above, I think it's imperative to
approach cross-project initiatives in a hands-on manner. As a member of the
TC, I plan to spend my time helping projects close the gap on goals
accepted by the TC by contributing to them directly. Additionally, I want
to use that experience to collaborate with others and find ways to make
achieving efforts across projects more common than it is today, as opposed
to monolithic efforts that commonly result in burnout and exhaustion for a
select few people.


Tracking the Rocky community goals specifically shows that 50% of projects
are still implementing, reviewing, or have yet to start mutable
configuration. 61% are in the same boat for removing usage of mox. Some
efforts take years to successfully complete across projects (e.g. volume
multi-attach, adopting new API versions).


Whether the initiatives are a focused effort between two projects or a
community-wide goal, they provide significant value to everyone consuming,
deploying, or developing the software we write. I'm running for TC because I
want to do what I can to make cross-project interaction easier through
contributing and building the necessary process as a TC member.


Thanks for reading through my candidacy. Safe travels to Denver and
hopefully I'll see you at the PTG.



Lance


[0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html

[1] https://review.openstack.org/#/c/245629

[2] https://review.openstack.org/#/c/566377


[openstack-dev] [keystone] Keystone Team Update - Week of 27 August 2018

2018-08-31 Thread Lance Bragstad
# Keystone Team Update - Week of 27 August 2018

## News

Welcome to Stein development!

## Release Status

Well, Rocky went out the door. A friendly reminder to keep an eye out for
bugs and things we should backport.

## PTG Planning

The topics in the PTG etherpad have been worked into a schedule [0].

The TL;DR is that Monday is going to be mainly focused on large,
cross-project initiatives (just like what we did in Dublin). Tuesday we are
going to be discussing ways we can improve multi-region support
(edge-related discussions) and federation. Wednesday is relatively free of
topics, but we have a lot of hackathon ideas. This is a good time to
iterate quickly on things we need to get done, clean things up, or share
how something works with respect to keystone (e.g. Flask). Have an idea you
want to propose for Wednesday's hackathon? Just add it to the schedule [0].
Thursday is going to be for keystone-specific topics. Friday we plan to
cover any remaining topics and try and formalize everything into the
roadmap or specifications repo *before* we leave Denver.

If you have comments, questions, or concerns regarding the schedule, please
let someone know and we'll get it addressed.

[0] https://etherpad.openstack.org/p/keystone-stein-ptg

## Stein Roadmap Planning

Harry and I are working through the Rocky roadmap [0] and preparing a new
board for Stein. Most of this prep work should be done prior to the PTG so
that we can finalize and make adjustments in person. If you want to be
involved in this process just ask. Additionally, the Stein series has been
created in launchpad, along with the usual blueprints [1][2]. Feel free to
use accordingly for other blueprints and bugs.

[0] https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap
[1] https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-stein
[2] https://blueprints.launchpad.net/keystone/+spec/removed-as-of-stein

## Open Specs

Search query: https://bit.ly/2Pi6dGj

We landed a couple cleanup patches that re-propose the MFA receipts [0] and
capability lists [1] specifications to Stein. Just a note to make sure we
treat those as living documents by updating them regularly if details
change as we work through the implementations.

The JWT specification [2] also received a facelift and is much more
specific than it was in the past. Please have a gander if you're
interested, or just curious. If the details are still unclear, just let us
know and we can get them proposed prior to PTG discussions in a couple
weeks.

[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/mfa-auth-receipt.html
[1]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html
[2] https://review.openstack.org/#/c/541903/

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 27 changes this week. We also got a good start on the python 3
community goal [0].

Note that there were some patches proposed for the community goal last
week, but the author wasn't listed as a champion for the goal and the
patches contained errors. We weren't able to reach the author and neither
were the goal champions. As a result, those patches have been abandoned and
Doug reran the tooling to migrate our jobs. Just something to keep in mind
if you're reviewing those patches.

[0] https://governance.openstack.org/tc/goals/stein/python3-first.html

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 61 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots. We're making good progress on
the Flask reviews [0], but more reviews are always welcome.

[0]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1776504

## Bugs

This week we opened 4 new bugs and closed 1.

Bugs opened (4)

   - Bug #1789450 (keystone:Undecided) opened by Steven Relf
   https://bugs.launchpad.net/keystone/+bug/1789450


   - Bug #1789849 (keystone:Undecided) opened by Jean-
   https://bugs.launchpad.net/keystone/+bug/1789849


   - Bug #1790148 (keystone:Undecided) opened by FreudianSlip
   https://bugs.launchpad.net/keystone/+bug/1790148


   - Bug #1789351 (keystonemiddleware:Undecided) opened by yatin
   https://bugs.launchpad.net/keystonemiddleware/+bug/1789351


Bugs fixed (1)

   - Bug #1787874 (keystone:Medium) fixed by wangxiyuan
   https://bugs.launchpad.net/keystone/+bug/1787874


## Milestone Outlook

We have a lot of work to do to shape the release between now and milestone
1, which will be October 26th. Before then, we'll be meeting in Denver in a
couple of weeks.

https://releases.openstack.org/stein/schedule.html

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad:
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

Re: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ?

2018-08-30 Thread Lance Bragstad
This topic has surfaced intermittently ever since keystone implemented
fernet tokens in Kilo. An initial idea was written down shortly afterward
[0], then targeted to Ocata [1], and finally removed from the backlog around
the Pike timeframe [2]. The commit message of [2] includes meeting links.
The discussion usually stalled on trying to abstract enough of the details
about key rotation and setup to work in all cases.

[0] https://review.openstack.org/#/c/311268/
[1] https://review.openstack.org/#/c/363065/
[2] https://review.openstack.org/#/c/439194/
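
To illustrate the rotation property that made those discussions tricky to
abstract: keystone's fernet key repository encrypts new tokens with the
primary key while older keys can still validate existing tokens. A minimal
sketch of that pattern using the cryptography package's MultiFernet; this
shows the general idea only, not keystone's actual key-repository code:

```python
from cryptography.fernet import Fernet, MultiFernet

# Two keys: 'old' was the primary before rotation, 'new' is primary after.
old_key = Fernet.generate_key()
new_key = Fernet.generate_key()

# A token minted before the rotation, encrypted with the old key.
token = Fernet(old_key).encrypt(b"token-payload")

# After rotation the primary (first) key is the new one, but the old key
# stays in the repository so existing tokens still validate.
repo = MultiFernet([Fernet(new_key), Fernet(old_key)])
assert repo.decrypt(token) == b"token-payload"

# rotate() re-encrypts the payload under the current primary key, after
# which the old key could eventually be retired.
reissued = repo.rotate(token)
assert Fernet(new_key).decrypt(reissued) == b"token-payload"
```

The hard part in the spec discussions was never this mechanism itself, but
distributing and rotating the key repository across deployments in a way
that works for everyone.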

On Thu, Aug 30, 2018 at 5:02 AM Juan Antonio Osorio Robles <
jaosor...@redhat.com> wrote:

> FWIW, instead of barbican, castellan could be used as a key manager.
>
> On 08/30/2018 12:23 PM, Adrian Turjak wrote:
>
>
> On 30/08/18 6:29 AM, Lance Bragstad wrote:
>
> Is that what is being described here ?
>> https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html
>>
>
> This is a separate mechanism for storing secrets, not necessarily
> passwords (although I agree the term credentials automatically makes people
> assume passwords). This is used if consuming keystone's native MFA
> implementation. For example, storing a shared secret between the user and
> keystone that is provided as an additional authentication method along with
> a username and password combination.
>
>
> Is there any interest or plans to potentially allow Keystone's credential
> store to use Barbican as a storage provider? Encryption already is better
> than nothing, but if you already have (or will be deploying) a proper
> secret store with a hardware backend (or at least hardware stored
> encryption keys) then it might make sense to throw that in Barbican.
>
> Or is this also too much of a chicken/egg problem? How safe is it to rely
> on Barbican availability for MFA secrets and auth?
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] Stepping down as keystone core

2018-08-29 Thread Lance Bragstad
Samuel,

Thanks for all the dedication and hard work upstream. I'm relieved that you
won't be too far away and that you're still involved with the Outreachy
programs. You played an instrumental role in getting keystone involved with
that community.

As always, we'd be happy to have you back in the event your work involves
keystone again.

Best,

Lance

On Wed, Aug 29, 2018 at 2:25 PM Samuel de Medeiros Queiroz <
samuel...@gmail.com> wrote:

> Hi Stackers!
>
> It has been both an honor and privilege to serve this community as a
> keystone core.
>
> I am in a position that does not allow me enough time to devote to
> reviewing code and participating in the development process in keystone. As a
> consequence, I am stepping down as a core reviewer.
>
> A big thank you for your trust and for helping me to grow both as a person
> and as a professional during this time in service.
>
> I will stay around: I am doing research on interoperability for my master's
> degree, which means I am around the SDK project. In addition to that, I
> recently became the Outreachy coordinator for OpenStack.
>
> Let me know if you are interested in any of those things.
>
> Get in touch on #openstack-outreachy, #openstack-sdks or
> #openstack-keystone.
>
> Thanks,
> Samuel de Medeiros Queiroz (samueldmq)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ?

2018-08-29 Thread Lance Bragstad
On Wed, Aug 29, 2018 at 1:16 PM Waines, Greg 
wrote:

> Makes sense.
>
>
>
> So what is the recommended upstream approach for securely storing user
> passwords in keystone ?
>

Keystone hashes passwords before persisting them in its own tables.
Passwords are never stored encrypted or in plaintext, only as one-way hashes.
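
To make the hashing-versus-encryption distinction concrete, here's a small
stdlib sketch of the salted, one-way pattern. Keystone itself uses stronger,
configurable password hashers, so treat this as an illustration of the idea
rather than keystone's code: the stored value can confirm a password but can
never be decrypted back into one.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Salted one-way hash: a fresh random salt per password means two
    # users with the same password get different stored digests.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Recompute with the stored salt and compare; there is no decryption.
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```

This is exactly why "can keystone store passwords in Barbican" is the wrong
framing: there is nothing reversible worth putting in a secret store.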


>
>
> Is that what is being described here ?
> https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html
>

This is a separate mechanism for storing secrets, not necessarily passwords
(although I agree the term credentials automatically makes people assume
passwords). This is used if consuming keystone's native MFA implementation.
For example, storing a shared secret between the user and keystone that is
provided as an additional authentication method along with a username and
password combination.
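
For the curious, the shared secret mentioned above backs keystone's TOTP
authentication method (RFC 6238). A stdlib-only sketch of how a verifier
derives a code from the shared secret and the clock; this is illustrative,
not keystone's implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    # Derive an RFC 6238 time-based one-time password from a base32
    # shared secret. Keystone and the user's authenticator app compute
    # this independently; matching codes prove possession of the secret
    # without it ever crossing the wire.
    if timestamp is None:
        timestamp = time.time()
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(timestamp // step))
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s
# yields "94287082" with 8 digits.
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, timestamp=59, digits=8) == "94287082"
```

Note the 30-second step: validating a code only requires loosely
synchronized clocks, which is part of why availability concerns for this
are much milder than for an external secret store.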


>
>
>
>
> Greg.
>
>
>
>
>
> *From: *Juan Antonio Osorio Robles 
> *Reply-To: *"openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> *Date: *Wednesday, August 29, 2018 at 2:00 PM
> *To: *"openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [keystone] [barbican] Keystone's use of
> Barbican ?
>
>
>
> This is not the case. Barbican requires users and systems that use it to
> use keystone for authentication. So keystone can't use Barbican for this.
> Chicken and egg problem.
>
>
>
> On 08/29/2018 08:08 PM, Waines, Greg wrote:
>
> My understanding is that Keystone can be configured to use Barbican to
> securely store user passwords.
>
> Is this true ?
>
>
>
> If yes, is this the standard / recommended / upstream way to securely
> store Keystone user passwords ?
>
>
>
> If yes, I can't find any description of how this is configured.
>
> Can someone provide some pointers ?
>
>
>
> Greg.
>
>
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
>
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [goal][python3] week 3 update

2018-08-29 Thread Lance Bragstad
On Mon, Aug 27, 2018 at 2:37 PM Doug Hellmann  wrote:

> This is week 3 of the "Run under Python 3 by default" goal
> (https://governance.openstack.org/tc/goals/stein/python3-first.html).
>
> == What we learned last week ==
>
> We have a few enthusiastic folks who want to contribute to the goal
> who have not been involved in the previous discussion with goal
> champions.  If you are one of them, please get in touch with me
> BEFORE beginning any work.
> http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html
>
> In the course of adding python 3.6 unit tests to Manila, a recursion
> bug setting up the SSL context was reported.
> https://bugs.launchpad.net/manila/+bug/1788253 (We could use some
> help debugging it.)
>
> Several projects have their .gitignore files set up to ignore all
> '.' files. I'm not sure why this is the case. It has caused some
> issues with the migration, but I think we've worked around the
> problem in the scripts now.
>
> We extended the scripts for generating the migration patches to
> handle the neutron-specific versions of the unit test jobs for
> python 3.5 and 3.6.
>
> The Storyboard UI has some performance issue when a single story
> has several hundred comments. This is an unusual situation, which
> we don't expect to come up for "normal" stories, but the SB team
> discussed some ways to address it.
>
> Akihiro Mitoki expressed some concern about the new release notes
> job being set up in horizon, and how to test it. The "new" job is
> the same as the "old" job except that it sets up sphinx using
> python3. The versions of sphinx and reno that we rely on for the
> release notes jobs all work under python3, and projects don't have
> any convenient way to install extra dependencies, so we are confident
> that the new version of the job works. If you find that not to be
> true for your project, we can help fix the problem.
>
> We have a few repos with unstable functional tests, and we seem to
> have some instability in the integrated gate as well.
>
> == Ongoing and Completed Work ==
>
> These teams have started or completed their Zuul migration work:
>
> +-+--+---+--+
> | Team| Open | Total | Done |
> +-+--+---+--+
> | Documentation   |0 |12 | yes  |
> | OpenStack-Helm  |5 | 5 |  |
> | OpenStackAnsible|   70 |   270 |  |
> | OpenStackClient |   10 |19 |  |
> | OpenStackSDK|   12 |15 |  |
> | PowerVMStackers |0 |15 | yes  |
> | Technical Committee |0 | 5 | yes  |
> | blazar  |   16 |16 |  |
> | congress|1 |16 |  |
> | cyborg  |2 | 9 |  |
> | designate   |   10 |17 |  |
> | ec2-api |4 | 7 |  |
> | freezer |   26 |30 |  |
> | glance  |   16 |16 |  |
> | horizon |0 | 8 | yes  |
> | ironic  |   22 |60 |  |
> | karbor  |   30 |30 |  |
> | keystone|   35 |35 |  |
> | kolla   |1 | 8 |  |
> | kuryr   |   26 |29 |  |
> | magnum  |   24 |29 |  |
> | manila  |   19 |19 |  |
> | masakari|   18 |18 |  |
> | mistral |0 |25 | yes  |
> | monasca |   20 |69 |  |
> | murano  |   25 |25 |  |
> | octavia |5 |23 |  |
> | oslo|3 |   157 |  |
> | other   |3 | 7 |  |
> | qinling |1 | 6 |  |
> | requirements|0 | 5 | yes  |
> | sahara  |0 |27 | yes  |
> | searchlight |5 |13 |  |
> | solum   |0 |17 | yes  |
> | storlets|5 | 5 |  |
> | swift   |9 |11 |  |
> | tacker  |   16 |16 |  |
> | tricircle   |5 | 9 |  |
> | tripleo |   67 |78 |  |
> | vitrage |0 |17 | yes  |
> | watcher |   12 |17 |  |
> | winstackers |6 |11 |  |
> | zaqar   |   12 |17 |  |
> | zun |0 |13 | yes  |
> +-+--+---+--+
>
> == Next Steps ==
>
> If your team is ready to have your zuul settings migrated, please
> let us know by following up to this email. We will start with the
> volunteers, and then work our way through the other teams.
>
>
The keystone team is ready. Just FYI - there are pre-existing patches
proposed to our repositories, but they weren't initiated by one of the goal
champions [0].

I can help work through issues on our end.

[0]
https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+topic:python3-first


> After 

[openstack-dev] [keystone] Stein PTG Schedule

2018-08-27 Thread Lance Bragstad
I've worked through the list of topics and organized them into a rough
schedule [0]. As it stands right now, Monday is going to be the main
cross-project day (similar to the identity-integration track in Dublin).
We don't have a room on Tuesday and Wednesday, but we will likely have
continued cross-project discussions around federation. Thursday and
Friday are currently staged for keystone-specific topics.

If you see any conflicts or issues with what is proposed, please let me
know.

[0] https://etherpad.openstack.org/p/keystone-stein-ptg





Re: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018

2018-08-24 Thread Lance Bragstad


On 08/22/2018 07:49 AM, Lance Bragstad wrote:
>
> On 08/22/2018 03:23 AM, Adrian Turjak wrote:
>> Bah! I saw this while on holiday and didn't get a chance to respond,
>> sorry for being late to the conversation.
>>
>> On 11/08/18 3:46 AM, Colleen Murphy wrote:
>>> ### Self-Service Keystone
>>>
>>> At the weekly meeting Adam suggested we make self-service keystone a focus 
>>> point of the PTG[9]. Currently, policy limitations make it difficult for an 
>>> unprivileged keystone user to get things done or to get information without 
>>> the help of an administrator. There are some other projects that have been 
>>> created to act as workflow proxies to mitigate keystone's limitations, such 
>>> as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written 
>>> by Kristi). The question is whether the primitives offered by keystone are 
>>> sufficient building blocks for these external tools to leverage, or if we 
>>> should be doing more of this logic within keystone. Certainly improving our 
>>> RBAC model is going to be a major part of improving the self-service user 
>>> experience.
>>>
>>> [9] 
>>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121
>>> [10] https://adjutant.readthedocs.io/en/latest/
>>> [11] https://github.com/CCI-MOC/ksproj
>> As you can probably expect, I'd love to be a part of any of these
>> discussions. Anything I can nicely move to being logic directly
>> supported in Keystone, the less I need to do in Adjutant. The majority
>> of things though I think I can do reasonably well with the primitives
>> Keystone gives me, and what I can't I tend to try and work with upstream
>> to fill the gaps.
>>
>> System vs project scope helps a lot though, and I look forward to really
>> playing with that.
> Since it made sense to queue incorporating system scope after the flask
> work, I just started working with that on the credentials API*. There is
> a WIP series up for review that attempts to do a couple things [0].
> First it tries to incorporate system and project scope checking into the
> API. Second it tries to be more explicit about protection test cases,
> which I think is going to be important since we're adding another scope
> type. We also support three different roles now and it would be nice to
> clearly see who can do what in each case with tests.
>
> I'd be curious to get your feedback here if you have any.
>
> * Because the credentials API was already moved to flask and has room
> for self-service improvements [1]
>
> [0] https://review.openstack.org/#/c/594547/

This should be passing tests at least now, but there are still some
tests left to write. Most of what's in the patch is testing the new
authorization scope (e.g. system).

I'm currently looking for advice on ways to extensively test six different
personas without duplication running rampant across test cases (project
admin, project member, project reader, system admin, system member,
system reader).

In summary, it does make the credential API much more self-service
oriented, which is something we should try and do everywhere (I picked
credentials first because it was already moved to flask).
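
In case a data-driven approach helps: one way to keep six personas from
multiplying test code is to drive a single protection check from a table of
(persona, expected outcome) pairs. The sketch below is a generic pattern
with a made-up policy, not keystone's actual test framework:

```python
# Data-driven sketch for exercising many personas against one policy
# without copy-pasting test cases. The personas and the toy policy are
# illustrative only -- not keystone's real RBAC implementation.

PERSONAS = [
    # (scope, role)
    ("system", "admin"), ("system", "member"), ("system", "reader"),
    ("project", "admin"), ("project", "member"), ("project", "reader"),
]

def can_list_all_credentials(scope, role):
    # Toy policy: only system-scoped tokens may list credentials across
    # every project, regardless of role.
    return scope == "system"

# Expected outcome per persona for this one policy.
EXPECTED = {
    ("system", "admin"): True, ("system", "member"): True,
    ("system", "reader"): True,
    ("project", "admin"): False, ("project", "member"): False,
    ("project", "reader"): False,
}

def run_matrix(check, expected):
    # Return the personas whose actual outcome disagrees with the table,
    # so a failing run names exactly which persona broke.
    return [
        (scope, role) for scope, role in PERSONAS
        if check(scope, role) != expected[(scope, role)]
    ]

assert run_matrix(can_list_all_credentials, EXPECTED) == []
```

Each new API then only needs a fresh expectation table rather than six new
test methods, and the table doubles as documentation of who can do what.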

> [1]
> https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/credential.py#n21
>
>> I sadly won't be at the PTG, but will be at the Berlin summit. Plus I
>> have a lot of Adjutant work planned for Stein, a large chunk of which is
>> refactors and reshuffling blueprints and writing up a roadmap, plus some
>> better entry point tasks for new contributors.
>>
>>> ### Standalone Keystone
>>>
>>> Also at the meeting and during office hours, we revived the discussion of 
>>> what it would take to have a standalone keystone be a useful identity 
>>> provider for non-OpenStack projects[12][13]. First up we'd need to turn 
>>> keystone into a fully-fledged SAML IdP, which it's not at the moment (which 
>>> is a point of confusion in our documentation), or even add support for it 
>>> to act as an OpenID Connect IdP. This would be relatively easy to do (or at 
>>> least not impossible). Then the application would have to use 
>>> keystonemiddleware or its own middleware to route requests to keystone to 
>>> issue and validate tokens (this is one aspect where we've previously 
>>> discussed whether JWT could benefit us). Then the question is what should a 
>>> not-OpenStack application do with keystone's "scoped RBAC"? It would all 
>>> depend on how the resources of the application are grouped

Re: [openstack-dev] [keystone] Keystone Team Update - Week of 20 August 2018

2018-08-24 Thread Lance Bragstad


On 08/24/2018 10:15 AM, Colleen Murphy wrote:
> # Keystone Team Update - Week of 20 August 2018
>
> ## News
>
> We ended up releasing an RC2 after all in order to include placeholder 
> sqlalchemy migrations for Rocky, thanks wxy for catching it!
>
> ## Open Specs
>
> Search query: https://bit.ly/2Pi6dGj
>
> Lance reproposed the auth receipts and application credentials specs that we 
> punted on last cycle for Stein.
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2IACk3F
>
> We merged 13 changes this week.
>
> ## Changes that need Attention
>
> Search query: https://bit.ly/2wv7QLK
>
> There are 75 changes that are passing CI, not in merge conflict, have no 
> negative reviews and aren't proposed by bots.
>
> If that seems like a lot more than last week, it's because someone has 
> helpfully proposed many patches supporting the python3-first community 
> goal[1]. However, they haven't coordinated with the goal champions and have 
> missed some steps[2], like proposing the removal of jobs from project-config 
> and proposing jobs to the stable branches. I would recommend coordinating 
> with the python3-first goal champions on merging these patches. The good news 
> is that all of our projects seem to work with python 3.6!
>
> [1] https://governance.openstack.org/tc/goals/stein/python3-first.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html
>
> ## Bugs
>
> This week we opened 4 new bugs and closed 1.
>
> Bugs opened (4) 
> Bug #1788415 (keystone:High) opened by Lance Bragstad 
> https://bugs.launchpad.net/keystone/+bug/1788415 
> Bug #1788694 (keystone:High) opened by Lance Bragstad 
> https://bugs.launchpad.net/keystone/+bug/1788694 
> Bug #1787874 (keystone:Medium) opened by wangxiyuan 
> https://bugs.launchpad.net/keystone/+bug/1787874 
> Bug #1788183 (oslo.policy:Undecided) opened by Stephen Finucane 
> https://bugs.launchpad.net/oslo.policy/+bug/1788183 
>
> Bugs closed (1) 
> Bug #1771203 (python-keystoneclient:Undecided) 
> https://bugs.launchpad.net/python-keystoneclient/+bug/1771203 
>
> Bugs fixed (0)
>
> ## Milestone Outlook
>
> https://releases.openstack.org/rocky/schedule.html
>
> We're at the end of the RC period with the official release happening next 
> week.
>
> ## Shout-outs
>
> Thanks everyone for a great release!

++

I can't say thanks enough to everyone who contributes to this in some
way, shape, or form. I'm looking forward to Stein :)

>
> ## Help with this newsletter
>
> Help contribute to this newsletter by editing the etherpad: 
> https://etherpad.openstack.org/p/keystone-team-newsletter
> Dashboard generated using gerrit-dash-creator and 
> https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018

2018-08-22 Thread Lance Bragstad


On 08/22/2018 03:23 AM, Adrian Turjak wrote:
> Bah! I saw this while on holiday and didn't get a chance to respond,
> sorry for being late to the conversation.
>
> On 11/08/18 3:46 AM, Colleen Murphy wrote:
>> ### Self-Service Keystone
>>
>> At the weekly meeting Adam suggested we make self-service keystone a focus 
>> point of the PTG[9]. Currently, policy limitations make it difficult for an 
>> unprivileged keystone user to get things done or to get information without 
>> the help of an administrator. There are some other projects that have been 
>> created to act as workflow proxies to mitigate keystone's limitations, such 
>> as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written 
>> by Kristi). The question is whether the primitives offered by keystone are 
>> sufficient building blocks for these external tools to leverage, or if we 
>> should be doing more of this logic within keystone. Certainly improving our 
>> RBAC model is going to be a major part of improving the self-service user 
>> experience.
>>
>> [9] 
>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121
>> [10] https://adjutant.readthedocs.io/en/latest/
>> [11] https://github.com/CCI-MOC/ksproj
> As you can probably expect, I'd love to be a part of any of these
> discussions. Anything I can nicely move to being logic directly
> supported in Keystone, the less I need to do in Adjutant. The majority
> of things though I think I can do reasonably well with the primitives
> Keystone gives me, and what I can't I tend to try and work with upstream
> to fill the gaps.
>
> System vs project scope helps a lot though, and I look forward to really
> playing with that.

Since it made sense to queue incorporating system scope after the flask
work, I just started working with that on the credentials API*. There is
a WIP series up for review that attempts to do a couple things [0].
First it tries to incorporate system and project scope checking into the
API. Second it tries to be more explicit about protection test cases,
which I think is going to be important since we're adding another scope
type. We also support three different roles now and it would be nice to
clearly see who can do what in each case with tests.

I'd be curious to get your feedback here if you have any.

* Because the credentials API was already moved to flask and has room
for self-service improvements [1]

[0] https://review.openstack.org/#/c/594547/
[1]
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/credential.py#n21
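To make the scope-checking idea above concrete, here is a deliberately simplified, hypothetical sketch (not keystone's actual enforcement code, and the rule names and check strings are illustrative only): a policy rule carries the scope types allowed to call it, and enforcement compares the token's scope against that list, warning rather than failing while operators transition.

```python
import warnings

# Hypothetical rules, loosely modeled on the credentials API policies.
RULES = {
    "identity:list_credentials": {"role": "reader", "scope_types": ["system"]},
    "identity:create_credential": {"role": "member", "scope_types": ["project"]},
}


def enforce(rule_name, token_roles, token_scope, enforce_scope=False):
    """Return True if the token passes the rule's role and scope checks."""
    rule = RULES[rule_name]
    if rule["role"] not in token_roles:
        return False
    if token_scope not in rule["scope_types"]:
        if enforce_scope:
            return False
        # Transitional behavior: allow the call but warn, mirroring the
        # oslo.policy scope-check warning discussed on this list.
        warnings.warn("Policy %s failed scope check" % rule_name, UserWarning)
    return True


# A project-scoped reader can still list credentials (with a warning)
# until scope enforcement is switched on; then the same token is rejected.
assert enforce("identity:list_credentials", ["reader"], "project") is True
assert enforce("identity:list_credentials", ["reader"], "project",
               enforce_scope=True) is False
```

The point of being explicit in protection tests is exactly this matrix: each role crossed with each scope type, with and without enforcement.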

>
> I sadly won't be at the PTG, but will be at the Berlin summit. Plus I
> have a lot of Adjutant work planned for Stein, a large chunk of which is
> refactors and reshuffling blueprints and writing up a roadmap, plus some
> better entry point tasks for new contributors.
>
>> ### Standalone Keystone
>>
>> Also at the meeting and during office hours, we revived the discussion of 
>> what it would take to have a standalone keystone be a useful identity 
>> provider for non-OpenStack projects[12][13]. First up we'd need to turn 
>> keystone into a fully-fledged SAML IdP, which it's not at the moment (which 
>> is a point of confusion in our documentation), or even add support for it to 
>> act as an OpenID Connect IdP. This would be relatively easy to do (or at 
>> least not impossible). Then the application would have to use 
>> keystonemiddleware or its own middleware to route requests to keystone to 
>> issue and validate tokens (this is one aspect where we've previously 
>> discussed whether JWT could benefit us). Then the question is what should a 
>> not-OpenStack application do with keystone's "scoped RBAC"? It would all 
>> depend on how the resources of the application are grouped and whether they 
>> care about multitenancy in some form. Likely each application would have 
>> different needs and it would be difficult to find a one-size-fits-all 
>> approach. We're interested to know whether anyone has a burning use case for 
>> something like this.
>>
>> [12] 
>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192
>> [13] 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30
> This one is interesting because another department at Catalyst is
> actually looking to use Keystone outside of the scope of OpenStack. They
> are building a SaaS platform, and they need authn, authz (with some
> basic RBAC), a service catalog (think API endpoint per software
> offering), and most of those things are useful outside of OpenStack.
> They can then use projects to signify a customer, and a project
> (customer) could have one or more users accessing the management GUIs,
> with roles giving them some RBAC. A large part of this is because they
> can then also piggy back on a lot of work our team has done with
> OpenStack and Keystone and even reuse some of our 

Re: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018

2018-08-11 Thread Lance Bragstad
[14]. Lance will organize these into an agenda soonish.
>
> [14] https://etherpad.openstack.org/p/keystone-stein-ptg
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2IACk3F
>
> We merged 16 changes this week.
>
> ## Changes that need Attention
>
> Search query: https://bit.ly/2wv7QLK
>
> There are 54 changes that are passing CI, not in merge conflict, have no
> negative reviews and aren't proposed by bots. Special attention should be
> given to patches that close bugs, and we should make sure we backport any
> critical bugfixes to stable/rocky.
>
> ## Bugs
>
> This week we opened 2 new bugs and closed 3. There don't currently seem to
> be any showstopper bugs for Rocky. orange_julius has been chasing a fun,
> apparently longstanding bug in ldappool[15], our traditionally low-effort
> adopted project.
>
> Bugs opened (2)
> Bug #1786383 (keystone:Undecided) opened by Liyingjun
> https://bugs.launchpad.net/keystone/+bug/1786383
> Bug #1785898 (ldappool:Undecided) opened by Nick Wilburn
> https://bugs.launchpad.net/ldappool/+bug/1785898
>
> Bugs fixed (3)
> Bug #1782704 (keystone:High) fixed by Lance Bragstad
> https://bugs.launchpad.net/keystone/+bug/1782704
> Bug #1780503 (keystone:Medium) fixed by Gage Hugo
> https://bugs.launchpad.net/keystone/+bug/1780503
> Bug #1785164 (keystone:Undecided) fixed by wangxiyuan
> https://bugs.launchpad.net/keystone/+bug/1785164
>
> [15] https://bugs.launchpad.net/ldappool/+bug/1785898
>
> ## Milestone Outlook
>
> https://releases.openstack.org/rocky/schedule.html
>
> This week was the RC1 deadline as well as the string freeze, so we should
> not be merging any changes to strings for Rocky. We have two weeks to
> release another RC if we need to.
>
> ## Help with this newsletter
>
> Help contribute to this newsletter by editing the etherpad:
> https://etherpad.openstack.org/p/keystone-team-newsletter
> Dashboard generated using gerrit-dash-creator and
> https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67
>


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Lance Bragstad


On 08/09/2018 12:48 PM, Doug Hellmann wrote:
> Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500:
>> On 8/9/2018 11:47 AM, Doug Hellmann wrote:
>>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
 For evidence, see:

 http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING

 thousands of these are filling the logs with WARNING-level log messages,
 making it difficult to find anything:

 Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
 devstack@placement-api.service[14403]: WARNING py.warnings
 [req-a809b022-59af-4628-be73-488cfec3187d
 req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
 /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
 UserWarning: Policy placement:resource_providers:list failed scope
 check. The token used to make the request was project scoped but the
 policy requires ['system'] scope. This behavior may change in the future
 where using the intended scope is required
 Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
 devstack@placement-api.service[14403]:   warnings.warn(msg)
 Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
 devstack@placement-api.service[14403]:

 Is there any way we can get rid of these?

 Thanks,
 -jay

>>> It looks like those are coming out of the policy library? Maybe file a
>>> bug there. I added "oslo" to the subject line to get the team's
>>> attention.
>>>
>>> This feels like something we could fix and backport to rocky.
>>>
>>> Doug
>> I could have sworn I created a bug in oslo.policy for this at one point 
>> for the same reason Jay mentions it, but I guess not.
>>
>> We could simply, on the nova side, add a warnings filter to only log 
>> this once.
>>
> What level should it be logged at in the policy library? Should it be
> logged there at all?

The initial intent behind logging was to make sure operators knew that
they needed to make a role assignment adjustment in order to be
compatible moving forward. I can investigate a way to log things at
least once in oslo.policy though. I fear not logging it at all would
cause failures in upgrade since operators wouldn't know they need to
make that adjustment.
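Matt's nova-side idea of a warnings filter can be sketched with the standard library alone (a hedged sketch, not the eventual oslo.policy fix): a "once" filter makes each unique warning message surface a single time per process instead of once per policy check.

```python
import warnings

# Record warnings so we can count them; "once" collapses repeats of the
# same message/category to a single occurrence.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("once")
    for _ in range(5):
        warnings.warn(
            "Policy placement:resource_providers:list failed scope check.",
            UserWarning)

# Only the first of the five identical warnings was recorded.
assert len(caught) == 1
```

In a service this would be a one-line `warnings.filterwarnings("once", ...)` at startup, which keeps the operator-facing signal without flooding the logs.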

>
> Doug
>






Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Lance Bragstad


On 08/09/2018 12:18 PM, Matt Riedemann wrote:
> On 8/9/2018 11:47 AM, Doug Hellmann wrote:
>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
>>> For evidence, see:
>>>
>>> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING
>>>
>>>
>>> thousands of these are filling the logs with WARNING-level log
>>> messages,
>>> making it difficult to find anything:
>>>
>>> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
>>> devstack@placement-api.service[14403]: WARNING py.warnings
>>> [req-a809b022-59af-4628-be73-488cfec3187d
>>> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
>>> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
>>> UserWarning: Policy placement:resource_providers:list failed scope
>>> check. The token used to make the request was project scoped but the
>>> policy requires ['system'] scope. This behavior may change in the
>>> future
>>> where using the intended scope is required
>>> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
>>> devstack@placement-api.service[14403]:   warnings.warn(msg)
>>> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
>>> devstack@placement-api.service[14403]:
>>>
>>> Is there any way we can get rid of these?
>>>
>>> Thanks,
>>> -jay
>>>
>> It looks like those are coming out of the policy library? Maybe file a
>> bug there. I added "oslo" to the subject line to get the team's
>> attention.
>>
>> This feels like something we could fix and backport to rocky.
>>
>> Doug
>
> I could have sworn I created a bug in oslo.policy for this at one
> point for the same reason Jay mentions it, but I guess not.
>

This? https://bugs.launchpad.net/oslo.policy/+bug/1421863

>
> We could simply, on the nova side, add a warnings filter to only log
> this once.
>






Re: [openstack-dev] Paste unmaintained

2018-08-06 Thread Lance Bragstad


On 08/02/2018 09:36 AM, Chris Dent wrote:
> On Thu, 2 Aug 2018, Stephen Finucane wrote:
>
>> Given that multiple projects are using this, we may want to think about
>> reaching out to the author and seeing if there's anything we can do to
>> at least keep this maintained going forward. I've talked to cdent about
>> this already but if anyone else has ideas, please let me know.
>
> I've sent some exploratory email to Ian, the original author, to get
> a sense of where things are and whether there's an option for us (or
> if for some reason us wasn't okay, me) to adopt it. If email doesn't
> land I'll try again with other media.
>
> I agree with the idea of trying to move away from using it, as
> mentioned elsewhere in this thread and in IRC, but it's not a simple
> step as at least in some projects we are using paste files as
> configuration that people are allowed (and do) change. Moving away
> from that is the hard part, not figuring out how to load WSGI
> middleware in a modern way.

++

Keystone has been battling this specific debate for several releases.
The mutable configuration goal in addition to some much needed technical
debt cleanup was the final nail. Long story short, moving off of paste
eases the implementations for initiatives we've had in the pipe for a
long time. We started an effort to move to flask in Rocky.

Morgan has been working through the migration since June, and it's been
quite involved [0]. At one point he mentioned trying to write up how he
approached the migration for keystone. I understand that not every
project structures their APIs the same way, but a high-level guide might
be helpful for some if the long-term goal is to eventually move off of
paste (e.g. how we approached it, things that tripped us up, how we
prepared the code base for flask, et cetera).

I'd be happy to help coordinate a session or retrospective at the PTG if
other groups find that helpful.

[0]
https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504
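For readers wondering what "loading WSGI middleware in a modern way" looks like without paste, here is a minimal hedged sketch (names are illustrative): the pipeline that paste assembled from a `[pipeline]` section in paste.ini becomes an explicit expression in code.

```python
# A trivial WSGI application (PEP 3333 interface).
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


class RequestIdMiddleware:
    """Stand-in for middleware a paste pipeline used to insert."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Tag the request before handing it down the pipeline.
        environ.setdefault("HTTP_X_REQUEST_ID", "req-example")
        return self.app(environ, start_response)


# What paste loaded from operator-editable configuration is now code:
pipeline = RequestIdMiddleware(application)
```

The hard part Chris describes is not this composition but migrating deployments that treat the paste file as configuration they are allowed to edit.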
>
>
>





[openstack-dev] [keystone] Keystone Team Update - Week of 30 July 2018

2018-08-03 Thread Lance Bragstad
# Keystone Team Update - Week of 30 July 2018

## News

This week was relatively quiet, but we're working towards RC1 as our
next deadline.

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 20 changes this week.

Mainly changes to continue moving APIs to flask and we landed a huge
token provider API refactor.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 43 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots.

Reminder that we're in soft string freeze and past the 3rd milestone so
prioritizing bug fixes is beneficial.

## Bugs

This week we opened 4 new bugs, closed 1, and fixed 3.

The main concern with
fixing https://bugs.launchpad.net/keystone/+bug/1778945 was that it would
impact downstream providers, hence the release note. Otherwise it's
cleaned up a ton of technical debt (I appreciate the reviews here).

## Milestone Outlook

This upcoming week is going to be RC1, which we will plan to cut by
Friday unless critical bugs emerge. We do have a list of bugs to target
for RC, but none of them are blockers. If it comes down to it, they can
likely be pushed to Stein. If you notice anything that comes up as a
release blocker, please let me know.

https://bit.ly/2MeXN0L
https://releases.openstack.org/rocky/schedule.html

## Help with this newsletter

Help contribute to this newsletter by editing the
etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator
and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67




[openstack-dev] [keystone] Prospective RC1 Bugs

2018-08-02 Thread Lance Bragstad
Hey all,

I went through all bugs opened during the Rocky release and came up with
a list of ones that might be good to fix before next week [0]. The good
news is that more than half are in progress and none of them are release
blockers, just ones that would be good to get in.

Let me know if you see anything reported this week that needs to get fixed.

[0] https://bit.ly/2MeXN0L





[openstack-dev] [keystone] Keystone Team Update - Week of 23 July 2018

2018-07-27 Thread Lance Bragstad
# Keystone Team Update - Week of 23 July 2018

## News

This week wrapped up rocky-3, but the majority of the things working
through review are refactors that aren't necessarily susceptible to the
deadline.

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 32 changes this week, including the remaining patches for
implementing strict two-level hierarchical limits (server and client
support), Flask work, and a security fix.
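As a rough illustration of what "strict two-level" means (a hedged sketch under my reading of the spec, not the merged keystone code): project hierarchies are capped at two levels, and the sum of the children's limits may never exceed the parent's limit, so overcommit is disallowed by construction.

```python
def child_limit_is_valid(parent_limit, sibling_limits, proposed_limit):
    """True if a new or updated child limit still fits under the parent.

    sibling_limits are the limits already granted to the other children
    of the same parent project.
    """
    return sum(sibling_limits) + proposed_limit <= parent_limit


# With a parent limit of 20 and siblings holding 5 + 5, a child may claim
# up to 10 more, but not 11.
assert child_limit_is_valid(20, [5, 5], 10) is True
assert child_limit_is_valid(20, [5, 5], 11) is False
```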

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 47 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots.

There are still a lot of patches that need attention, specifically the
work to start converting keystone APIs to consume Flask. These changes
should be transparent to end users, but if you have questions about the
approach or specific reviews, please come ask in #openstack-keystone.
Kristi also has a patch up to implement the mutable config goal for
keystone [0]. This work was dependent on Flask bits that merged earlier
this week, but based on a discussion with the TC we've already missed
the deadline [1]. Reviews here would still be appreciated because it
should help us merge the implementation early in Stein.

[0] https://review.openstack.org/#/c/585417/
[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-27.log.html#t2018-07-27T15:03:49

## Bugs

This week we opened 6 new bugs and fixed 2.

The highlight here is a security bug that was fixed and backported to
all supported releases [0].

[0] https://bugs.launchpad.net/keystone/+bug/1779205

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

At this point we're past the third milestone, meaning requirements are
frozen and we're in a soft string freeze. Please be aware of those
things when reviewing patch sets. The next deadline for us is RC target
on August 10th.

## Help with this newsletter

Help contribute to this newsletter by editing the
etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator
and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67




[openstack-dev] [keystone] PTL Candidacy for the Stein cycle

2018-07-26 Thread Lance Bragstad
Hey everyone,

I'm writing to submit my self-nomination as keystone's PTL for the Stein
release.

We've made significant progress tackling some of the major goals we set
for keystone in Pike. Now that we're getting close to wrapping up some
of those initiatives, I'd like to continue advocating for enhanced RBAC
and unified limits. I think we can do this specifically by using them in
keystone, where applicable, and finalize them in Stein.

While a lot of the work we tackled in Rocky was transparent to users, it
paved the way for us to make strides in other areas. We focused on
refactoring large chunks of code in order to reduce technical debt and
traded some hand-built solutions in favor of well-known frameworks. In
my opinion, these are major accomplishments that drastically simplified
keystone. Because of this, it'll be easier to implement new features we
originally slated for this release. We also took time to smooth out
usability issues with unified limits and implemented support across
clients and libraries. This is going to help services consume keystone's
unified limits implementation early next release.

Additionally, I'd like to take some time in Stein to focus on the next
set of challenges and where we'd like to take keystone in the future.
One area that we haven't really had the bandwidth to focus on is
federation. From Juno to Ocata there was a consistent development focus
on supporting federated deployments, resulting in a steady stream of
features or improvements. Conversely, I think having a break from
constant development will help us approach it with a fresh perspective.
In my opinion, federation improvements are a timely thing to work on
given the use-cases that have been cropping up in recent summits and
PTGs. Ideally, I think it would great to come up with an actionable plan
for making federation easier to use and a first-class tested citizen of
keystone.

Finally, I'll continue to place utmost importance on assisting other
services in how they consume and leverage the work we do.

Thanks for taking a moment to read what I have to say and I look forward
to catching up in Denver.

Lance





Re: [openstack-dev] [keystone] Keystone Team Update - Week of 9 July 2018

2018-07-18 Thread Lance Bragstad


On 07/13/2018 01:33 PM, Colleen Murphy wrote:
> # Keystone Team Update - Week of 9 July 2018
>
> ## News
>
> ### New Core Reviewer
>
> We added a new core reviewer[1]: thanks to XiYuan for stepping up to take 
> this responsibility and for all your hard work on keystone!
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132123.html
>
> ### Release Status
>
> This week is our scheduled feature freeze week, but we did not have quite the 
> tumult of activity we had at feature freeze last cycle. We're pushing the 
> auth receipts work until after the token model refactor is finished[2], to 
> avoid the receipts model having to carry extra technical debt. The 
> fine-grained access control feature for application credentials is also going 
> to need to be pushed to next cycle when more of us can dedicate time to 
> helping with it it[3]. The base work for default roles was completed[4] but 
> the auditing of the keystone API hasn't been completed yet and is partly 
> dependent on the flask work, so it is going to continue on into next 
> cycle[5]. The hierarchical limits work is pretty solid but we're (likely) 
> going to let it slide into next week so that some of the interface details 
> can be worked out[6].
>   
> [2] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-10.log.html#t2018-07-10T01:39:27
> [3] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-13.log.html#t2018-07-13T14:19:08
> [4] https://review.openstack.org/572243
> [5] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-13.log.html#t2018-07-13T14:02:03
> [6] https://review.openstack.org/557696
>
> ### PTG Planning
>
> We're starting to prepare topics for the next PTG in Denver[7] so please add 
> topics to the planning etherpad[8].
>
> [7] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132144.html
> [8] https://etherpad.openstack.org/p/keystone-stein-ptg
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2IACk3F
>
> We merged 20 changes this week, including several of the flask conversion 
> patches.
>
> ## Changes that need Attention
>
> Search query: https://bit.ly/2wv7QLK
>
> There are 62 changes that are passing CI, not in merge conflict, have no 
> negative reviews and aren't proposed by bots. The major efforts to focus on 
> are the token model refactor[9], the flaskification work[10], and the 
> hierarchical project limits work[11].
>
> [9] https://review.openstack.org/#/q/is:open+topic:bug/1778945
> [10] https://review.openstack.org/#/q/is:open+topic:bug/1776504
> [11] https://review.openstack.org/#/q/is:open+topic:bp/strict-two-level-model
>
> ## Bugs
>
> This week we opened 3 new bugs and closed 4.
>
> Bugs opened (3) 
> Bug #1780532 (keystone:Undecided) opened by zheng yan 
> https://bugs.launchpad.net/keystone/+bug/1780532 
> Bug #1780896 (keystone:Undecided) opened by wangxiyuan 
> https://bugs.launchpad.net/keystone/+bug/1780896 
> Bug #1781536 (keystone:Undecided) opened by Pawan Gupta 
> https://bugs.launchpad.net/keystone/+bug/1781536 
>
> Bugs closed (0) 
>
> Bugs fixed (4) 
> Bug #1765193 (keystone:Medium) fixed by wangxiyuan 
> https://bugs.launchpad.net/keystone/+bug/1765193 
> Bug #1780159 (keystone:Medium) fixed by Sami Makki 
> https://bugs.launchpad.net/keystone/+bug/1780159 
> Bug #1780896 (keystone:Undecided) fixed by wangxiyuan 
> https://bugs.launchpad.net/keystone/+bug/1780896 
> Bug #1779172 (oslo.policy:Undecided) fixed by Lance Bragstad 
> https://bugs.launchpad.net/oslo.policy/+bug/1779172
>
> ## Milestone Outlook
>
> https://releases.openstack.org/rocky/schedule.html
>
> This week is our scheduled feature freeze. We are likely going to make an 
> extension for the hierarchical project limits work, pending discussion on the 
> mailing list.
>
> Next week is the non-client final release date[12], so work happening in 
> keystoneauth, keystonemiddleware, and our oslo libraries needs to be finished 
> and reviewed prior to next Thursday so a release can be requested in time.
I've starred some reviews that I think we should land before Thursday if
possible [0]. Eyes there would be appreciated. Morgan also reported a
bug that he is working on fixing in keystonemiddleware that we should
try to include as well [1]. I'll add the patch to the query as soon as a
review is proposed to gerrit.

[0]
https://review.openstack.org/#/q/starredby:lbragstad%2540gmail.com+status:open
[1] https://bugs.launchpad.net/keystonemiddleware/+bug/1782404
>
> [12] https://review.

Re: [openstack-dev] [keystone] Feature Status and Exceptions

2018-07-13 Thread Lance Bragstad


On 07/13/2018 02:37 PM, Harry Rybacki wrote:
> On Fri, Jul 13, 2018 at 3:20 PM Lance Bragstad  wrote:
>> Hey all,
>>
>> As noted in the weekly report [0], today is feature freeze for 
>> keystone-related specifications. I wanted to elaborate on each specification 
>> so that our plan is clear moving forward.
>>
>> Unified Limits
>>
>> I propose that we issue a feature freeze exception for this work. Mainly 
>> because the changes are relatively isolated and low-risk. The majority of 
>> the feedback on the approach is being held up by an interface decision, 
>> which doesn't impact users, it's certainly more of a developer preference 
>> [1].
>>
>> That said, I don't think it would be too ambitious to focus reviews on this 
>> next week and iron out the last few bits well before rocky-3.
>>
>> Default Roles
>>
>> The implementation to ensure each of the new defaults is available after 
>> installing keystone is complete. We realized that incorporating those new 
>> roles into keystone's default policies would be a lot easier after the flask 
>> work lands [2]. Instead of doing a bunch of work to incorporate those 
>> default and then re-doing it to accommodate flask, I think we have a safe 
>> checkpoint where we are right now. We can use free cycles during the RC 
>> period to queue up those implementation, mark them with a -2, and hit the 
>> ground running in Stein. This approach feels like the safest compromise 
>> between risk and reward.
>>
> +1 to this approach.

I've proposed a couple updates to the specification, trying to clarify
exactly what was implemented in the release [0].

[0] https://review.openstack.org/#/c/582673/

>
>> Capability Lists
>>
>> The capability lists involves a lot of work, not just within keystone, but 
>> also keystonemiddleware, which will freeze next week. I think it's 
>> reasonable to say that this will be something that has to be pushed to Stein 
>> [3].
>>
>> MFA Receipts
>>
>> Much of the code used in the existing approach uses a lot of the same 
>> patterns from the token provider API within keystone [4]. Since the UUID and 
>> SQL parts of the token provider API have been removed, we're also in the 
>> middle of cleaning up a ton of technical debt in that area [5]. Adrian seems 
>> OK giving us the opportunity to finish cleaning things up before reworking 
>> his proposal for authentication receipts. IMO, this seems totally reasonable 
>> since it will help us ensure the new code for authentication receipts 
>> doesn't have the bad patterns that have plagued us with the token provider 
>> API.
>>
>>
>> Does anyone have objections to any of these proposals? If not, I can start 
>> bumping various specs to reflect the status described here.
>>
>>
>> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132202.html
>> [1] 
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/strict-two-level-model
>> [2] 
>> https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504
>> [3] 
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds
>> [4] 
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/mfa-auth-receipt
>> [5] 
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1778945
>>






Re: [openstack-dev] [keystone] Feature Status and Exceptions

2018-07-13 Thread Lance Bragstad


On 07/13/2018 03:37 PM, Johannes Grassler wrote:
> Hello,
>
> On Fri, Jul 13, 2018 at 02:19:35PM -0500, Lance Bragstad wrote:
>> Capability Lists
>>
>> The capability lists involves a lot of work, not just within keystone,
>> but also keystonemiddleware, which will freeze next week. I think it's
>> reasonable to say that this will be something that has to be pushed to
>> Stein [3].
> I was planning to email you about that, too... I didn't have much
> time for it lately (rushing to get a few changes in Monasca in plus a
> whole bunch of packaging stuff) and with the deadline this close I
> didn't see much of a chance to get anything meaningful in.
>
> So +1 for Stein from my side. This time I can plan for and accommodate it
> by having less Monasca stuff on my plate...

+1

Thanks for confirming. There still seems to be quite a bit of discussion
around the data model and layout. We can use the PTG to focus on that as
a group if needed (and if you'll be there).

>
> Cheers,
>
> Johannes
>






[openstack-dev] [keystone] Feature Status and Exceptions

2018-07-13 Thread Lance Bragstad
Hey all,

As noted in the weekly report [0], today is feature freeze for
keystone-related specifications. I wanted to elaborate on each
specification so that our plan is clear moving forward.

Unified Limits

I propose that we issue a feature freeze exception for this work.
Mainly because the changes are relatively isolated and low-risk. The
majority of the feedback on the approach is held up by an interface
decision, which doesn't impact users; it's more a matter of developer
preference [1].

That said, I don't think it would be too ambitious to focus reviews on
this next week and iron out the last few bits well before rocky-3.
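To make the enforcement model concrete for anyone following along: the service hands the enforcer a callback reporting current usage, and the enforcer compares usage plus the requested delta against the registered limit. A purely illustrative, stdlib-only sketch — these names are hypothetical and are NOT the oslo.limit interface under review:

```python
class LimitExceeded(Exception):
    """Raised when a claim would push usage over a registered limit."""


class FlatEnforcer:
    """Toy flat enforcer in the spirit of the unified limits work.

    Hypothetical names only -- this is not the oslo.limit API, just an
    illustration of the model: the service supplies a usage callback,
    keystone-registered limits supply the caps.
    """

    def __init__(self, usage_callback, registered_limits):
        self._usage = usage_callback      # (project_id, resource) -> current usage
        self._limits = registered_limits  # resource -> maximum allowed

    def enforce(self, project_id, deltas):
        # Check every requested resource delta before the service commits it.
        for resource, delta in deltas.items():
            limit = self._limits.get(resource, 0)
            used = self._usage(project_id, resource)
            if used + delta > limit:
                raise LimitExceeded(
                    f"{resource}: {used} used + {delta} requested "
                    f"exceeds limit {limit}")
```

A consuming service would call something like `enforcer.enforce(project_id, {"instances": 1})` just before creating the resource, and abort the request if the claim fails.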

Default Roles

The implementation to ensure each of the new defaults is available after
installing keystone is complete. We realized that incorporating those
new roles into keystone's default policies would be a lot easier after
the Flask work lands [2]. Instead of doing a bunch of work to
incorporate those defaults and then re-doing it to accommodate Flask, I
think we have a safe checkpoint where we are right now. We can use free
cycles during the RC period to queue up those implementations, mark them
with a -2, and hit the ground running in Stein. This approach feels like
the safest compromise between risk and reward.

Capability Lists

The capability lists effort involves a lot of work, not just within keystone,
but also keystonemiddleware, which will freeze next week. I think it's
reasonable to say that this will be something that has to be pushed to
Stein [3].

MFA Receipts

Much of the code in the existing approach follows the same patterns as
the token provider API within keystone [4]. Since the UUID
and SQL parts of the token provider API have been removed, we're also in
the middle of cleaning up a ton of technical debt in that area [5].
Adrian seems OK giving us the opportunity to finish cleaning things up
before reworking his proposal for authentication receipts. IMO, this
seems totally reasonable since it will help us ensure the new code for
authentication receipts doesn't have the bad patterns that have plagued
us with the token provider API.


Does anyone have objections to any of these proposals? If not, I can
start bumping various specs to reflect the status described here.


[0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132202.html
[1]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/strict-two-level-model
[2]
https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504
[3]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds
[4]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/mfa-auth-receipt
[5]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1778945





[openstack-dev] [keystone] Stein PTG Planning Etherpad

2018-07-11 Thread Lance Bragstad
It's getting to be that time of the release (and I'm seeing other
etherpads popping up on the mailing list). I've created one specifically
for keystone [0].

Same drill as the last two PTGs. We'll start by just getting topics
written down and I'll group similar topics into buckets prior to
building a somewhat official schedule.

Please feel free to add topics you'd like to discuss at the PTG.

[0] https://etherpad.openstack.org/p/keystone-stein-ptg





[openstack-dev] [keystone] Adding Wangxiyuan to keystone core

2018-07-10 Thread Lance Bragstad
Hi all,

Today we added Wangxiyuan to the keystone core team [0]. He's been doing
a bunch of great work over the last couple releases and has become a
valuable reviewer [1][2]. He's also been instrumental in pushing forward
the unified limits work not only in keystone, but across projects.

Thanks Wangxiyuan for all your help and welcome to the team!

Lance

[0]
http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-07-10-16.00.log.html#l-100
[1] http://stackalytics.com/?module=keystone-group
[2] http://stackalytics.com/?module=keystone-group&release=queens





Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Lance Bragstad


On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
> I'll weigh in a bit with my operator hat on, as recent experience pertains 
> to the current conversation
>
> Kubernetes has largely succeeded in common distribution tools where OpenStack 
> has not been able to.
> kubeadm was created as a way to centralize deployment best practices, config, 
> and upgrade stuff into a common code base that other deployment tools can 
> build on.
>
> I think this has been successful for a few reasons:
>  * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. 
> (Eating its own dogfood)
>  * was willing to make their api robust enough to handle that self 
> enhancement. (secrets are a thing, orchestration is not optional, etc)
>  * they decided to produce a reference product (very important to adoption 
> IMO. You don't have to "build from source" to kick the tires.)
>  * made the barrier to testing/development as low as 'curl 
> http://..minikube; minikube start' (this spurs adoption and contribution)
>  * not having large silos in deployment projects allowed better 
> communication on common tooling.
>  * Operator focused architecture, not project based architecture. This 
> simplifies the deployment situation greatly.
>  * try whenever possible to focus on just the commons and push vendor 
> specific needs to plugins so vendors can deal with vendor issues directly and 
> not corrupt the core.
>
> I've upgraded many OpenStacks since Essex and usually it is multiple weeks of 
> prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, 
> something breaks only on the production system and needs hot patching on the 
> spot. About 10% of the time, I've had to write the patch personally.
>
> I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For 
> comparison, what did I have to do? A couple hours of looking at release notes 
> and trying to dig up examples of where things broke for others. Nothing 
> popped up. Then:
>
> on the controller, I ran:
> yum install -y kubeadm #get the newest kubeadm
> kubeadm upgrade plan #check things out
>
> It told me I had 2 choices. I could:
>  * kubeadm upgrade v1.9.8
>  * kubeadm upgrade v1.10.5
>
> I ran:
> kubeadm upgrade v1.10.5
>
> The control plane was down for under 60 seconds and then the cluster was 
> upgraded. The rest of the services did a rolling upgrade live and took a few 
> more minutes.
>
> I can take my time to upgrade kubelets as mixed kubelet versions works well.
>
> Upgrading kubelet is about as easy.
>
> Done.
>
> There's a lot of things to learn from the governance / architecture of 
> Kubernetes..
>
> Fundamentally, there aren't huge differences in what Kubernetes and OpenStack 
> try to provide users. Scheduling a VM or a Container via an api with some 
> kind of networking and storage is the same kind of thing in either case.
>
> The how to get the software (openstack or k8s) running is about as polar 
> opposite you can get though.
>
> I think if OpenStack wants to gain back some of the steam it had before, it 
> needs to adjust to the new world it is living in. This means:
>  * Consider abolishing the project walls. They are driving bad architecture 
> (not intentionally but as a side effect of structure)
>  * focus on the commons first.

Nearly all the work we've been doing from an identity perspective over
the last 18 months has enabled or directly improved the commons (or what
I would consider the commons). I agree that it's important, but we're
already focusing on it to the point where we're out of bandwidth.

Is the problem that it doesn't appear that way? Do we have different
ideas of what the "commons" are?

>  * simplify the architecture for ops:
>* make as much as possible stateless and centralize remaining state.
>* stop moving config options around with every release. Make it promote 
> automatically and persist it somewhere.
>* improve serial performance before sharding. k8s can do 5000 nodes on one 
> control plane. No reason to do nova cells and make ops deal with it except 
> for the most huge of clouds
>  * consider a reference product (think Linux vanilla kernel. distro's can 
> provide their own variants. thats ok)
>  * come up with an architecture team for the whole, not the subsystem. The 
> whole thing needs to work well.
>  * encourage current OpenStack devs to test/deploy Kubernetes. It has some 
> very good ideas that OpenStack could benefit from. If you don't know what 
> they are, you can't adopt them.
>
> And I know its hard to talk about, but consider just adopting k8s as the 
> commons and build on top of it. OpenStack's api's are good. The 
> implementations right now are very very heavy for ops. You could tie in K8s's 
> pod scheduler with vm stuff running in containers and get a vastly simpler 
> architecture for operators to deal with. Yes, this would be a major 
> disruptive change to OpenStack. But long term, I think it would make for a 
> much healthier 

Re: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5

2018-06-26 Thread Lance Bragstad


On 06/26/2018 08:57 AM, Takashi Yamamoto wrote:
> On Tue, Jun 26, 2018 at 10:13 PM, Doug Hellmann  wrote:
>> Excerpts from Lance Bragstad's message of 2018-06-25 22:51:37 -0500:
>>> Thanks a bunch for digging into this, Tony. I'll follow up with the
>>> oauthlib maintainers and see if they'd be interested in these changes
>>> upstream. If so, I can chip away at it. For now we'll have to settle for
>>> not treating warnings as errors to unblock our documentation gate [0].
>>>
>>> [0] https://review.openstack.org/#/c/577974/
>> How are docstrings from a third-party library making their way into the
>> keystone docs and breaking the build?
> in the same way that docstrings from os-vif affect networking-midonet docs.
> i.e. via class inheritance

Correct, keystone relies on an interface from that library. I've reached
out to their community to see if they would be interested in the fixes
upstream [0], and they were receptive. Until then we might have to
override the offending documentation strings somehow (per Doug's
suggestion in IRC) or disable warnings as errors in our build [1].

[0] https://github.com/oauthlib/oauthlib/issues/558
[1] https://review.openstack.org/#/c/577974/
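For context, the failure Tony tracked down is a reStructuredText field list entry in a docstring that is missing its terminating colon. A minimal illustration with hypothetical docstrings (not the actual oauthlib code):

```python
# Two hypothetical docstrings showing the class of failure: a reST field
# list entry missing its closing ':', which newer Sphinx flags when
# warnings are treated as errors (-W).

def validate_ok(request):
    """Validate a request.

    :param request: the request to validate.
    """
    return True


def validate_broken(request):
    """Validate a request.

    :param request the request to validate.
    """
    # The field name above is never closed with ':', so Sphinx cannot
    # parse it as a field list and emits a warning while building docs.
    return True
```

Because keystone's docs inherit these docstrings via class inheritance, the warning surfaces in keystone's build even though the text lives in the third-party library.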

>
>> Doug
>>
>>> On 06/25/2018 07:27 PM, Tony Breeds wrote:
>>>> On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote:
>>>>> Keystone is hitting this, too [0]. I attempted the same solution that
>>>>> Tony posted, but no luck. I've even gone so far as removing every
>>>>> comment from the module to see if that helps narrow down the problem
>>>>> area, but sphinx still trips. The output from the error message isn't
>>>>> very descriptive either. Has anyone else had issues fixing this for
>>>>> python comments, not just docstrings?
>>>>>
>>>>> [0] https://bugs.launchpad.net/keystone/+bug/1778603
>>>> I did a little digging for the keystone problem and it's due to a
>>>> missing ':' in
>>>> https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820
>>>>
>>>> So the correct way to fix this is to correct that in oauthlib, get it
>>>> released and use that.
>>>>
>>>> I hit additional problems in that enabling -W in oauthlib, to prevent
>>>> this happening in the future, led me down a rabbit hole I don't really
>>>> have cycles to dig out of.
>>>>
>>>> Here's a dump of where I got to[1].  Clearly it mixes "fixes" with
>>>> debugging but it isn't too hard to reproduce and someone that knows more
>>>> Sphinx will be able to understand the errors better than I can.
>>>>
>>>>
>>>> [1] http://paste.openstack.org/show/724271/
>>>>
>>>> Yours Tony.
>>>>
>>>>






Re: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5

2018-06-25 Thread Lance Bragstad
Thanks a bunch for digging into this, Tony. I'll follow up with the
oauthlib maintainers and see if they'd be interested in these changes
upstream. If so, I can chip away at it. For now we'll have to settle for
not treating warnings as errors to unblock our documentation gate [0].

[0] https://review.openstack.org/#/c/577974/

On 06/25/2018 07:27 PM, Tony Breeds wrote:
> On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote:
>> Keystone is hitting this, too [0]. I attempted the same solution that
>> Tony posted, but no luck. I've even gone so far as removing every
>> comment from the module to see if that helps narrow down the problem
>> area, but sphinx still trips. The output from the error message isn't
>> very descriptive either. Has anyone else had issues fixing this for
>> python comments, not just docstrings?
>>
>> [0] https://bugs.launchpad.net/keystone/+bug/1778603
> I did a little digging for the keystone problem and it's due to a
> missing ':' in 
> https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820
>
> So the correct way to fix this is to correct that in oauthlib, get it
> released and use that.
>
> I hit additional problems in that enabling -W in oauthlib, to prevent
> this happening in the future, led me down a rabbit hole I don't really
> have cycles to dig out of.
>
> Here's a dump of where I got to[1].  Clearly it mixes "fixes" with
> debugging but it isn't too hard to reproduce and someone that knows more
> Sphinx will be able to understand the errors better than I can.
>
>
> [1] http://paste.openstack.org/show/724271/
>
> Yours Tony.
>
>





Re: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5

2018-06-25 Thread Lance Bragstad
Keystone is hitting this, too [0]. I attempted the same solution that
Tony posted, but no luck. I've even gone so far as removing every
comment from the module to see if that helps narrow down the problem
area, but sphinx still trips. The output from the error message isn't
very descriptive either. Has anyone else had issues fixing this for
python comments, not just docstrings?

[0] https://bugs.launchpad.net/keystone/+bug/1778603

On 06/20/2018 11:52 PM, Takashi Yamamoto wrote:
> On Thu, Jun 21, 2018 at 12:13 PM, Tony Breeds  wrote:
>> On Wed, Jun 20, 2018 at 08:54:56PM +0900, Takashi Yamamoto wrote:
>>
>>> do you have a plan to submit these changes on gerrit?
>> I didn't but I have now:
>>
>>  * https://review.openstack.org/577028
>>  * https://review.openstack.org/577029
>>
>> Feel free to edit/test as you like.
> thank you!
>
>> Yours Tony.
>>






[openstack-dev] [all] default and implied roles changes

2018-06-19 Thread Lance Bragstad
Hi all,

Keystone recently took a big step in implementing the default roles work
that's been a hot topic over the past year [0][1][2][3][4], and a big
piece in making RBAC more robust across OpenStack. We merged a patch [5]
that ensures the roles described in the specification [6] exist. This
was formerly a cross-project specification [7], but was rescoped to target
keystone directly in hopes of making it a future community goal [8].

If you've noticed issues with various CI infrastructure, it could be due
to the fact a couple new roles are being populated by keystone's
bootstrap command. For example, if your testing infrastructure creates a
role named 'Member' or 'member', you could see HTTP 409s since keystone
is now creating that role by default. You can safely remove code that
ensures that role exists, since keystone will now handle that for you.
These types of changes have been working their way into infrastructure
and deployment projects [9] this week.
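For tooling that still wants to create the role itself, the safe pattern is to treat a 409 Conflict from keystone as success. A sketch with a stubbed client (the class and method names here are hypothetical stand-ins, not a real SDK):

```python
# Conflict-tolerant role creation: an HTTP 409 from keystone now just
# means "bootstrap already created this role", which is fine.

class Conflict(Exception):
    """Stands in for an HTTP 409 Conflict response from keystone."""


class StubIdentityClient:
    """Hypothetical in-memory stand-in for an identity client."""

    def __init__(self, existing=()):
        self.roles = set(existing)

    def create_role(self, name):
        if name in self.roles:
            raise Conflict(name)  # keystone returns 409 for duplicate names
        self.roles.add(name)


def ensure_role(client, name):
    """Idempotently ensure a role exists, tolerating a conflict."""
    try:
        client.create_role(name)
    except Conflict:
        pass  # already created (e.g. by keystone-manage bootstrap) -- done
```

The simpler fix, as noted above, is to delete the role-creation step entirely and let keystone's bootstrap own it.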

If you're seeing something that isn't an HTTP 409 and suspect it is
related to these changes, come find us in #openstack-keystone. We'll be
around to answer questions about the changes in keystone and can assist
in straightening things out.


[0] https://etherpad.openstack.org/p/policy-queens-ptg Queens PTG Policy
Session
[1] https://etherpad.openstack.org/p/queens-PTG-keystone-policy-roadmap
Queens PTG Roadmap Outline
[2] https://etherpad.openstack.org/p/rbac-and-policy-rocky-ptg Rocky PTG
Policy Session
[3] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg Rocky PTG
Identity Integration Track
[4] https://etherpad.openstack.org/p/YVR-rocky-default-roles Rocky Forum
Default Roles Forum Session
[5] https://review.openstack.org/#/c/572243/
[6]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
[7] https://review.openstack.org/#/c/523973/
[8] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130208.html
[9]
https://review.openstack.org/#/q/(status:open+OR+status:merged)+branch:master+topic:fix-member





[openstack-dev] [keystone] Keystone Team Update - Week of 4 June 2018

2018-06-11 Thread Lance Bragstad
# Keystone Team Update - Week of 4 June 2018

## News

Sorry this didn't make it out last week.

This week we were busy wrapping up specification discussion before spec
freeze. Most of which revolved around unified limits [0]. We're also
starting to see implementations for MFA receipts [1] and application
credentials capability lists [2].

[0] https://review.openstack.org/#/c/540803/
[1] 
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:spec/auth_receipts
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds

## Open Specs

Search query: https://bit.ly/2G8Ai5q

With the last few bits for hierarchical limits addressed and the
specification merged, we don't expect to accept any more specifications
for the Rocky release.

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 28 changes last week. Most of which were to move keystone off
its homegrown WSGI implementation. Converting to Flask is a pretty big
move for keystone and the team, but it reduces technical debt and will
help with maintenance costs in the future since it's one less wheel we
have to look after.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 50 changes that are passing CI, not in merge conflict, have no
negative reviews, and aren't proposed by bots. Please take a look if you
have time to do a review or two.

## Bugs

This week we opened 7 new bugs, closed 5, and fixed 5.

Bugs opened (7):
Bug #1775094 (keystone:Medium) opened by Lance Bragstad
https://bugs.launchpad.net/keystone/+bug/1775094
Bug #1774654 (keystone:Undecided) opened by Wyllys Ingersoll
https://bugs.launchpad.net/keystone/+bug/1774654
Bug #1774688 (keystone:Undecided) opened by Lance Bragstad
https://bugs.launchpad.net/keystone/+bug/1774688
Bug #1775140 (keystone:Undecided) opened by Andras Kovi
https://bugs.launchpad.net/keystone/+bug/1775140
Bug #1775207 (keystone:Undecided) opened by Pavlo Shchelokovskyy
https://bugs.launchpad.net/keystone/+bug/1775207
Bug #1775295 (keystone:Undecided) opened by johnpham
https://bugs.launchpad.net/keystone/+bug/1775295
Bug #1774722 (oslo.config:Low) opened by Kent Wu
https://bugs.launchpad.net/oslo.config/+bug/1774722

Bugs closed (5):
Bug #1578466 (keystone:Medium)
https://bugs.launchpad.net/keystone/+bug/1578466
Bug #1578401 (keystone:Low)
https://bugs.launchpad.net/keystone/+bug/1578401
Bug #1775140 (keystone:Undecided)
https://bugs.launchpad.net/keystone/+bug/1775140
Bug #1775295 (keystone:Undecided)
https://bugs.launchpad.net/keystone/+bug/1775295
Bug #1774722 (oslo.config:Low)
https://bugs.launchpad.net/oslo.config/+bug/1774722

Bugs fixed (5):
Bug #1728907 (keystone:Low) fixed by Gage Hugo
https://bugs.launchpad.net/keystone/+bug/1728907
Bug #1673859 (oslo.policy:Undecided

[openstack-dev] [keystone] test storyboard environment

2018-06-04 Thread Lance Bragstad
Hi all,

The StoryBoard team was nice enough to migrate existing content for all
keystone-related launchpad projects to a dev environment [0]. This gives
us the opportunity to use StoryBoard with real content.

Log in and check it out. I'm curious to know what the rest of the team
thinks.

[0] https://storyboard-dev.openstack.org/#!/project_group/46





Re: [openstack-dev] Questions about token scopes

2018-06-01 Thread Lance Bragstad
It looks like I had a patch up to improve some developer documentation
that is relevant to this discussion [0].

[0] https://review.openstack.org/#/c/554727/

On 06/01/2018 08:01 AM, Jens Harbott wrote:
> 2018-05-30 20:37 GMT+00:00 Matt Riedemann :
>> On 5/30/2018 9:53 AM, Lance Bragstad wrote:
>>> While scope isn't explicitly denoted by an
>>> attribute, it can be derived from the attributes of the token response.
>>>
>> Yeah, this was confusing to me, which is why I reported it as a bug in the
>> API reference documentation:
>>
>> https://bugs.launchpad.net/keystone/+bug/1774229
>>
>>>> * It looks like python-openstackclient doesn't allow specifying a
>>>> scope when issuing a token, is that going to be added?
>>> Yes, I have a patch up for it [6]. I wanted to get this in during
>>> Queens, but it missed the boat. I believe this and a new release of
>>> oslo.context are the only bits left in order for services to have
>>> everything they need to easily consume system-scoped tokens.
>>> Keystonemiddleware should know how to handle system-scoped tokens in
>>> front of each service [7]. The oslo.context library should be smart
>>> enough to handle system scope set by keystonemiddleware if context is
>>> built from environment variables [8]. Both keystoneauth [9] and
>>> python-keystoneclient [10] should have what they need to generate
>>> system-scoped tokens.
>>>
>>> That should be enough to allow the service to pass a request environment
>>> to oslo.context and use the context object to reason about the scope of
>>> the request. As opposed to trying to understand different token scope
>>> responses from keystone. We attempted to abstract that away in to the
>>> context object.
>>>
>>> [6]https://review.openstack.org/#/c/524416/
>>> [7]https://review.openstack.org/#/c/564072/
>>> [8]https://review.openstack.org/#/c/530509/
>>> [9]https://review.openstack.org/#/c/529665/
>>> [10]https://review.openstack.org/#/c/524415/
>>
>> I think your reply in IRC was more what I was looking for:
>>
>> lbragstad   mriedem: if you install
>> https://review.openstack.org/#/c/524416/5 locally with devstack and setup a
>> clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin``
>> should work 15:39
>> lbragstad   http://paste.openstack.org/raw/722357/  15:39
>>
>> So users with the system role will need to create a token using that role to
>> get the system-scoped token, as far as I understand. There is no --scope
>> option on the 'openstack token issue' CLI.
> IIUC there is no option to the "token issue" command because that
> command creates a token just like any other OSC command would do from
> the global authentication parameters specified, either on the command
> line, in the environment or via a clouds.yaml file. The "token issue"
> command simply outputs the token that is then received instead of
> using it as authentication for the "real" action taken by other
> commands.
>
> So the option to request a system scope would seem to be
> "--os-system-scope all" or the corresponding env var OS_SYSTEM_SCOPE.
> And if you do that, the resulting system-scoped token will directly be
> used when you issue a command like "openstack server list".
>
> One thing to watch out for, however, is that that option seems to be
> silently ignored if the credentials also specify either a project or a
> domain. Maybe generating a warning or even an error in that situation
> would be a cleaner solution.
>






[openstack-dev] [keystone] failing documentation jobs

2018-05-31 Thread Lance Bragstad
Hi all,

If you've been trying to write documentation patches, you may have
noticed them tripping over unrelated errors when building the docs. We
have a bug opened detailing why this happened [0] and a fix working its
way through the gate [1]. The docs job should be back up and running soon.

[0] https://bugs.launchpad.net/keystone/+bug/1774508
[1] https://review.openstack.org/#/c/571369/





Re: [openstack-dev] Questions about token scopes

2018-05-31 Thread Lance Bragstad


On 05/31/2018 12:09 AM, Ghanshyam Mann wrote:
> On Wed, May 30, 2018 at 11:53 PM, Lance Bragstad  wrote:
>>
>> On 05/30/2018 08:47 AM, Matt Riedemann wrote:
>>> I know the keystone team has been doing a lot of work on scoped tokens
>>> and Lance has been trying to roll that out to other projects (like nova).
>>>
>>> In Rocky the nova team is adding granular policy rules to the
>>> placement API [1] which is a good opportunity to set scope on those
>>> rules as well.
>>>
>>> For now, we've just said everything is system scope since resources in
>>> placement, for the most part, are managed by "the system". But we do
>>> have some resources in placement which have project/user information
>>> in them, so could theoretically also be scoped to a project, like GET
>>> /usages [2].
> Just adding that this is same for nova policy also. As you might know
> spec[1] try to make nova policy more granular but on hold because of
> default roles things. We will do policy rule split with more better
> defaults values like read-only for GET APIs.
>
> Along with that, like you mentioned about scope setting for placement
> policy rules, we need to do same for nova policy also. That can be
> done later or together with nova policy granular. spec.
>
> [1] https://review.openstack.org/#/c/547850/
>
>>> While going through this, I've been hammering Lance with questions but
>>> I had some more this morning and wanted to send them to the list to
>>> help spread the load and share the knowledge on working with scoped
>>> tokens in the other projects.
>> ++ good idea
>>
>>> So here goes with the random questions:
>>>
>>> * devstack has the admin project/user - does that by default get
>>> system scope tokens? I see the scope is part of the token create
>>> request [3] but it's optional, so is there a default value if not
>>> specified?
>> No, not necessarily. The keystone-manage bootstrap command is what
>> bootstraps new deployments with the admin user, an admin role, a project
>> to work in, etc. It also grants the newly created admin user the admin
>> role on a project and the system. This functionality was added in Queens
>> [0]. This should be backwards compatible and allow the admin user to get
>> tokens scoped to whatever they had authorization on previously. The only
>> thing they should notice is that they have another role assignment on
>> something called the "system". That being said, they can start
>> requesting system-scoped tokens from keystone. We have a document that
>> tries to explain the differences in scopes and what they mean [1].
> Another related question: will setting scope impact existing
> operators? When policy rules start setting scope, that might break
> existing operators, since their current token (say, project-scoped)
> might no longer authorize against a policy rule that now requires
> system scope.
>
> If so, how do we avoid breaking upgrades? One option could be to
> soft-enforce scope for a cycle with a warning and then start enforcing
> it after one cycle (like we do for any policy rule change), but I'm
> not sure at this point.

Good question. This was the primary driver behind adding a new
configuration option to the oslo.policy library called `enforce_scope`
[0]. This lets operators turn off scope checking while they work
through a few things.

They'll need to audit their users and give administrators of the
deployment access to the system via a system role assignment (as opposed
to the 'admin' role on some random project). They also need to ensure
those people understand the concept of system scope. They might also
send emails or notifications explaining the incoming changes and why
they're being done, et cetera. Ideally, this should buy operators time
to clean things up by reassessing their policy situation with the new
defaults and scope types before enforcing those constraints. If
`enforce_scope` is False, then a warning is logged during the
enforcement check saying something along the lines of "someone used a
token scoped to X to do something in Y".

[0]
https://docs.openstack.org/oslo.policy/latest/configuration/index.html#oslo_policy.enforce_scope
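The soft-enforcement behavior described above can be sketched in plain
Python. This is a simplified illustration, not the actual oslo.policy
implementation; the `enforce_scope` flag mirrors the real configuration
option, but the `check_scope` helper itself is made up here:

```python
import logging

logging.basicConfig(level=logging.WARNING)
LOG = logging.getLogger(__name__)


class InvalidScope(Exception):
    """Raised when a token's scope does not match a rule's scope types."""


def check_scope(token_scope, rule_scope_types, enforce_scope):
    """Approximate the per-rule scope check oslo.policy performs.

    token_scope: scope of the incoming token, e.g. 'project' or 'system'
    rule_scope_types: scopes the policy rule allows, e.g. ['system']
    enforce_scope: mirrors the [oslo_policy] enforce_scope option
    """
    if token_scope in rule_scope_types:
        return True
    if enforce_scope:
        raise InvalidScope(
            "%s scope is required, got a %s-scoped token"
            % (" or ".join(rule_scope_types), token_scope))
    # Soft enforcement: log a warning so operators can clean up role
    # assignments before flipping enforce_scope to True.
    LOG.warning("someone used a %s-scoped token to call an API that "
                "expects %s scope", token_scope, rule_scope_types)
    return True


# With enforcement off, a project-scoped token still passes (with a warning).
assert check_scope("project", ["system"], enforce_scope=False)
```

The key design point is that the warning path is temporary: once
operators have granted system role assignments, they flip the option
and the same mismatch becomes a hard failure.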

>
>> [0] https://review.openstack.org/#/c/530410/
>> [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html
>>
>>> * Why don't the token create and show APIs return the scope?
>> Good question. In a way, they do. If you look at a response when you
>> authenticate for a token or validate a token, you should see an object
>> contained within the token reference for the 

Re: [openstack-dev] Questions about token scopes

2018-05-31 Thread Lance Bragstad


On 05/30/2018 03:37 PM, Matt Riedemann wrote:
> On 5/30/2018 9:53 AM, Lance Bragstad wrote:
>> While scope isn't explicitly denoted by an
>> attribute, it can be derived from the attributes of the token response.
>>
>
> Yeah, this was confusing to me, which is why I reported it as a bug in
> the API reference documentation:
>
> https://bugs.launchpad.net/keystone/+bug/1774229
>
>>> * It looks like python-openstackclient doesn't allow specifying a
>>> scope when issuing a token, is that going to be added?
>> Yes, I have a patch up for it [6]. I wanted to get this in during
>> Queens, but it missed the boat. I believe this and a new release of
>> oslo.context are the only bits left in order for services to have
>> everything they need to easily consume system-scoped tokens.
>> Keystonemiddleware should know how to handle system-scoped tokens in
>> front of each service [7]. The oslo.context library should be smart
>> enough to handle system scope set by keystonemiddleware if context is
>> built from environment variables [8]. Both keystoneauth [9] and
>> python-keystoneclient [10] should have what they need to generate
>> system-scoped tokens.
>>
>> That should be enough to allow the service to pass a request environment
>> to oslo.context and use the context object to reason about the scope of
>> the request, as opposed to trying to understand different token scope
>> responses from keystone. We attempted to abstract that away into the
>> context object.
>>
>> [6]https://review.openstack.org/#/c/524416/
>> [7]https://review.openstack.org/#/c/564072/
>> [8]https://review.openstack.org/#/c/530509/
>> [9]https://review.openstack.org/#/c/529665/
>> [10]https://review.openstack.org/#/c/524415/
>
> I think your reply in IRC was more what I was looking for:
>
> lbragstad    mriedem: if you install
> https://review.openstack.org/#/c/524416/5 locally with devstack and
> setup a clouds.yaml, ``openstack token issue --os-cloud
> devstack-system-admin`` should work    15:39
> lbragstad    http://paste.openstack.org/raw/722357/    15:39
>
> So users with the system role will need to create a token using that
> role to get the system-scoped token, as far as I understand. There is
> no --scope option on the 'openstack token issue' CLI.
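The `--os-cloud devstack-system-admin` invocation above relies on a
clouds.yaml entry roughly like the following. This is a sketch with
placeholder credentials and endpoint; the `system_scope: all` key is
how openstacksdk-style auth configuration expresses system scope:

```yaml
clouds:
  devstack-system-admin:
    auth:
      auth_url: http://203.0.113.10/identity
      username: admin
      password: secret
      user_domain_id: default
      system_scope: all   # request a system-scoped token
    identity_api_version: 3
```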
>
>> Uhm, if I understand your question, it depends on how you define the
>> scope types for those APIs. If you set them to system scope, then an
>> operator will need to use a system-scoped token in order to access those
>> APIs, but only if placement.conf sets [oslo_policy] enforce_scope =
>> True. Otherwise, leaving that option false will log a warning to
>> operators saying that someone is accessing a system-scoped API with a
>> project-scoped token (e.g. education needs to happen).
>>
>
> All placement APIs will be system scoped for now, so yeah I guess if
> operators enable scope enforcement they'll just have to learn how to
> deal with system-scope enforced APIs.
>
> Here is another random question:
>
> Do we have any CI jobs running devstack/tempest with scope enforcement
> enabled to see what blows up?
>

Yes and no. There is an effort to include CI testing of some sort,
building on devstack, tempest, and patrole [0]. We actually have a
specification that details how we plan to start testing these changes
with an experimental job, once we get the correct RBAC behavior that we
want [1].

If anyone has cycles or is interested in test coverage for this type of
stuff, please don't hesitate to reach out. We could really use some help
in this area and we have a pretty good plan in place.

[0] https://github.com/openstack/patrole
[1] https://review.openstack.org/#/c/464678/





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions about token scopes

2018-05-30 Thread Lance Bragstad


On 05/30/2018 08:47 AM, Matt Riedemann wrote:
> I know the keystone team has been doing a lot of work on scoped tokens
> and Lance has been trying to roll that out to other projects (like nova).
>
> In Rocky the nova team is adding granular policy rules to the
> placement API [1] which is a good opportunity to set scope on those
> rules as well.
>
> For now, we've just said everything is system scope since resources in
> placement, for the most part, are managed by "the system". But we do
> have some resources in placement which have project/user information
> in them, so could theoretically also be scoped to a project, like GET
> /usages [2].
>
> While going through this, I've been hammering Lance with questions but
> I had some more this morning and wanted to send them to the list to
> help spread the load and share the knowledge on working with scoped
> tokens in the other projects.

++ good idea

>
> So here goes with the random questions:
>
> * devstack has the admin project/user - does that by default get
> system scope tokens? I see the scope is part of the token create
> request [3] but it's optional, so is there a default value if not
> specified?

No, not necessarily. The keystone-manage bootstrap command is what
bootstraps new deployments with the admin user, an admin role, a project
to work in, etc. It also grants the newly created admin user the admin
role on a project and the system. This functionality was added in Queens
[0]. This should be backwards compatible and allow the admin user to get
tokens scoped to whatever they had authorization on previously. The only
thing they should notice is that they have another role assignment on
something called the "system". That being said, they can start
requesting system-scoped tokens from keystone. We have a document that
tries to explain the differences in scopes and what they mean [1].

[0] https://review.openstack.org/#/c/530410/
[1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html

>
> * Why don't the token create and show APIs return the scope?

Good question. In a way, they do. If you look at a response when you
authenticate for a token or validate a token, you should see an object
contained within the token reference for the purpose of scope. For
example, a project-scoped token will have a project object in the
response [2]. A domain-scoped token will have a domain object in the
response [3]. The same is true for system scoped tokens [4]. Unscoped
tokens do not have any of these objects present and do not contain a
service catalog [5]. While scope isn't explicitly denoted by an
attribute, it can be derived from the attributes of the token response.

[2] http://paste.openstack.org/raw/722349/
[3] http://paste.openstack.org/raw/722351/
[4] http://paste.openstack.org/raw/722348/
[5] http://paste.openstack.org/raw/722350/
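The derivation Lance describes can be shown with a small helper. This
is a sketch against the token body shape in those pastes, and it
assumes the dict under the "token" key of the response has already been
extracted:

```python
def derive_scope(token):
    """Derive the scope of a keystone token from its response attributes.

    `token` is the dict under the "token" key of an authentication
    response. Scope is not an explicit attribute; it is implied by
    which objects are present in the body.
    """
    if "system" in token:
        return "system"
    if "domain" in token:
        return "domain"
    if "project" in token:
        return "project"
    # Unscoped tokens carry none of the above and no service catalog.
    return "unscoped"


# A project-scoped token has a project object (its nested domain does
# not appear at the top level, so the ordering above is safe).
assert derive_scope({"project": {"id": "abc"}}) == "project"
assert derive_scope({"system": {"all": True}}) == "system"
```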


>
> * It looks like python-openstackclient doesn't allow specifying a
> scope when issuing a token, is that going to be added?

Yes, I have a patch up for it [6]. I wanted to get this in during
Queens, but it missed the boat. I believe this and a new release of
oslo.context are the only bits left in order for services to have
everything they need to easily consume system-scoped tokens.
Keystonemiddleware should know how to handle system-scoped tokens in
front of each service [7]. The oslo.context library should be smart
enough to handle system scope set by keystonemiddleware if context is
built from environment variables [8]. Both keystoneauth [9] and
python-keystoneclient [10] should have what they need to generate
system-scoped tokens.
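For illustration, the v3 auth request body that a client library builds
for a system-scoped token looks like this. The helper is hand-written
for this sketch, but the payload shape follows the keystone v3
`POST /v3/auth/tokens` API, where system scope is expressed as
`{"system": {"all": true}}`:

```python
def build_auth_request(username, password, user_domain_id, scope=None):
    """Build a keystone v3 POST /v3/auth/tokens request body.

    scope=None yields an unscoped request; scope="system" requests a
    system-scoped token; a dict like {"project": {"id": ...}} scopes
    the token to a project.
    """
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "password": password,
                        "domain": {"id": user_domain_id},
                    }
                },
            }
        }
    }
    if scope == "system":
        body["auth"]["scope"] = {"system": {"all": True}}
    elif scope is not None:
        body["auth"]["scope"] = scope
    return body


req = build_auth_request("admin", "secret", "default", scope="system")
assert req["auth"]["scope"] == {"system": {"all": True}}
```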

That should be enough to allow the service to pass a request environment
to oslo.context and use the context object to reason about the scope of
the request, as opposed to trying to understand different token scope
responses from keystone. We attempted to abstract that away into the
context object.

[6] https://review.openstack.org/#/c/524416/
[7] https://review.openstack.org/#/c/564072/
[8] https://review.openstack.org/#/c/530509/
[9] https://review.openstack.org/#/c/529665/
[10] https://review.openstack.org/#/c/524415/
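To make the flow concrete: keystonemiddleware validates the token and
exports headers into the WSGI environment, and oslo.context builds a
context object from them. Below is a simplified stand-in; the real
entry point is `oslo_context.context.RequestContext.from_environ`, and
the exact header names here (following the usual `HTTP_X_*` auth-token
conventions) should be treated as assumptions of this sketch:

```python
class SimpleContext:
    """Minimal stand-in for an oslo.context RequestContext."""

    def __init__(self, user_id=None, project_id=None, domain_id=None,
                 system_scope=None):
        self.user_id = user_id
        self.project_id = project_id
        self.domain_id = domain_id
        self.system_scope = system_scope  # e.g. "all" for system tokens

    @classmethod
    def from_environ(cls, environ):
        # keystonemiddleware sets these values after validating the token.
        return cls(
            user_id=environ.get("HTTP_X_USER_ID"),
            project_id=environ.get("HTTP_X_PROJECT_ID"),
            domain_id=environ.get("HTTP_X_DOMAIN_ID"),
            system_scope=environ.get("HTTP_OPENSTACK_SYSTEM_SCOPE"),
        )

    @property
    def scope(self):
        """Reason about scope from the context, not the raw token body."""
        if self.system_scope:
            return "system"
        if self.project_id:
            return "project"
        if self.domain_id:
            return "domain"
        return "unscoped"


ctx = SimpleContext.from_environ(
    {"HTTP_X_USER_ID": "u1", "HTTP_OPENSTACK_SYSTEM_SCOPE": "all"})
assert ctx.scope == "system"
```

The point of the abstraction is that service code only ever asks the
context object about scope, rather than parsing keystone's differing
token-response shapes itself.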

>
> The reason I'm asking about OSC stuff is because we have the
> osc-placement plugin [4] which allows users with the admin role to
> work with resources in placement, which could be useful for things
> like fixing up incorrect or leaked allocations, i.e. fixing the
> fallout of a bug in nova. I'm wondering if we define all of the
> placement API rules as system scope and we're enforcing scope, will
> admins, as we know them today, continue to be able to use those APIs?
> Or will deployments just need to grow a system-scope admin
> project/user and per-project admin users, and then use the former for
> working with placement via the OSC plugin?

Uhm, if I understand your question, it depends on how you define the
scope types for those APIs. If you set them to system-scope, then an
operator will need to use a system-scoped token in order to access 

Re: [openstack-dev] [keystone] Signing off

2018-05-30 Thread Lance Bragstad
I remember when I first started contributing upstream, I spent a
Saturday sending you internal emails asking about the intricacies of
database migrations :)

Since then you've given me (or I've stolen) a number of other tools and
techniques. Thanks for everything you've done for this community, Henry.
It's been a pleasure!

On 05/30/2018 03:45 AM, Henry Nash wrote:
> Hi
>  
> It is with a somewhat heavy heart that I have decided that it is time
> to hang up my keystone core status. Having been involved since the
> closing stages of Folsom, I've had a good run! When I look at how far
> keystone has come since the v2 days, it is remarkable - and we should
> all feel a sense of pride in that.
>  
> Thanks to all the hard work, commitment, humour and support from all
> the keystone folks over the years - I am sure we will continue to
> interact and meet among the many other open source projects that many
> of us are becoming involved with. Ad astra!
>  
> Best regards,
>  
> Henry
> Twitter: @henrynash
> linkedIn: www.linkedin.com/in/henrypnash
>  
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with
> number 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
>
>





Re: [openstack-dev] [keystone] team dinner

2018-05-20 Thread Lance Bragstad
Alright, based on the responses it looks like Tuesday is going to be the
best option for everyone.

There was one suggestion for sushi and it looks like there are more than
a few places around. Here are the ones I've found:

http://www.momogastown.ca/menus/
http://sushiyan.ca/#/menu
http://urbansushi.com/

There is also other stuff close by like:

http://steamworks.com/brew-pub
https://www.cactusclubcafe.com/?utm_source=google-maps_medium=organic_campaign=coal-harbour

Or if you've gone to a place you'd like to recommend, suggestions are
welcome!


On 05/18/2018 08:39 AM, Lance Bragstad wrote:
> Hey all,
>
> I put together a survey to see if we can plan a night to have supper
> together [0]. I'll start parsing responses tomorrow and see what we can
> get lined up.
>
> Thanks and safe travels to Vancouver,
>
> Lance
>
> [0] https://goo.gl/forms/ogNsf9dUno8BHvqu1
>






Re: [Openstack-operators] [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

2018-05-18 Thread Lance Bragstad
Here is the link to the session in case you'd like to add it to your
schedule [0].

[0]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21759/openstack-is-mature-time-to-get-serious-on-maintainers

On 05/17/2018 07:55 PM, Rochelle Grober wrote:
>
> Folks,
>
>  
>
> TL;DR
>
> The last session related to extended releases is: OpenStack is
> "mature" -- time to get serious on Maintainers
> It will be in room 220 at 11:00-11:40
>
> The etherpad for the last session in the series on Extended releases
> is here:
>
> https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3
>
>  
>
> There are links to info on other communities’ maintainer
> processes/roles/responsibilities as well, as reference material on how
> others have made it work (or not).
>
>  
>
> The nitty gritty details:
>
>  
>
> The upcoming Forum is filled with sessions that are focused on issues
> needed to improve and maintain the sustainability of OpenStack
> projects for the long term.  We have discussion on reducing technical
> debt, extended releases, fast forward installs, bringing Ops and User
> communities closer together, etc.  The community is showing it is now
> invested in activities that are often part of “Sustaining Engineering”
> teams (corporate speak) or “Maintainers” (OSS speak).  We are doing
> this; we are thinking about the moving parts to do this; let’s think
> about the contributors who want to do these and bring some clarity to
> their roles and the processes they need to be successful.  I am hoping
> you read this and keep these ideas in mind as you participate in the
> various Forum sessions.  Then you can bring the ideas generated during
> all these discussions to the Maintainers session near the end of the
> Summit to brainstorm how to visualize and define this new(ish)
> component of our technical community.
>
>  
>
> So, who has been doing the maintenance work so far?  Mostly unsung
> heroes like the Stable Release team, Release team, Oslo team,
> project liaisons and the community goals champions (yes, moving to py3
> is a sustaining/maintenance type of activity).  And some operators
> (Hi, mnaser!).  We need to lean on their experience and what we think
> the community will need to reduce that technical debt to outline what
> the common tasks of maintainers should be, what else might fall in
> their purview, and how to partner with them to better serve them.
>
>  
>
> With API lower limits, new tool versions, placement, py3, and even
> projects reaching “code complete” or “maintenance mode,” there is a
> lot of work for maintainers to do (I really don’t like that term, but
> is there one that fits OpenStack’s community?).  It would be great if
> we could find a way to share the load such that we can have part time
> contributors here.  We know that operators know how to cherry-pick,
> test in their clouds, and do bug fixes.  How do we pair with them to get
> fixes upstreamed without requiring them to be full-on developers?  We
> have a bunch of alumni who have stopped being “cores” and sometimes
> even developers, but who love our community and might be willing and
> able to put in a few hours a week, maybe reviewing small patches,
> providing help with user/ops submitted patch requests, or whatever.
> They were trusted with +2 and +W in the past, so we should at least be
> able to trust they know what they know.  We would need some way to
> identify them to Cores, since they would be sort of 1.5 on the voting
> scale, but……
>
>  
>
> So, burn out is high in other communities for maintainers.  We need to
> find a way to make sustaining the stable parts of OpenStack sustainable.
>
>  
>
> Hope you can make the talk, or add to the etherpad, or both.  The
> etherpad is very much still a work in progress (trying to organize it
> to make sense).  If you want to jump in now, go for it, otherwise it
> should be in reasonable shape for use at the session.  I hope we get a
> good mix of community and a good collection of those who are already
> doing the job without title.
>
>  
>
> Thanks and see you next week.
>
> --rocky
>
>  
>
>  
>
>  
>
> 
>
> 华为技术有限公司 Huawei Technologies Co., Ltd.
>
>
> Rochelle Grober
>
> Sr. Staff Architect, Open Source
> Office Phone:408-330-5472
> Email:rochelle.gro...@huawei.com
>
> 
>
> This e-mail and its attachments contain confidential information from
> HUAWEI, which
> is intended only for the person or entity whose address is listed
> above. Any use of the
> information contained herein in any way (including, but not limited
> to, total or partial
> disclosure, reproduction, or dissemination) by persons other than the
> intended
> recipient(s) is prohibited. If you receive this 
