Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Matt Fischer
Amrith,

Some good thoughts in your email. I've replied to a few specific pieces
below. Overall I think it's a good start to a plan.

On Sun, Jun 18, 2017 at 5:35 AM, Amrith Kumar 
wrote:

> Trove has evolved rapidly over the past several years, since integration
> in Icehouse when it only supported single instances of a few databases.
> Today it supports a dozen databases including clusters and replication.
>
> The user survey [1] indicates that while there is strong interest in the
> project, there are few large production deployments known to the
> development team.
>
> Recent changes in the OpenStack community at large (company realignments,
> acquisitions, layoffs) and the Trove community in particular, coupled with
> a mounting burden of technical debt have prompted me to make this proposal
> to re-architect Trove.
>
> This email summarizes several of the issues that face the project, both
> structurally and architecturally. This email does not claim to include a
> detailed specification for what the new Trove would look like, merely the
> recommendation that the community should come together and develop one so
> that the project can be sustainable and useful to those who wish to use it
> in the future.
>
> TL;DR
>
> Trove, with support for a dozen or so databases today, finds itself in a
> bind: it has few developers and a code-base with a significant amount of
> technical debt.
>
> Some architectural choices which the team made over the years have
> consequences which make the project less than ideal for deployers.
>
> Given that there are no major production deployments of Trove at present,
> we have an opportunity to reset the project, learn from our v1, and come
> up with a strong v2.
>
> An important aspect of making this proposal work is that we seek to
> eliminate the effort (planning and coding) involved in migrating existing
> Trove v1 deployments to the proposed Trove v2. Effectively, with work
> beginning on Trove v2 as proposed here, Trove v1 as released with Pike will
> be marked as deprecated and users will have to migrate to Trove v2 when it
> becomes available.
>
> While I would very much like to continue to support the users on Trove v1
> through this transition, the simple fact is that absent community
> participation this will be impossible. Furthermore, given that there are no
> production deployments of Trove at this time, it seems pointless to build
> that upgrade path from Trove v1 to Trove v2; it would be the proverbial
> bridge from nowhere.
>
> This (previous) statement is, I realize, contentious. There are those who
> have told me that an upgrade path must be provided, and there are those who
> have told me of unnamed deployments of Trove that would suffer. To this,
> all I can say is that if an upgrade path is of value to you, then please
> commit the development resources to participate in the community to make
> that possible. But equally, preventing a v2 of Trove or delaying it will
> only make the v1 that we have today less valuable.
>
> We have learned a lot from v1, and the hope is that we can address that in
> v2. Some of the more significant things that I have learned are:
>
> - We should adopt a versioned front-end API from the very beginning;
> making the REST API versioned is not a ‘v2 feature’
>
> - A guest agent running on a tenant instance, with connectivity to a
> shared management message bus, is a security loophole; encrypting traffic,
> per-tenant passwords, and any other such scheme is merely lipstick on a
> security hole
>

This was a major concern when we deployed it, and it drove our architectural
decisions. I'd be glad to see it resolved or re-architected.


>
> - Reliance on Nova for compute resources is fine, but dependence on Nova
> VM specific capabilities (like instance rebuild) is not; it makes things
> like containers or bare-metal second class citizens
>
> - A fair portion of what Trove does is resource orchestration; don’t
> reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
> along when Trove got started but that’s not the case today and we have an
> opportunity to fix that now
>

+1


>
> - A similarly significant portion of what Trove does is to implement a
> state machine that performs the specific workflows involved in
> database-specific operations. This makes the Trove taskmanager a stateful
> entity. Some of the operations can take a fair amount of time. This is a
> serious architectural flaw
>
> - Tenants should not ever be able to directly interact with the underlying
> storage and compute used by database instances; that should be the default
> configuration, not an untested deployment alternative
>

+1 to this also. Trove should offer a black-box DB as a Service, not
something the user sees as an instance+storage that they feel they can
manipulate.


>
> - The CI should test all databases that are considered to be ‘supported’
> without excessive use of resources

Re: [openstack-dev] [keystone] Colleen Murphy for core

2017-05-02 Thread Matt Fischer
Congrats Colleen!

On Tue, May 2, 2017 at 12:39 PM, De Rose, Ronald 
wrote:

> Congrats Colleen, well deserved!
>
>
>
> -Ron
>
>
>
> *From:* Lance Bragstad [mailto:lbrags...@gmail.com]
> *Sent:* Tuesday, May 2, 2017 11:16 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [keystone] Colleen Murphy for core
>
>
>
> Hey folks,
>
>
>
> During today's keystone meeting we added another member to keystone's core
> team. For several releases, Colleen's had a profound impact on keystone.
> Her reviews are meticulous and of incredible quality. She has no hesitation
> to jump into keystone's most confusing realms and as a result has become an
> expert on several identity topics like federation and LDAP integration.
>
>
>
> I'd like to thank Colleen for all her hard work and upholding the
> stability and usability of the project.
>
>
>
>
>
> Congratulations, Colleen!
>


[openstack-dev] [puppet] stepping down from puppet-openstack core

2017-04-04 Thread Matt Fischer
I am stepping down as core in the puppet openstack project. This is the
culmination of a long and slow refocus of my work efforts into other areas.
Additionally, I'm not sure what the future holds for me at this point;
although it's possible that I will be doing puppet again, it's not fair
for me to hold this role while I'm not active.

I am very proud of what we, the community, accomplished with these modules
since I started hacking on them in 2014. The modules went from needing lots
of work to being very stable and mostly bug free. It took a lot of work and
some tough decisions from the community to get us to this point and I was
honored to be a part of it.

Thanks


Re: [openstack-dev] [kolla][keystone] better way to rotate and distribution keystone fernet keys in container env

2017-03-06 Thread Matt Fischer
I don't think it would cause an issue if every controller rotated all at
once. The issues are more along the lines of rotating to key C when there
are tokens out there that are encrypted with keys A and B; in other words,
over-rotation. As long as your keys are properly staged, doing the rotation
all at once or spacing it out should not make any difference.
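
To illustrate, a rough sketch in Python of keystone's staging scheme (not
keystone's actual code; key contents are placeholders). Index 0 is the
staged key, the highest index is the primary that signs new tokens, and
everything in between is a secondary used only to validate:

def rotate(keys):
    # Mimic 'keystone-manage fernet_rotate': promote the staged key.
    new_primary = max(keys) + 1
    keys[new_primary] = keys.pop(0)   # staged key becomes the new primary
    keys[0] = 'fresh-staged-key'      # stage a brand-new key for next time
    return keys

keys = {0: 'staged', 1: 'old-secondary', 2: 'primary'}
rotate(keys)
# -> {1: 'old-secondary', 2: 'primary', 3: 'staged', 0: 'fresh-staged-key'}

A token signed with key 2 still validates after the rotation because key 2
stays on disk as a secondary; trouble only starts if you rotate again before
those tokens expire and max_active_keys prunes key 2, which is the
over-rotation described above.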


On Sun, Mar 5, 2017 at 10:52 PM, Jeffrey Zhang 
wrote:

> fix subject typo
>
> On Mon, Mar 6, 2017 at 12:28 PM, Jeffrey Zhang 
> wrote:
>
>> Kolla has support for keystone fernet keys, but there are still some
>> topics worth discussing.
>>
>> The key issue is key distribution. Kolla's solution works like this:
>>
>> * there is a task run frequently by a cron job to check whether
>>   the keys should be rotated. This is controlled by the
>>   `fernet_token_expiry` variable
>> * when key rotation is required, the cron job task will generate a
>>   new key using `keystone-manage fernet_rotate` and distribute all
>>   keys in the /etc/keystone/fernet-keys folder to the other nodes
>>   using `rsync --delete`
>>
>> one issue: there is no global lock around the rotate and distribute
>> steps. The above commands run on all controllers, which may cause
>> problems if all controllers run them at the same time.
>>
>> Since we are using Ansible as the deployment tool, there is no daemon
>> agent at all to keep rotation and distribution atomic. Is there an
>> easier way to implement a global lock?
>>
>> possible solutions:
>> 1. configure the cron job with a different time on each controller
>> 2. implement a global lock (no idea how)
>>
>> [0] https://docs.openstack.org/admin-guide/identity-fernet-token-faq.html
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-24 Thread Matt Fischer
On Fri, Feb 24, 2017 at 9:09 PM, joehuang  wrote:

> Hello, Matt,
>
> Thank you for your reply. Just as you mentioned, async replication should
> work for slowly changing data. My concern is the impact of replication
> delay, for example (though the chance of it happening is quite low):
>
> 1) Add a new user/group/role in RegionOne; before the new user/group/role
> is replicated to RegionTwo, the new user begins to access a RegionTwo
> service. Because the data has not arrived yet, the user's request to
> RegionTwo may be rejected when token validation fails in the local
> Keystone.
>
> 2) In the token revocation case: if we remove the user's role in RegionOne,
> the token in RegionOne will be invalid immediately, but before the remove
> operation is replicated to RegionTwo, the user can still use the token to
> access the services in RegionTwo, although this may last only a very short
> interval.
>
> Can someone evaluate whether the security risk is acceptable?
>
> Best Regards
> Chaoyi Huang (joehuang)
>
>

We actually had this happen for services like neutron even within a region,
where a network was created on one node and then immediately used from a
second node. We solved it by forcing haproxy to send transactions to one
node (with the others as backups). I only mention this because the scenario
you propose can really occur. If you are not dealing with a lot of data you
could look into enabling causal reads (assuming you are using MySQL Galera),
but this will probably cause a perf hit (I did not test the impact).
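
For reference, a minimal sketch of what per-session causal reads look like
(PyMySQL against a Galera node; the host, credentials, and query are
placeholders, and note wsrep_sync_wait superseded the older
wsrep_causal_reads variable):

import pymysql

# wsrep_sync_wait=1 makes this session wait until the node has applied
# all cluster writes before serving the read, at some latency cost.
conn = pymysql.connect(host='keystone-db', user='keystone',
                       password='secret', database='keystone')
with conn.cursor() as cur:
    cur.execute('SET SESSION wsrep_sync_wait = 1')
    cur.execute('SELECT id, name FROM project WHERE name = %s', ('demo',))
    print(cur.fetchall())
conn.close()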

For scenario 2: I suppose you need to ask yourself, if I remove a user or
role, can I live with 2-5 seconds for that token to be revoked in all
regions? In our case it was not a major concern, but I worked on a private
cloud.

For scenario 1: if I were you, I would figure out whether it's ever likely
to really happen before you invest a bunch of time into solving it. That
will depend a lot on your sync time. We only had 2 regions and we owned the
pipes, so it was not a major concern.

Sorry I don't have more definite answers for you.


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-24 Thread Matt Fischer
>
>
> At last, we still have one question:
> For public clouds, it is very common to deploy multiple regions, and the
> distance between the regions is usually very large, so the transport
> delay is a real problem. Fernet tokens require the data to be the same
> everywhere. Because of the slow connection and high delay, in our opinion
> it is unrealistic to have the keystones from different regions use the
> same keystone data center. Any idea about this problem? Thanks.
>
>
>

There's nothing in Fernet tokens that would cause an issue with the
transportation delay. You could mail the Fernet keys to each region and
you'd still be fine. Why? Because key rotation means that the "next" key is
already in place on every box when you rotate keys. There is a widely held
misconception that all keystone nodes must instantaneously sync keys in
every region or it won't work; that is simply not true. In fact, the main
reason we switched to Fernet was to REDUCE the load on our cross-region
replication. Without a database full of tokens to deal with, there's
basically nothing to replicate, as Joe says below. User/group/role changes
for us were a few-times-a-day operation, whereas getting a token happens
thousands of times per second.
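
You can see why with the cryptography library that keystone's fernet tokens
are built on: validation succeeds as long as the signing key is anywhere in
a node's key ring, primary or not (a toy sketch, not keystone code):

from cryptography.fernet import Fernet, MultiFernet

k1 = Fernet(Fernet.generate_key())
k2 = Fernet(Fernet.generate_key())

# MultiFernet encrypts with its first key but tries every key to decrypt.
region_one = MultiFernet([k2, k1])  # k2 already promoted to primary
region_two = MultiFernet([k1, k2])  # key sync lagging: k1 still primary

token = region_one.encrypt(b'scoped-token-payload')
assert region_two.decrypt(token) == b'scoped-token-payload'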


Re: [openstack-dev] [puppet] Thank you.

2017-01-24 Thread Matt Fischer
Cody,

Thank you for your contributions over the years.

On Fri, Jan 20, 2017 at 12:29 PM, Cody Herriges  wrote:

> I attempted to send this out last week but think I messed it up by sending
> from my work email address which isn't the one I am signed up to the lists
> with.  Seeing Alex's note in IRC this morning reminded me that I had
> probably screwed it up...
>
> I just wanted to let everyone know how much I truly appreciate the effort
> you've all put into these modules over the years.  For me it's been a
> long-standing example of the maturity and utility of Puppet.
>
> Also, thank you for accepting me back into the community as a core
> reviewer after a long absence.  Ironically, my push to be more involved in
> the OpenStack community started a movement for me inside Puppet that has
> resulted in a role change from being an operator and developer to being a
> manager in our Business Development team.  This has been happening
> gradually, which is the reason for my reduced presence over the past few
> months, and it became official last week.  Since it is now official, it
> marks the completion of the hand-off of management of our internal cluster
> to other individuals inside Puppet, so I asked Alex to remove me from core.
>
> I'll likely still pop in and out of activity, but it'll largely be for
> personal reasons.  I hope to get a more hobby-like enjoyment out of the
> low-level practitioner bits of OpenStack from here on out.
>
> --
> Cody Herriges
>


Re: [openstack-dev] [Trove] Resource not found when creating db instances.

2017-01-18 Thread Matt Fischer
Trove works fine with neutron. I would look deeper into your logs. Do you
have any errors about Rabbit message timeouts? If so, your guest may have
issues talking to Rabbit. That seems to be a common issue.

On Wed, Jan 18, 2017 at 8:59 PM, Amrith Kumar 
wrote:

> Sorry Wang Sen, why do you say "Trove is not ready for Neutron"? It has
> worked with Neutron for several releases now.
>
> This does not appear to be at all related to Neutron.
>
> -amrith
>
> --
> amrith.ku...@gmail.com
> On Jan 18, 2017 10:56 PM, "Wang Sen"  wrote:
>
>> Hi all,
>>
>> I hit a "resource not found" error when creating a database
>> instance. The instance stays in build status and turns to error status
>> after a timeout.
>>
>> I know trove is not ready for neutron. Is there a workaround for this
>> issue? Thanks in advance.
>>
>> Below is the detailed information.
>>
>> Error Log
>> =
>>
>> /var/log/trove/trove-taskmanager.log:
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task [-] Error
>> during Manager.publish_exists_event
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task Traceback
>> (most recent call last):
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line
>> 220, in run_periodic_tasks
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  task(self, context)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/taskmanager/manager.py", line
>> 429, in publish_exists_event
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  self.admin_context)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
>> line 178, in publish_exist_events
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  notifications = transformer()
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
>> line 271, in __call__
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  client=self.nova_client)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/trove/extensions/mgmt/instances/models.py",
>> line 40, in load_mgmt_instances
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  mgmt_servers = client.servers.list(search_opts={'all_tenants': 1})
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 835,
>> in list
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  "servers")
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 249, in _list
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
>> body = self.api.client.get(url)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 480, in get
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
>> self._cs_request(url, 'GET', **kwargs)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 436, in
>> _cs_request
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  self.authenticate()
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 619, in
>> authenticate
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  self._v2_auth(auth_url)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 684, in
>> _v2_auth
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task return
>> self._authenticate(url, body)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 697, in
>> _authenticate
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task
>>  **kwargs)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 431, in
>> _time_request
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task resp,
>> body = self.request(url, method, **kwargs)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task   File
>> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 425, in
>> request
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic_task raise
>> exceptions.from_response(resp, body, url, method)
>> 2017-01-19 11:27:31.666 22795 ERROR oslo_service.periodic

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Matt Fischer
>
>
>
> I'm surprised any AD administrator let Keystone write to it. I've always
> heard the inverse: that AD admins would never allow keystone to write to
> it, and therefore it was never used for Projects or Assignments. Users were
> likewise read-only when AD was involved.
>
> I have seen normal LDAP setups work with Keystone in both read and write
> mode (but even then, write-allowed setups were the extreme minority).
>

Yes agreed. AD administrators are generally pretty protective of write
access. And especially so of some Linux-based open source project writing
into their Windows kingdom. We got over our lack of being able to store
assignment in LDAP, mainly because the blocker was not Keystone, it was
corporate policy.

As for everything else that's been discussed, I think database replication
is easier, and when you're not replicating tokens, there's just not that
much traffic across the WAN. It's been very stable for us, especially since
we started using Fernet tokens.


Re: [openstack-dev] [keystone] Pike PTL

2016-11-22 Thread Matt Fischer
Steve,

Your tenure as PTL was excellent for the continued stability and
performance of Keystone. You also did a great job of taking feedback from
operators. Thanks for your work!

On Nov 22, 2016 2:06 PM, "De Rose, Ronald"  wrote:

> Thank you Steve, we’ve been lucky to have you as PTL.  Very much
> appreciate your work.
>
>
>
> -Ron
>
>
>
>
>
> *From:* Lance Bragstad [mailto:lbrags...@gmail.com]
> *Sent:* Monday, November 21, 2016 1:23 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [keystone] Pike PTL
>
>
>
> Steve, thanks for all the hard work and dedication over the last 3 cycles.
> I hope you have a nice break and I look forward to working with you on Pike!
>
>
>
> Enjoy you're evenings :)
>
>
>
>
>
>
>
> On Mon, Nov 21, 2016 at 1:38 PM, Steve Martinelli 
> wrote:
>
> one of these days i'll learn how to spell :)
>
>
>
> On Mon, Nov 21, 2016 at 12:52 PM, Steve Martinelli 
> wrote:
>
> Keystoners,
>
>
>
> I do not intend to run for the PTL position of the Pike development cycle.
> I'm sending this out early so I can work with folks interested in the role.
> If you intend to run for PTL in Pike and are interested in learning the
> ropes (or just want to hear more about what the role means), then shoot me
> an email.
>
>
>
> It's been an unforgettable ride. Being PTL a is very rewarding experience,
> I encourage anyone interested to put your name forward. I'm not going away
> from OpenStack, I just think three terms as PTL has been enough. It'll be
> nice to have my evenings back :)
>
>
>
> To *all* the keystone contributors (cores and non-cores), thank you for
> all your time and commitment. More importantly thank you for putting up
> with my many questions, pings, pokes and -1s. Each of you are amazing and
> together you make an awesome team. It has been an absolute pleasure to
> serve as PTL, thank you for letting me do so.
>
>
> stevemar
>
>
>
>
>
> 
>
>
>
> Thanks for the idea Lana [1]
>
> [1] http://lists.openstack.org/pipermail/openstack-docs/
> 2016-November/009357.html
>
>
>
>


Re: [openstack-dev] [Openstack-operators] [keystone][tripleo][ansible][puppet][all] changing default token format

2016-11-07 Thread Matt Fischer
How to add yourself to Planet OpenStack:
https://wiki.openstack.org/wiki/AddingYourBlog

As for Superuser, you could reach out to them if you think it's interesting
for users/operators. Generally they'll want to publish it there first, then
you follow up with your blog post a few days later.

On Mon, Nov 7, 2016 at 8:17 AM, Lance Bragstad  wrote:

> That's a good idea. Is there a page detailing the process for contributing
> to the OpenStack Blog? I did some checking but haven't found any resources
> yet. I also asked in #openstack and #openstack-doc.
>
> On Thu, Nov 3, 2016 at 11:04 AM, Rochelle Grober <
> rochelle.gro...@huawei.com> wrote:
>
>> a blog post on the OpenStack site might be good. Superuser? There are
>> folks reading this who can help
>>
>> Sent from HUAWEI AnyOffice
>> *From:*Lance Bragstad
>> *To:*OpenStack Development Mailing List (not for usage questions),
>> openstack-operat...@lists.openstack.org,
>> *Date:*2016-11-03 08:11:20
>> *Subject:*Re: [openstack-dev] [keystone][tripleo][ansible][puppet][all]
>> changing default token format
>>
>> I totally agree with communicating this the best we can. I'm adding the
>> operator list to this thread to increase visibility.
>>
>> If there are any other methods folks think of for getting the word out,
>> outside of what we've already done (release notes, email threads, etc.),
>> please let me know. I'd be happy to drive those communications.
>>
>> On Thu, Nov 3, 2016 at 9:45 AM, Alex Schultz  wrote:
>>
>>> Hey Steve,
>>>
>>> On Thu, Nov 3, 2016 at 8:29 AM, Steve Martinelli 
>>> wrote:
>>> > Thanks Alex and Emilien for the quick answer. This was brought up at
>>> the
>>> > summit by Adam, but I don't think we have to prevent keystone from
>>> changing
>>> > the default. TripleO and Puppet can still specify UUID as their desired
>>> > token format; it is not deprecated or slated for removal. Agreed?
>>> >
>>>
>>> My email was not to tell you to stop. I was just letting you know that
>>> your change does not affect the puppet modules because we define our
>>> default as UUID.  It was just a heads up to others on this thread
>>> that this change should not affect anyone consuming the puppet modules,
>>> because our default is still UUID and will be even after keystone's
>>> default changes.
>>>
>>> Thanks,
>>> -Alex
>>>
>>> > On Thu, Nov 3, 2016 at 10:23 AM, Alex Schultz 
>>> wrote:
>>> >>
>>> >> Hey Steve,
>>> >>
>>> >> On Thu, Nov 3, 2016 at 8:11 AM, Steve Martinelli <
>>> s.martine...@gmail.com>
>>> >> wrote:
>>> >> > As a heads up to some of keystone's consuming projects, we will be
>>> >> > changing
>>> >> > the default token format from UUID to Fernet. Many patches have
>>> merged
>>> >> > to
>>> >> > make this possible [1]. The last 2 that you probably want to look
>>> at are
>>> >> > [2]
>>> >> > and [3]. The first flips a switch in devstack to make fernet the
>>> >> > selected
>>> >> > token format, the second makes it default in Keystone itself.
>>> >> >
>>> >> > [1] https://review.openstack.org/#/q/topic:make-fernet-default
>>> >> > [2] DevStack patch: https://review.openstack.org/#/c/367052/
>>> >> > [3] Keystone patch: https://review.openstack.org/#/c/345688/
>>> >> >
>>> >>
>>> >> Thanks for the heads up. In puppet openstack we had already
>>> >> anticipated this and attempted to do the same for the
>>> >> puppet-keystone[0] module as well.  Unfortunately after merging it, we
>>> >> found that tripleo wasn't yet prepared to handle the HA implementation
>>> >> of fernet tokens so we had to revert it[1].  This shouldn't impact
>>> >> anyone currently consuming puppet-keystone as we define uuid as the
>>> >> default for now. Our goal is to do something similar this cycle but
>>> >> there needs to be some further work in the downstream consumers to
>>> >> either define their expected default (of uuid) or support fernet key
>>> >> generation correctly.
>>> >>
>>> >> Thanks,
>>> >> -Alex
>>> >>
>>> >> [0] https://review.openstack.org/#/c/389322/
>>> >> [1] https://review.openstack.org/#/c/392332/
>>> >>

Re: [openstack-dev] [puppet] Core nominations

2016-09-15 Thread Matt Fischer
+1 to all. Thanks for your work guys!

On Thu, Sep 15, 2016 at 6:59 AM, Emilien Macchi  wrote:

> While our group keeps moving, it's time to propose new people for the
> core team again.
>
> Dmitry Tantsur / puppet-ironic
> Dmitry is the guardian of puppet-ironic. He's been driving most of the
> recent features in this module and he now fully deserves to be core on
> it.
>
> Pradeep Kilambi / puppet-aodh,ceilometer,gnocchi,panko
> Prad is our Telemetry guru and he never stops bringing attention to
> these modules! Keep going Prad, we appreciate your help here.
>
> Iury Gregory / all modules
> Iury is our padawan. Still learning, but learning fast, he has been a
> continuous contributor over the last months. He's always here on IRC
> and during meetings to help.
> He always volunteers to help, and not just with the most fun tasks (he
> drove the authtoken work during Newton). I would like to reward his work
> and show that we trust him to be a good core reviewer.
> Iury, keep going in your efforts!
>
>
> If your name is not here yet, please keep doing consistent work: help
> with bug triage, maintain stable CI, do good reviews, improve our
> documentation, etc.
>
> As usual, Puppet OpenStack core team is free to -1 / +1 the proposal.
>
> Thanks,
> --
> Emilien Macchi
>


Re: [openstack-dev] [puppet] Puppet OpenStack PTL non-candidacy

2016-09-09 Thread Matt Fischer
On Fri, Sep 9, 2016 at 10:05 AM, Emilien Macchi  wrote:

> Hi,
>
> I wrote a little blog post about the last cycle in PuppetOpenStack:
> http://my1.fr/blog/puppet-openstack-achievements-during-newton-cycle/
>
> I can't describe how much I liked being PTL during the last 18 months,
> and when I started contributing to this project I wouldn't have imagined
> we would be where we are today.
> Working on it is something I really enjoy because we have interactions
> with the whole OpenStack community, and I can't live without it.
>
> However, I think it's time to pass the PTL torch for Ocata cycle.
> Don't worry, I'll still be around and bother you when CI is broken ;-)
>
> Again, a big thank you for those who work with me,
> --
> Emilien Macchi
>


Emilien,

It's been a pleasure collaborating with you and learning from you for the
past 18 months. As others have said, the puppet modules are in much better
shape now thanks to your leadership and focus.

Thanks!


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread Matt Fischer
On Thu, Aug 25, 2016 at 1:13 PM, Steve Martinelli 
wrote:

> The keystone team is pursuing a trigger-based approach to support rolling,
> zero-downtime upgrades. The proposed operator experience is documented here:
>
>   http://docs.openstack.org/developer/keystone/upgrading.html
>
> This differs from Nova and Neutron's approaches to solve for rolling
> upgrades (which use oslo.versionedobjects), however Keystone is one of the
> few services that doesn't need to manage communication between multiple
> releases of multiple service components talking over the message bus (which
> is the original use case for oslo.versionedobjects, and for which it is
> aptly suited). Keystone simply scales horizontally and every node talks
> directly to the database.
>
> Database triggers are obviously a new challenge for developers to write,
> honestly challenging to debug (being side effects), and are made even more
> difficult by having to hand write triggers for MySQL, PostgreSQL, and
> SQLite independently (SQLAlchemy offers no assistance in this case), as
> seen in this patch:
>
>   https://review.openstack.org/#/c/355618/
>
> However, implementing an application-layer solution with
> oslo.versionedobjects is not an easy task either; refer to Neutron's
> implementation:
>
>   https://review.openstack.org/#/q/topic:bp/adopt-oslo-
> versioned-objects-for-db
>
> Our primary concern at this point are how to effectively test the triggers
> we write against our supported database systems, and their various
> deployment variations. We might be able to easily drop SQLite support (as
> it's only supported for our own test suite), but should we expect variation
> in support and/or actual behavior of triggers across the MySQLs, MariaDBs,
> Perconas, etc, of the world that would make it necessary to test each of
> them independently? If you have operational experience working with
> triggers at scale: are there landmines that we need to be aware of? What is
> it going to take for us to say we support *zero* downtime upgrades with
> confidence?
>
> Steve & Dolph
>
>

No experience to add for triggers, but I'm happy to help test this on a
MySQL Galera cluster. I'd also like to add thanks for looking into this. A
keystone outage is a cloud outage and being able to eliminate them from
upgrades will be beneficial to everyone.
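
For readers following along, a rough sketch of the kind of hand-written,
per-dialect trigger the thread describes (the 'user' table and columns are
hypothetical; SQLAlchemy's DDL/event hooks only attach the string, the SQL
itself stays manual, which is exactly the pain point above):

from sqlalchemy import DDL, Column, Integer, MetaData, String, Table, event

metadata = MetaData()
user = Table(
    'user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(64)),        # old column, written by old nodes
    Column('name_new', String(64)))    # new column being migrated to

# During a rolling upgrade, nodes on the old release still write 'name';
# the trigger keeps 'name_new' in sync until every node runs the new code.
sync_trigger = DDL("""
CREATE TRIGGER user_sync_name BEFORE INSERT ON user
FOR EACH ROW SET NEW.name_new = NEW.name
""")
# MySQL syntax only; PostgreSQL needs a separate function + trigger pair.
event.listen(user, 'after_create', sync_trigger.execute_if(dialect='mysql'))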


Re: [openstack-dev] [puppet] proposal: start gating on puppet4

2016-08-10 Thread Matt Fischer
+1 from me also. This will help everyone who is trying to transition to it.

On Wed, Aug 10, 2016 at 1:46 AM, Javier Pena  wrote:

>
>
> - Original Message -
> > Hi,
> >
> > Today Puppet OpenStack CI is running unit and functional test jobs
> > against puppet 3 and puppet 4.
> > Unit jobs for puppet 4 are currently voting and pretty stable.
> > Functional jobs for puppet 4 are not voting but also stable.
> >
> > Even though Puppet 4 has not been widely adopted by our community [1]
> > yet, I would like to encourage our users to upgrade their version of
> > Puppet. Fedora ships it by default [2] and for Ubuntu it's also the
> > default since yakkety [3].
> >
> > [1]
> > https://docs.google.com/spreadsheets/d/1iIQ6YmpdOVctS2-
> wCV6SGPP1NSj8nKD9nv_xtZH9loY/edit?usp=sharing
> > [2] http://koji.fedoraproject.org/koji/packageinfo?packageID=3529
> > [3] http://packages.ubuntu.com/yakkety/puppet
> >
> > So here's my proposal, feel free to bring any feedback:
> > - For stable/mitaka CI and stable/liberty nothing will change.
> > - For current master (future stable/newton in a few months), transform
> > non-voting puppet4 jobs into voting and add them to the gate. Also
> > keep puppet3 unit tests jobs, as voting.
> > - After Newton release (during Ocata cycle), change master CI to only
> > gate functional jobs on puppet4 (and remove puppet3 jobs for
> > puppet-openstack-integration); but keep puppet3 unit tests jobs, as
> > voting.
> > - During the Ocata cycle, implement a periodic job that checks nightly
> > that we can deploy with Puppet 3. The periodic job is something the part
> > of our community interested in Puppet 3 will have to monitor, reporting
> > any new failure so we can address it.
> >
> > That way, we tell our users:
> > - don't worry if you deploy Liberty, Mitaka, Newton, we will
> > officially support Puppet 3.
> > - if you plan to deploy Puppet 4, we'll officially support you
> > starting from Newton.
> > - if you plan to deploy Ocata with Puppet 3, we won't support you
> > anymore since our functional testing jobs will be gone. Though we'll
> > do our best to stay backward compatible thanks to our unit and
> > periodic functional testing jobs.
> >
> > Regarding packaging:
> > - on Ubuntu, we'll continue to rely on what Puppetlabs provides because
> > Xenial doesn't ship Puppet 4.
> > - on CentOS7, we are working on getting Puppet 4 packaged in RDO and
> > our CI will certainly use it.
> >
> > Any feedback is welcome,
>
> I like the idea. It gives distros enough time to prepare for Puppet 4, and
> we're supposed to write compatible manifests anyway.
>
> Javier
>
> > --
> > Emilien Macchi
> >


Re: [openstack-dev] [puppet] Propose Sofer Athlan-Guyot (chem) part of Puppet OpenStack core

2016-07-28 Thread Matt Fischer
+1 from me!

On Jul 28, 2016 9:20 AM, "Emilien Macchi"  wrote:

> You might not know who Sofer is, but he's actually "chem" on IRC.
> He's the guy who will find the root cause of insane bugs, in OpenStack
> in general but also in the Puppet OpenStack modules.
> Sofer has been working on the Puppet OpenStack modules for a while now,
> and is already core in puppet-keystone. Many times he has brought his
> expertise to make our modules better.
> He's always here on IRC to help folks and has an excellent understanding
> of how our project works.
>
> If you want stats:
> http://stackalytics.com/?user_id=sofer-athlan-guyot&metric=commits
> I'm quite sure Sofer will do even more reviews over time, but I have
> no doubt he fully deserves to be part of the core reviewer team now, with
> his technical experience and involvement.
>
> As usual, it's an open decision, please vote +1/-1 about this proposal.
>
> Thanks,
> --
> Emilien Macchi
>


Re: [openstack-dev] [Openstack-operators] [puppet] [desginate] An update on the state of puppet-designate (and designate in RDO)

2016-07-05 Thread Matt Fischer
We're using Designate, but still on Juno, and we're running puppet from
around then, summer of 2015. We'll likely try to upgrade to Mitaka at some
point, but Juno Designate "just works" so it's been low priority. Looking
forward to your efforts here.

On Tue, Jul 5, 2016 at 7:47 PM, David Moreau Simard  wrote:

> Hi !
>
> tl;dr
> puppet-designate is undergoing some significant updates to bring it
> up to par right now.
> While I will try to ensure it is well tested and backwards compatible,
> things *could* break. I would like feedback.
>
> I cc'd -operators because I'm interested in knowing if there are any
> users of puppet-designate right now: which distro and release of
> OpenStack?
>
> I'm a RDO maintainer and I took interest in puppet-designate because
> we did not have any proper test coverage for designate in RDO
> packaging until now.
>
> The RDO community mostly relies on collaboration with installation and
> deployment projects such as Puppet OpenStack to test our packaging.
> We can, in turn, provide some level of guarantee that packages built
> out of trunk branches (and eventually stable releases) should work.
> The idea is to make puppet-designate work with RDO, then integrate it
> in the puppet-openstack-integration CI scenarios and we can leverage
> that in RDO CI afterwards.
>
> Both puppet-designate and designate RDO packaging were unfortunately
> in quite a sad state after not being maintained very well and a lot of
> work was required to even get basic tests to pass.
> The good news is that it didn't work with RDO before and now it does,
> for newton.
> Testing coverage has been improved and will be improved even further
> for both RDO and Ubuntu Cloud Archive.
>
> If you'd like to follow the progress of the work, the reviews are
> tagged with the topic "designate-with-rdo" [1].
>
> Let me know if you have any questions !
>
> [1]: https://review.openstack.org/#/q/topic:designate-with-rdo
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>


Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Matt Fischer
For the record we're on 3.5.6-1.
On Jul 5, 2016 11:27 AM, "Mike Lowe"  wrote:

> I was having just this problem last week.  We updated to 3.6.2 from 3.5.4
> on ubuntu and started seeing crashes due to excessive memory usage. I ran
> 'rabbitmq-plugins disable rabbitmq_management' on each node of my rabbit
> cluster and haven't had any problems since.  From what I could gather
> from the rabbitmq mailing lists, the stats collection part of the
> management console is single-threaded and can't keep up, hence the
> ever-growing memory usage from the ever-growing backlog of stats to be
> processed.
>
>
> > On Jul 5, 2016, at 1:02 PM, Joshua Harlow  wrote:
> >
> > Hi ops and dev-folks,
> >
> > We over at godaddy (running rabbitmq with openstack) have been hitting an
> > issue that has been causing the `rabbit_mgmt_db` to consume nearly all of
> > the process's memory (after a given amount of time),
> >
> > We've been thinking that this bug (or bugs?) may have existed for a
> while and our dual-version-path (where we upgrade the control plane and
> then slowly/eventually upgrade the compute nodes to the same version) has
> somehow triggered this memory leaking bug/issue since it has happened most
> prominently on our cloud which was running nova-compute at kilo and the
> other services at liberty (thus using the versioned objects code path more
> frequently due to needing translations of objects).
> >
> > The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with
> kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to
> make the issue go away),
> >
> > # rpm -qa | grep rabbit
> >
> > rabbitmq-server-3.4.0-1.noarch
> >
> > The logs that seem relevant:
> >
> > ```
> > **
> > *** Publishers will be blocked until this alarm clears ***
> > **
> >
> > =INFO REPORT 1-Jul-2016::16:37:46 ===
> > accepting AMQP connection <0.23638.342> (127.0.0.1:51932 ->
> 127.0.0.1:5671)
> >
> > =INFO REPORT 1-Jul-2016::16:37:47 ===
> > vm_memory_high_watermark clear. Memory used:29910180640
> allowed:47126781542
> > ```
> >
> > This happens quite often, the crashes have been affecting our cloud over
> the weekend (which made some dev/ops not so happy especially due to the
> july 4th mini-vacation),
> >
> > Looking to see if anyone else has seen anything similar?
> >
> > For those interested this is the upstream bug/mail that I'm also seeing
> about getting confirmation from the upstream users/devs (which also has
> erlang crash dumps attached/linked),
> >
> > https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
> >
> > Thanks,
> >
> > -Josh
> >


Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Matt Fischer
Yes! This happens often, but I wouldn't call it a crash; the mgmt db just
gets behind and then eats all the memory. We've started monitoring it and
have runbooks on how to bounce just the mgmt db. Here are my notes on that:

Restart the rabbitmq mgmt server (this seems to clear the memory usage):

rabbitmqctl eval 'application:stop(rabbitmq_management).'
rabbitmqctl eval 'application:start(rabbitmq_management).'

run GC on rabbit_mgmt_db:
rabbitmqctl eval
'(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'

status of rabbit_mgmt_db:
rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'

Rabbitmq mgmt DB how much memory is used:
/usr/sbin/rabbitmqctl status | grep mgmt_db

Unfortunately I couldn't confirm that an upgrade would fix it for sure, and
any settings changes to reduce the number of monitored events also require a
restart of the cluster. The other issue with an upgrade for us is the
ancient version of erlang shipped with trusty. When we upgrade to Xenial
we'll upgrade erlang and rabbit and hope it goes away. I'll also probably
tweak the settings on retention of events then too.

Also for the record the GC doesn't seem to help at all.
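
The check we cron looks roughly like this (a sketch, not our production
script; the regex against the status output and the 2 GiB threshold are
assumptions you'd tune for your own cluster):

import re
import subprocess

THRESHOLD = 2 * 1024 ** 3  # bytes; tune for your cluster

# Parse the mgmt_db memory figure out of `rabbitmqctl status` and bounce
# the management app (the same eval commands as above) past the threshold.
status = subprocess.check_output(['rabbitmqctl', 'status']).decode()
match = re.search(r'\{mgmt_db,(\d+)\}', status)
if match and int(match.group(1)) > THRESHOLD:
    subprocess.check_call(['rabbitmqctl', 'eval',
                           'application:stop(rabbitmq_management).'])
    subprocess.check_call(['rabbitmqctl', 'eval',
                           'application:start(rabbitmq_management).'])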
On Jul 5, 2016 11:05 AM, "Joshua Harlow"  wrote:

> Hi ops and dev-folks,
>
> We over at godaddy (running rabbitmq with openstack) have been hitting an
> issue that has been causing the `rabbit_mgmt_db` to consume nearly all of
> the process's memory (after a given amount of time),
>
> We've been thinking that this bug (or bugs?) may have existed for a while
> and our dual-version-path (where we upgrade the control plane and then
> slowly/eventually upgrade the compute nodes to the same version) has
> somehow triggered this memory leaking bug/issue since it has happened most
> prominently on our cloud which was running nova-compute at kilo and the
> other services at liberty (thus using the versioned objects code path more
> frequently due to needing translations of objects).
>
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with
> kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to
> make the issue go away),
>
> # rpm -qa | grep rabbit
>
> rabbitmq-server-3.4.0-1.noarch
>
> The logs that seem relevant:
>
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
>
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932 -> 127.0.0.1:5671
> )
>
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
> ```
>
> This happens quite often, the crashes have been affecting our cloud over
> the weekend (which made some dev/ops not so happy especially due to the
> july 4th mini-vacation),
>
> Looking to see if anyone else has seen anything similar?
>
> For those interested this is the upstream bug/mail that I'm also seeing
> about getting confirmation from the upstream users/devs (which also has
> erlang crash dumps attached/linked),
>
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
>
> Thanks,
>
> -Josh
>


Re: [openstack-dev] [cinder] [keystone] cinder quota behavior differences after Keystone mitaka upgrade

2016-06-28 Thread Matt Fischer
On Tue, Jun 28, 2016 at 12:32 PM, Potter, Nathaniel <
nathaniel.pot...@intel.com> wrote:

> Hi all,
>
>
>
> I did some digging into this on the cinder side, and it gets a little
> complicated. So, before the target and context are passed into the
> _authorize_show method, they’re retrieved through the get_project_hierarchy
> method in cinder.quota_utils [1]. In that method, they will only have their
> parent_id set if the parent_id isn’t the same as their domain_id [2] – if
> those two fields are equal the parent_id field for the returned
> generic_project object will be None. Based on what Henry said it seems like
> those two fields being the same implies that the project is at the top
> level because its parent is the domain itself (I’m guessing that should be
> true of the admin project?).
>
>
>
> So in your example you have the admin project whose domain_id is default
> and whose parent_id is also default, meaning that the parent_id passed into
> _authorize_show is going to be None. If the target project whose quota you
> want to show is a ‘brother’ project to it and has a parent of default in
> the default domain, it should also have no parent set. Do you happen to
> know which of the three exceptions in _authorize_show  you’re hitting?
>
>
>
> If the admin context project is the one you pasted, it definitely won't
> have a set parent because its parent and domain are the same. That would
> rule out the exceptions on line 130 and 134 for your issue, because they
> both rely on the context project having a set parent_id [3]. That would
> just leave the case where the target project for the quota you want to
> show does have a non-domain parent and isn't a part of the subtree for
> the admin context you're making the call with.
>
>
>
> Sorry for a bit of a braindump here, I was just trying to look at all of
> the possibilities to see if any of them could be of help :). I think it
> would definitely be useful to know how exactly it's failing for you so
> we can make sure it works the way it should, because I believe the intent
> is definitely to have admins be able to view and set all user quotas.
>
>
>
> Thanks,
>
> Nate
>
>
>
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L170-L175
>
> [2]
> https://github.com/openstack/cinder/blob/master/cinder/quota_utils.py#L110-L112
>
> [3]
> https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L125-L134
>

We're hitting the first exception:

https://github.com/openstack/cinder/blob/stable/liberty/cinder/api/contrib/quotas.py#L178-L180

In our environment currently everything should have the default domain as
the parent except for some heat stuff.
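
To make the failure mode concrete, here is a condensed paraphrase of the
check (not cinder's actual code; the class and values are simplified):

class Project(object):  # minimal stand-in for keystone's project
    def __init__(self, id, parent_id=None, subtree=None):
        self.id, self.parent_id, self.subtree = id, parent_id, subtree

def authorize_show(context_project, target_project):
    # Under Liberty parent_id was None for everyone, so this block was
    # skipped; under Mitaka parent_id is the 'default' domain, so the
    # hierarchy check runs and sibling projects fail it.
    if context_project.parent_id is not None:
        if target_project.id not in (context_project.subtree or []):
            raise Exception('HTTPForbidden: project not in your hierarchy')

admin_liberty = Project('admin', parent_id=None)
admin_mitaka = Project('admin', parent_id='default', subtree=[])
target = Project('userproj', parent_id='default')

authorize_show(admin_liberty, target)      # passes: check skipped entirely
try:
    authorize_show(admin_mitaka, target)   # raises: siblings, not ancestors
except Exception as exc:
    print(exc)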


Re: [openstack-dev] [cinder] [keystone] cinder quota behavior differences after Keystone mitaka upgrade

2016-06-28 Thread Matt Fischer
Thanks Henry,

From a Keystone POV I think it makes sense, but it's causing some
operational headaches, so I'm curious what the cinder team thinks about
this. Not being able to see or set someone's quota as an admin makes
dealing with support requests frustrating.


On Tue, Jun 28, 2016 at 12:38 AM, Henry Nash  wrote:

> Hi Matt,
>
> So the keystone changes were intentional. From Mitaka onwards, a domain is
> represented as a project which is “acting as a domain” (it has an attribute
> “is_domain” set to true). The situation you describe, where what were top
> level projects now have the project acting as the default domain as their
> parent, is what I would expect to happen after the update.
>
> During Mitaka development, we worked with the cinder folks - who were
> re-designing their quota code anyway - and the quota code was modified to
> take account of the project structure. I'm not sure if the quota semantics
> you see are what was intended (I'll let the cinder team comment). Code can,
> if desired, distinguish projects at the top level from projects somewhere
> further down the hierarchy, by looking at the parent and seeing whether it
> is a project acting as a domain.
>
> Henry
> keystone core
>
> On 27 Jun 2016, at 17:13, Matt Fischer  wrote:
>
> We upgraded our dev environment last week to Keystone stable/mitaka. Since
> then we're unable to show or set quotas on projects of which the admin is
> not a member. Looking at the cinder code, it seems that cinder is pulling a
> project list and attempting to determine a hierarchy.  On Liberty Keystone,
> projects seem to lack parents:
>
> <Project ... id=9e839870dd0d4a2f96f9d71b7e7c5a4e, is_domain=False, links={u'self': u'
> https://liberty-endpoint:5000/v3/projects/9e839870dd0d4a2f96f9d71b7e7c5a4e'},
> name=admin, parent_id=None, subtree=None>
>
> In Mitaka, it seems that projects are children of the default domain:
>
> <Project ... id=4764ba822ecb43e582794b875751924c, is_domain=False, links={u'self': u'
> http://mitaka-endpoint:5000/v3/projects/4764ba822ecb43e582794b875751924c'},
> name=admin, parent_id=default, subtree=None>
>
> In Liberty since all projects were parentless, the authorize_* code blocks
> were skipped since both conditionals were false:
>
>
> https://github.com/openstack/cinder/blob/stable/liberty/cinder/api/contrib/quotas.py#L174-L191
>
> But now in Mitaka, the code is run, and it fails out since the projects
> are "brothers", both with the parent of the default domain, but not
> hierarchically related.
>
> Previously it was useful for us, as admins, to be able to set and view
> quotas for cinder projects. Now we need to scope into the user's project
> to even be able to view their quotas, much less change them. This seems
> broken, but I'm not sure where the issue is or why the keystone behavior
> changed. Is this the expected behavior?
>


[openstack-dev] [cinder] [keystone] cinder quota behavior differences after Keystone mitaka upgrade

2016-06-27 Thread Matt Fischer
We upgraded our dev environment last week to Keystone stable/mitaka. Since
then we're unable to show or set quotas on projects of which the admin is
not a member. Looking at the cinder code, it seems that cinder is pulling a
project list and attempting to determine a hierarchy.  On Liberty Keystone,
projects seem to lack parents:

<Project id=9e839870dd0d4a2f96f9d71b7e7c5a4e, is_domain=False, links={u'self': u'
https://liberty-endpoint:5000/v3/projects/9e839870dd0d4a2f96f9d71b7e7c5a4e'},
name=admin, parent_id=None, subtree=None>

In Mitaka, it seems that projects are children of the default domain:

<Project id=4764ba822ecb43e582794b875751924c, is_domain=False, links={u'self': u'
http://mitaka-endpoint:5000/v3/projects/4764ba822ecb43e582794b875751924c'},
name=admin, parent_id=default, subtree=None>

In Liberty since all projects were parentless, the authorize_* code blocks
were skipped since both conditionals were false:

https://github.com/openstack/cinder/blob/stable/liberty/cinder/api/contrib/quotas.py#L174-L191

But now in Mitaka, the code is run, and it fails out since the projects are
"brothers", both with the parent of the default domain, but not
hierarchically related.
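
The shape of the failing check is roughly this (a paraphrased sketch on my
part, not the verbatim cinder code behind the link above):

    # Paraphrased sketch, not verbatim cinder code. Once every project has
    # the domain as its parent_id, an admin scoped to project A is not the
    # parent of sibling project B, so the check rejects the update.
    def authorize_update(context_project_id, target_project):
        if target_project.parent_id is not None:
            if context_project_id != target_project.parent_id:
                raise PermissionError(
                    'only the direct parent may update this quota')
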

Previously it was a useful ability for us to be able to (as admins) set and
view quotas for cinder projects. Now we need to scope into the user's
project to even be able to view their quotas, much less change them. This
seems broken, but I'm not sure where the issue is or why the keystone
behavior changed. Is this the expected behavior?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] vision on new modules

2016-06-13 Thread Matt Fischer
On Wed, Jun 8, 2016 at 2:42 PM, Emilien Macchi  wrote:

> Hi folks,
>
> Over the last months we've been creating more and more modules [1] [2]
> and I would like to take the opportunity to continue some discussion
> we had during the last Summits about the quality of our modules.
>
> [1] octavia, vitrage, ec2api, tacker, watcher, congress, magnum,
> mistral, zaqar, etc.
> [2] by the end of Newton, we'll have ~ 33 Puppet modules !
>
> Announce your work
> As a reminder, we have defined a process for adding new modules:
> http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
> This process is really helpful for scaling our project and easily adding modules.
> If you're about to start a new module, I suggest you start with this
> process and avoid starting it on your personal GitHub, because you'll
> lose the valuable community review on your work.
>
> Iterate
> I've noticed some folks pushing 3000 LOC in Gerrit when adding the
> bits for new Puppet modules (after the first cookiecutter init).
> That's IMHO bad, because it makes reviews harder and slower, and exposes
> the risk of missing something during the review process. Please write
> modules bit by bit.
> Example: start with init.pp for common bits, then api.pp, etc.
> For each bit, add its unit tests & functional tests (beaker). It will
> allow us to write modules with good design, good tests and good code
> in general.
>
> Write tests
> A good Puppet module is one that we can use to successfully deploy an
> OpenStack service. For that, please add beaker tests when you're
> initiating a module. Not at the end of your work, but for every new
> class or feature.
> It helps to easily detect issues that we'll hit when running the Puppet
> catalog and to fix them quickly. It also helps the community report feedback
> on packaging and Tempest, or detect issues in our libraries.
> If you're not familiar with beaker, you'll see in existing modules
> that there is nothing complicated; we basically write a manifest that
> will deploy the service.
>
>
> If you're new in this process, please join our IRC channel on freenode
> #puppet-openstack and don't hesitate to poke us.
>
> Any feedback / comment is highly welcome,
> Thanks,
> --
> Emilien Macchi
>
>
I like the ideas, especially about 3000 line commits. I started with your
tips and added them to the docs:

 https://review.openstack.org/329253 Document Emilien's tips for new
modules
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Matt Fischer
On Fri, Jun 3, 2016 at 1:35 PM, Lance Bragstad  wrote:

> Hey all,
>
> I have been curious about the impact of providing performance feedback as part
> of the review process. From what I understand, keystone used to have a
> performance job that would run against proposed patches (I've only heard
> about it so someone else will have to keep me honest about its timeframe),
> but it sounds like it wasn't valued.
>
> I think revisiting this topic is valuable, but it raises a series of
> questions.
>
> Initially it probably only makes sense to test a reasonable set of
> defaults. What do we want these defaults to be? Should they be determined
> by DevStack, openstack-ansible, or something else?
>
> What does the performance test criteria look like and where does it live?
> Does it just consist of running tempest?
>

Keystone especially has some calls that are used 1000x or more relative to
others, and so I'd be more concerned about those. For me this is token
validation #1 and token creation #2. Tempest checks them of course, but it
might be too coarse. There are token benchmarks like the ones Dolph and I
use, but they don't mimic a real workflow. Something to consider.
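
By token benchmarks I mean something of this shape (the endpoint and
credentials here are placeholders), timing the two hot paths directly rather
than a whole tempest run:

    import time
    import requests

    KEYSTONE = 'http://keystone.example.com:5000'
    AUTH = {'auth': {'identity': {'methods': ['password'], 'password': {
        'user': {'name': 'demo', 'domain': {'id': 'default'},
                 'password': 'secret'}}}}}

    start = time.time()
    resp = requests.post(KEYSTONE + '/v3/auth/tokens', json=AUTH)
    token = resp.headers['X-Subject-Token']
    print('create:   %.3fs' % (time.time() - start))

    start = time.time()
    requests.get(KEYSTONE + '/v3/auth/tokens',
                 headers={'X-Auth-Token': token, 'X-Subject-Token': token})
    print('validate: %.3fs' % (time.time() - start))
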



>
> From a contributor and reviewer perspective, it would be nice to have the
> ability to compare performance results across patch sets. I understand that
> keeping all performance results for every patch for an extended period of
> time is unrealistic. Maybe we take a daily performance snapshot against
> master and use that to map performance patterns over time?
>

Having some time series data captured would be super useful. Could we have
daily charts stored indefinitely?



>
> Have any other projects implemented a similar workflow?
>
> I'm open to suggestions and discussions because I can't imagine there
> aren't other folks out there interested in this type of pre-merge data
> point.
>
> Thanks!
>
> Lance
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] proposal about puppet versions testing coverage

2016-05-25 Thread Matt Fischer
On Wed, May 25, 2016 at 1:09 PM, Emilien Macchi  wrote:

> Greetings folks,
>
> In a recent poll [1], we asked our community to tell us which version
> of Puppet they were running.
> The motivation is to make sure our Puppet OpenStack CI tests the right
> things, the ones that are really useful.
>
> Right now, we run unit test jobs on puppet 3.3, 3.4, 3.6, 3.8, 4.0
> and latest (current is 4.5).
> We also have functional jobs (non-voting, in periodic pipeline), that
> run puppet 4.5. Those break very often because nobody (except
> me?) regularly checks puppet4 periodic jobs.
>
> So here's my proposal, feel free to comment:
>
> * Reduce puppet versions testing to 3.6, 3.8, 4.5 and latest (keep the
> last one non-voting). It seems that 3.6 and 3.8 are widely used by our
> consumers (default in centos7 & ubuntu LTS), and 4.5 is the latest
> release in the 4.x series.
>


+1



> * Move functional puppet4 jobs from experimental to check pipeline
> (non-voting). They'll bring very useful feedback. It will add 6 more
> jobs in the check queue, but since we will drop 2 unit tests jobs (in
> both check & gate pipelines), it will add 2 jobs at total (in term of
> time, unit tests jobs take 15 min and functional jobs take ~30 min) so
> the impact of node consumption is IMHO not relevant here.
>


What's the plan for making Puppet4 jobs voting? This is a good start, but I
think we should move towards voting jobs.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Ivan Berezovskiy for puppet-openstack-core

2016-05-19 Thread Matt Fischer
+1 from me!

On Thu, May 19, 2016 at 8:17 AM, Emilien Macchi  wrote:

> Hi,
>
> I don't need to introduce Ivan Berezovskiy (iberezovskiy on IRC), he's
> been doing tremendous work in Puppet OpenStack over the last months,
> on a regular basis.
>
> Some highlights about his contributions:
> * Fantastic work on puppet-oslo! I really mean it... Thanks to you and
> others, we now have consistency for Oslo parameters in our modules.
> * Excellent quality of code in general and in reviews.
> * Full understanding of our process (code style, release notes, CI, doc,
> etc).
> * Very often, he helps with CI things (Fuel or Puppet OpenStack CI).
> * Constant presence in IRC meetings and in our channel, where he never
> hesitates to give support.
>
> I would like to propose him as part of our Puppet OpenStack core team; as
> usual please -1/+1.
>
> Thanks Ivan for your hard work, keep going!
> --
> Emilien Macchi
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Matt Fischer
>
>
> If config sample files are being used as a living document then that would
> be a reason to leave the deprecated options in there. In my experience as a
> cloud deployer I never once used them in that manner so it didn't occur to
> me that people might, hence my question to the list.
>
> This may also indicate that people aren't checking release notes as we
> hope they are. A release note is where I would expect to find this
> information aggregated with all the other changes I should be aware of.
> That seems easier to me than aggregating that data myself by checking
> various sources.
>


One way to think about this is that the config file has to be accurate or
the code won't work, but release notes can miss things with no consequences
other than perhaps an annoyed operator. So config files are the source of
truth about the state of options in a release or branch.


>
> Anyways, I have no strong cause for removing the deprecated options. I
> just wondered if it was a low hanging fruit and thought I would ask.
>

It's always good to have these kinds of conversations; thanks for starting
this one.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Matt Fischer
On Tue, May 17, 2016 at 12:47 PM, Andrew Laski  wrote:

>
>
>
> On Tue, May 17, 2016, at 02:36 PM, Matt Fischer wrote:
>
> On Tue, May 17, 2016 at 12:25 PM, Andrew Laski  wrote:
>
> I was in a discussion earlier about discouraging deployers from using
> deprecated options and the question came up about why we put deprecated
> options into the sample files generated in the various projects. So, why
> do we do that?
>
> I view the sample file as a reference to be used when setting up a
> service for the first time, or when looking to configure something for
> the first time. In neither of those cases do I see a benefit to
> advertising options that are marked deprecated.
>
> Is there some other case I'm not considering here? And how does everyone
> feel about modifying the sample file generation to exclude options which
> are marked with "deprecated_for_removal"?
>
>
>
>
> Can you clarify what you mean by having them? The way they are now is
> great for deployers, I think, and for people (like me) who work on things
> like puppet and need to update options sometimes. For example, I like the
> current way; an example from keystone:
>
> # Deprecated group/name - [DEFAULT]/log_config
> #log_config_append = <None>
>
> Are you proposing removing that top line?
>
>
> That is a different type of deprecation which I didn't do a great job of
> distinguishing.
>
> There is deprecation of where a config option is defined, as in your
> example. I am not proposing to touch that at all. That simply indicates
> that a config option used to be in a different group or used to be named
> something else. That's very useful.
>
> There is deprecation of a config option in the sense that it is going away
> completely. An example would be:
>
> # DEPRECATED: OpenStack metadata service manager (string value)
> # This option is deprecated for removal.
> # Its value may be silently ignored in the future.
> #metadata_manager = nova.api.manager.MetadataManager
>
> I'm wondering if anyone sees a benefit to including that in the sample
> file when it is clearly not meant for use.
>
>
I believe it still has value, and the use case is similar. That conveys
information that a feature, which I might be using, is going away. The
release notes and log files provide similar info, and if I saw this I'd
probably head there next.

If this is confusing, what if the warning were stronger instead? For
example: "this feature will not work in release X or later". Also, what's
the confusion issue - is it just the sheer number of config options to dig
through as a new operator?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Matt Fischer
On Tue, May 17, 2016 at 12:25 PM, Andrew Laski  wrote:

> I was in a discussion earlier about discouraging deployers from using
> deprecated options and the question came up about why we put deprecated
> options into the sample files generated in the various projects. So, why
> do we do that?
>
> I view the sample file as a reference to be used when setting up a
> service for the first time, or when looking to configure something for
> the first time. In neither of those cases do I see a benefit to
> advertising options that are marked deprecated.
>
> Is there some other case I'm not considering here? And how does everyone
> feel about modifying the sample file generation to exclude options which
> are marked with "deprecated_for_removal"?
>
>
>

Can you clarify what you mean by having them? The way they are now is great
for deployers, I think, and for people (like me) who work on things like
puppet and need to update options sometimes. For example, I like the current
way; an example from keystone:

# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

Are you proposing removing that top line?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Matt Fischer
On May 11, 2016 10:03 PM, "Flavio Percoco"  wrote:
>
> Greetings,
>
> The Glance team is evaluating the needs and usefulness of the Glance
> Registry service and this email is a request for feedback from the overall
> community before the team moves forward with anything.
>
> Historically, there have been reasons to create this service. Some
> deployments use it to hide database credentials from Glance public
> endpoints, others use it for scaling purposes and others because v1
> depends on it. This is a good time for the team to re-evaluate the need
> of these services since v2 doesn't depend on it.
>
> So, here's the big question:
>
> Why do you think this service should be kept around?

I've not seen any responses so far so wanted to just say we have no use
case for it. I assume this also explains the silence from the rest of the
ops. +1 to remove.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from puppet core

2016-05-10 Thread Matt Fischer
On Tue, May 10, 2016 at 9:11 AM, Clayton O'Neill  wrote:

> I’d like to step down as a core reviewer for the OpenStack Puppet
> modules.  For the last cycle I’ve had very little time to spend
> reviewing patches, and I don’t expect that to change in the next
> cycle.  In addition, it used to be that I was contributing regularly
> because we were early upgraders and the modules always needed some
> work early in the cycle.  Under Emilien’s leadership this situation
> has changed significantly and I find that the puppet modules generally
> “just work” for us in most cases.
>
> I intend to still contribute when I can and I'd like to thank
> everyone for the hard work for the last two cycles.  The OpenStack
> Puppet modules are really in great shape these days.
>


Thanks Clayton!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-02 Thread Matt Fischer
On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:

> Hello! I enjoyed very much listening in on the default token provider
> work session last week in Austin, so thanks everyone for participating
> in that. I did not speak up then, because I wasn't really sure of this
> idea that has been bouncing around in my head, but now I think it's the
> case and we should consider this.
>
> Right now, Keystones without fernet keys are issuing UUID tokens. These
> tokens will be in the database, and valid, for however long the token
> TTL is.
>
> The moment that one changes the configuration, keystone will start
> rejecting these tokens. This will cause disruption, and I don't think
> that is fair to the users who will likely be shown new bugs in their
> code at a very unexpected moment.
>

This will reduce the interruption and will also, as you said, possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned a certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to work
one day and find out that they have a new token provider because of a new
default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?



>
> I wonder if one could merge UUID and Fernet into a provider which
> handles this transition gracefully:
>
> if self._fernet_keys:
>   return self._issue_fernet_token()
> else:
>   return self._issue_uuid_token()
>
> And in the validation, do the same, but also with an eye toward keeping
> the UUID tokens alive:
>
> if self._fernet_keys:
>   try:
> self._validate_fernet_token()
>   except InvalidFernetFormatting:
> self._validate_uuid_token()
> else:
>   self._validate_uuid_token()
>
> So that while one is rolling out new keystone nodes and syncing fernet
> keys, all tokens issued would validated properly, with minimal extra
> cost to support both (basically just a number of UUID tokens will need
> to be parsed twice, once as Fernet, and once as UUID).
>
> Thoughts? I think doing this would make changing the default fairly
> uncontroversial.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Matt Fischer
On Mon, Apr 18, 2016 at 12:52 PM, Morgan Fainberg  wrote:

>
>
> On Mon, Apr 18, 2016 at 7:29 AM, Brant Knudson  wrote:
>
>>
>>
>> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>>
>>> We all want Fernet to be a reality.  We ain't there yet (Except for
>>> mfish who has no patience) but we are getting closer.  The goal is to get
>>> Fernet as the default token provider as soon as possible. The review to do
>>> this has uncovered a few details that need to be fixed before we can do
>>> this.
>>>
>>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>>> Python 3.  The tests are kind of racy due to the revocation event's 1 second
>>> granularity.  Some of the tests here have a sleep(1) in them still, but
>>> all should be using the time control aspect of the unit test fixtures.
>>>
>>> Some of the tests also use the same user to validate a token as the one
>>> that has, for example, a role unassigned.  These expose a problem that the
>>> revocation events are catching too many tokens, some of which should not be
>>> treated as revoked.
>>>
>>> Also, some of the logic for revocation checking has to change. Before,
>>> if a user had two roles, and had one removed, the token would be revoked.
>>> Now, however, the token will validate successfully, but the response will
>>> only have the single assigned role in it.
>>>
>>>
>>> Python 3 tests are failing because the Fernet formatter is insisting
>>> that all project-ids be valid UUIDs, but some of the old tests have "FOO"
>>> and "BAR" as ids.  These either need to be converted to UUIDS, or the
>>> formatter needs to be more forgiving.
>>>
>>> Caching of token validations was messing with revocation checking.
>>> Tokens that were valid once were being reported as always valid. Thus, the
>>> current review  removes all caching on token validations, a change we
>>> cannot maintain.  Once all the test are successfully passing, we will
>>> re-introduce the cache, and be far more aggressive about cache invalidation.
>>>
>>> Tempest tests are currently failing due to Devstack not properly
>>> identifying Fernet as the default token provider, and creating the Fernet
>>> key repository.  I'm tempted to just force devstack to always create the
>>> directory, as a user would need it if they ever switched the token provider
>>> post launch anyway.
>>>
>>>
>> There's a review to change devstack to default to fernet:
>> https://review.openstack.org/#/c/195780/ . This was mostly to show that
>> tempest still passes with fernet configured. It uncovered a couple of test
>> issues (similar in nature to the revocation checking issues mentioned in
>> the original note) that have since been fixed.
>>
>> We'd prefer to not have devstack overriding config options and instead
>> use keystone's defaults. The problem is if fernet is the default in
>> keystone then it won't work out of the box since the key database won't
>> exist. One option that I think we should investigate is to have keystone
>> create the key database on startup if it doesn't exist.
>>
>>
> I am unsure if this is the right path, unless we consider possibly moving
> the key-DB for fernet into the SQL backend (possible?) notably so we can
> control a cluster of keystones.
>

-1 this idea. Now we've got to figure out how to lock the DB and where
those keys/passwords will go. The current model is apparently painful for
devstack, but it works fine for anyone who has an automated deployment
system, or something simple like rsync/scp can be used. Fernet keys
are simpler to update/replicate than PKI certs and we didn't store those in
the DB to my knowledge. Let's please not optimize this solution for
devstack.
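
As a minimal sketch of what I mean by the rsync approach (node names and
the ssh user are assumptions):

    import subprocess

    KEY_REPO = '/etc/keystone/fernet-keys/'
    OTHER_NODES = ['keystone2.example.com', 'keystone3.example.com']

    # push the key repository from the node where rotation happens
    # to the rest of the cluster
    for node in OTHER_NODES:
        subprocess.check_call(['rsync', '-a', '--delete', KEY_REPO,
                               'keystone@%s:%s' % (node, KEY_REPO)])
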




>
> If we aren't making the data shared by default, I would rather have
> devstack override the keystone default as UUID still seems like the sanest
> default due to other config overhead (with filesystem-based fernet keys).
>
> --Morgan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from puppet-openstack-core

2016-04-18 Thread Matt Fischer
On Mon, Apr 18, 2016 at 9:37 AM, Sebastien Badia  wrote:

> Hello here,
>
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
>
> I lack dedicated time to contribute to the project in my spare time, and I
> no longer work on OpenStack deployments.
>
> In the past months, I stopped reviewing and submitting changes on our
> project,
> that's why I slid down gradually into the abyss of the group's stats² :-)
> The community CoC¹ suggests I step down considerately.
>
> I've never been very talkative, but retrospectively it was a great
> adventure, I
> learned a lot at your side. I'm very proud to see where the project is now.
>
> So Long, and Thanks for All the Fish
> I wish you the best ♥
>
> Seb
>
> ¹http://www.openstack.org/legal/community-code-of-conduct/
> ²http://stackalytics.com/report/contribution/puppetopenstack-group/90
> --
> Sebastien Badia
>
>
Thanks Sebastien for all your work!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Matt Fischer
Thanks Brant,

I was missing that distinction.

On Mon, Apr 18, 2016 at 9:43 AM, Brant Knudson  wrote:

>
>
> On Mon, Apr 18, 2016 at 10:20 AM, Matt Fischer 
> wrote:
>
>> On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson  wrote:
>>
>>>
>>>
>>> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>>>
>>>> We all want Fernet to be a reality.  We ain't there yet (Except for
>>>> mfish who has no patience) but we are getting closer.  The goal is to get
>>>> Fernet as the default token provider as soon as possible. The review to do
>>>> this has uncovered a few details that need to be fixed before we can do
>>>> this.
>>>>
>>>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>>>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>>>> Python 3.  The tests are kind of racy due to the revocation event's 1 second
>>>> granularity.  Some of the tests here have a sleep(1) in them still, but
>>>> all should be using the time control aspect of the unit test fixtures.
>>>>
>>>> Some of the tests also use the same user to validate a token as the one
>>>> that has, for example, a role unassigned.  These expose a problem that the
>>>> revocation events are catching too many tokens, some of which should not be
>>>> treated as revoked.
>>>>
>>>> Also, some of the logic for revocation checking has to change. Before,
>>>> if a user had two roles, and had one removed, the token would be revoked.
>>>> Now, however, the token will validate successfully, but the response will
>>>> only have the single assigned role in it.
>>>>
>>>>
>>>> Python 3 tests are failing because the Fernet formatter is insisting
>>>> that all project-ids be valid UUIDs, but some of the old tests have "FOO"
>>>> and "BAR" as ids.  These either need to be converted to UUIDS, or the
>>>> formatter needs to be more forgiving.
>>>>
>>>> Caching of token validations was messing with revocation checking.
>>>> Tokens that were valid once were being reported as always valid. Thus, the
>>>> current review removes all caching on token validations, a change we
>>>> cannot maintain.  Once all the tests are successfully passing, we will
>>>> re-introduce the cache, and be far more aggressive about cache 
>>>> invalidation.
>>>>
>>>> Tempest tests are currently failing due to Devstack not properly
>>>> identifying Fernet as the default token provider, and creating the Fernet
>>>> key repository.  I'm tempted to just force devstack to always create the
>>>> directory, as a user would need it if they ever switched the token provider
>>>> post launch anyway.
>>>>
>>>>
>>> There's a review to change devstack to default to fernet:
>>> https://review.openstack.org/#/c/195780/ . This was mostly to show that
>>> tempest still passes with fernet configured. It uncovered a couple of test
>>> issues (similar in nature to the revocation checking issues mentioned in
>>> the original note) that have since been fixed.
>>>
>>> We'd prefer to not have devstack overriding config options and instead
>>> use keystone's defaults. The problem is if fernet is the default in
>>> keystone then it won't work out of the box since the key database won't
>>> exist. One option that I think we should investigate is to have keystone
>>> create the key database on startup if it doesn't exist.
>>>
>>> - Brant
>>>
>>>
>>
>> I'm not a devstack user, but as I mentioned before, I assume devstack
>> called keystone-manage db_sync? Why couldn't it also call keystone-manage
>> fernet_setup?
>>
>>
> When you tell devstack that it's using fernet then it does keystone-manage
> fernet_setup. When you tell devstack to use the default, it doesn't
> fernet_setup because for now it thinks the default is UUID and doesn't
> require keys. One way to have devstack work when fernet is the default is
> to have devstack always do keystone-manage fernet_setup.
>
> Really what we want to do is have devstack work like other deployment
> methods. We can reasonably expect featureful deployers like puppet to
> keystone-manage fernet_setup in the course of setting up keystone. There's
> more basic deployers like RPMs or debs that in the past have said they like
> the defaults to "just work" (like UUID tokens) and not require extra
> commands.
>
> - Brant
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Matt Fischer
On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson  wrote:

>
>
> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>
>> We all want Fernet to be a reality.  We ain't there yet (Except for mfish
>> who has no patience) but we are getting closer.  The goal is to get Fernet
>> as the default token provider as soon as possible. The review to do this
>> has uncovered a few details that need to be fixed before we can do this.
>>
>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>> Python 3.  The tests are kind of racy due to the revocation event's 1 second
>> granularity.  Some of the tests here have a sleep(1) in them still, but
>> all should be using the time control aspect of the unit test fixtures.
>>
>> Some of the tests also use the same user to validate a token as the one
>> that has, for example, a role unassigned.  These expose a problem that the
>> revocation events are catching too many tokens, some of which should not be
>> treated as revoked.
>>
>> Also, some of the logic for revocation checking has to change. Before, if
>> a user had two roles, and had one removed, the token would be revoked.
>> Now, however, the token will validate successfully, but the response will
>> only have the single assigned role in it.
>>
>>
>> Python 3 tests are failing because the Fernet formatter is insisting that
>> all project-ids be valid UUIDs, but some of the old tests have "FOO" and
>> "BAR" as ids.  These either need to be converted to UUIDS, or the formatter
>> needs to be more forgiving.
>>
>> Caching of token validations was messing with revocation checking. Tokens
>> that were valid once were being reported as always valid. Thus, the current
>> review removes all caching on token validations, a change we cannot
>> maintain.  Once all the tests are successfully passing, we will re-introduce
>> the cache, and be far more aggressive about cache invalidation.
>>
>> Tempest tests are currently failing due to Devstack not properly
>> identifying Fernet as the default token provider, and creating the Fernet
>> key repository.  I'm tempted to just force devstack to always create the
>> directory, as a user would need it if they ever switched the token provider
>> post launch anyway.
>>
>>
> There's a review to change devstack to default to fernet:
> https://review.openstack.org/#/c/195780/ . This was mostly to show that
> tempest still passes with fernet configured. It uncovered a couple of test
> issues (similar in nature to the revocation checking issues mentioned in
> the original note) that have since been fixed.
>
> We'd prefer to not have devstack overriding config options and instead use
> keystone's defaults. The problem is if fernet is the default in keystone
> then it won't work out of the box since the key database won't exist. One
> option that I think we should investigate is to have keystone create the
> key database on startup if it doesn't exist.
>
> - Brant
>
>

I'm not a devstack user, but as I mentioned before, I assume devstack
called keystone-manage db_sync? Why couldn't it also call keystone-manage
fernet_setup?
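
i.e., something along these lines during setup (the exact flags devstack
would want are an assumption on my part):

    keystone-manage db_sync
    keystone-manage fernet_setup --keystone-user keystone \
        --keystone-group keystone
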
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-15 Thread Matt Fischer
On Fri, Apr 15, 2016 at 8:04 PM, Adam Young  wrote:

> We all want Fernet to be a reality.  We ain't there yet (Except for mfish
> who has no patience) but we are getting closer.  The goal is to get Fernet
> as the default token provider as soon as possible. The review to do this
> has uncovered a few details that need to be fixed before we can do this.
>

I exist to beta test for you!


>
> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
> https://review.openstack.org/#/c/278693/ Patch is still failing on Python
> 3.  The tests are kind of racy due to the revocation event's 1 second
> granularity.  Some of the tests here have a sleep(1) in them still, but
> all should be using the time control aspect of the unit test fixtures.
>
> Some of the tests also use the same user to validate a token as the one that
> has, for example, a role unassigned.  These expose a problem that the revocation
> events are catching too many tokens, some of which should not be treated as
> revoked.
>
> Also, some of the logic for revocation checking has to change. Before, if
> a user had two roles, and had one removed, the token would be revoked.
> Now, however, the token will validate successfully, but the response will
> only have the single assigned role in it.
>
>
> Python 3 tests are failing because the Fernet formatter is insisting that
> all project-ids be valid UUIDs, but some of the old tests have "FOO" and
> "BAR" as ids.  These either need to be converted to UUIDS, or the formatter
> needs to be more forgiving.
>
> Caching of token validations was messing with revocation checking. Tokens
> that were valid once were being reported as always valid. Thus, the current
> review removes all caching on token validations, a change we cannot
> maintain.  Once all the tests are successfully passing, we will re-introduce
> the cache, and be far more aggressive about cache invalidation.
>

Which review removes this, the one above?


>
> Tempest tests are currently failing due to Devstack not properly
> identifying Fernet as the default token provider, and creating the Fernet
> key repository.  I'm tempted to just force devstack to always create the
> directory, as a user would need it if they ever switched the token provider
> post launch anyway.
>

Wouldn't devstack handle this the same way it handles keystone-manage
bootstrap or db_sync? Devstack could just call fernet_setup if that's the
configured provider?

Also I know that there is a session on this at the summit too, but for me,
I want Fernet validation to be faster. I know work is underway for this in
Newton, but I'd consider adding it to the to-do pile.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Matt Fischer
On Thu, Apr 14, 2016 at 7:45 AM, Grasza, Grzegorz  wrote:

> > From: Gyorgy Szombathelyi
> >
> > Unknown column 'user.name' in 'field list'
> >
> > in some operations when the DB is already upgraded to Mitaka, but some
> > keystone instances in a HA setup are still Liberty.
>
> Currently we don't support rolling upgrades in keystone. To do an upgrade,
> you need to upgrade all keystone service instances at once, instead of
> going one-by-one, which means you have to plan for downtime of the keystone
> API.
>
>

Doing them all at once is dangerous if there's an issue during the DB
migration or between the other services and the new code. Better to
shut down all but one node, and stop mysql as well on the other nodes. Then
upgrade one, run tests, then do the others serially. That way if the first
node has issues, you can quarantine it, restore mysql on the other nodes,
and then destroy and rebuild the first node back on old code. We've had
enough issues with db migrations before (not keystone that I recall
however) that you'd be nuts to trust that it's just going to work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Matt Fischer
Unfortunately Keystone does not handle database upgrades like nova, and
they do tend to be disruptive. I have not tried Liberty to Mitaka myself,
but have you tried to validate a token granted on a Mitaka node against the
Liberty one? If you are lucky the other nodes will still be able to
validate tokens during the upgrade. Even if other API calls fail this is
slightly less disruptive. What I would do is shut down your entire cluster
except for one node and upgrade that node first. If you find that other
nodes can still validate tokens, leave two up, so that the upgrade restart
doesn't cause a blip. Then upgrade the second node as quickly as possible.
I'd also strongly recommend a db backup before you start.
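
A quick way to run that check (the endpoints and credentials here are
placeholders): get a token from the upgraded node and validate it against
an old one:

    import requests

    MITAKA = 'http://mitaka-node:5000'
    LIBERTY = 'http://liberty-node:5000'
    AUTH = {'auth': {'identity': {'methods': ['password'], 'password': {
        'user': {'name': 'admin', 'domain': {'id': 'default'},
                 'password': 'secret'}}}}}

    token = requests.post(MITAKA + '/v3/auth/tokens',
                          json=AUTH).headers['X-Subject-Token']
    resp = requests.get(LIBERTY + '/v3/auth/tokens',
                        headers={'X-Auth-Token': token,
                                 'X-Subject-Token': token})
    print('liberty validated mitaka token:', resp.status_code == 200)
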

We did this last week from an early Liberty commit to stable, with
incompatible db changes and a token format change, and only had a brief
keystone outage.
On Apr 14, 2016 7:39 AM, "Gyorgy Szombathelyi" <
gyorgy.szombathe...@doclerholding.com> wrote:

> Hi!
>
> I've just been experimenting with upgrading Liberty to Mitaka, and hit an
> issue: in Mitaka, the user table doesn't have a 'name' field, so running
> mixed versions of Keystone could result in:
>
> Unknown column 'user.name' in 'field list'
>
> in some operations when the DB is already upgraded to Mitaka, but some
> keystone instances in a HA setup are still Liberty.
>
> Is this change intentional? Should I ignore the problem and just
> upgrade all instances as fast as possible? Or have I just overlooked something?
>
> Br,
> György
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Newton midycle planning

2016-04-13 Thread Matt Fischer
I'd like to try to make it - no promises, so don't decide based on me -
but I'm with Adam:

R-14 June 27-01
or
R-11 July 18-22

work for me.


On Wed, Apr 13, 2016 at 8:19 PM, Adam Young  wrote:

> On 04/13/2016 10:07 PM, Morgan Fainberg wrote:
>
> It is that time again, the time to plan the Keystone midcycle! Looking at
> the schedule [1] for Newton, the weeks that make the most sense look to be
> (not in preferential order):
>
> R-14 June 27-01
>
> It might be interesting to have one this early in the cycle.
>
> R-12 July 11-15
>
> I won't be able to make this; planned family vacation this week.
>
> R-11 July 18-22
>
> Prefer this.
>
>
> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
> on previous attendance we can expect ~30 people to attend. Based upon all
> the information (other midcycles, other events, the US July4th holiday), I
> am thinking that week R-12 (the week of the newton-2 milestone) would be
> the best offering. Weeks before or after these three tend to push too close
> to the summit or too far into the development cycle.
>
> I am trying to arrange for a venue in the Bay Area (most likely will be
> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
> have done east coast and central over the last few midcycles.
>
> Please let me know your thoughts / preferences. In summary:
>
> * Venue will be Bay Area (more info to come soon)
>
> * Options of weeks (in general subjective order of preference): R-12,
> R-11, R-14
>
> Cheers,
> --Morgan
>
> [1] http://releases.openstack.org/newton/schedule.html
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-11 Thread Matt Fischer
On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova  wrote:

> Hey, openstackers!
>
> Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
> using this set of changes
> 
>  (that's
> currently on review - some final steps are required there to finish the
> work) and OSprofiler.
>
> Some preliminary results (all in one OpenStack node) can be found here
> 
>  (raw
> OSprofiler reports are not yet merged to some place and can be found here
> ). The full plan
> 
>  of
> what's going to be tested  can be found in the docs as well. In short I
> wanted to take a look how does Keystone changed its DB/Cache usage from
> Liberty to Mitaka, keeping in mind that there were several changes
> introduced:
>
>    - federation support was added (and made the DB schema a bit more complex)
>- Keystone moved to oslo.cache usage
>- local context cache was introduced during Mitaka
>
> First of all - *good job on making Keystone less DB-intensive when the
> cache is turned on*! With Keystone caching on, the number of DB queries
> made to the Keystone DB in Mitaka is on average half what it was in Liberty,
> comparing the same requests and topologies. Thanks, Keystone community, for
> making it happen :)
>
> However, I faced *two strange issues* during my experiments, and I'm
> kindly asking you, folks, to help me here:
>
>    - I've created bug #1567403 to share
>    information - when I turned caching on, the local context cache should
>    cache identical function calls within one API request so as not to ping
>    Memcache too often. Although I saw such calls, Keystone still used
>    Memcache to gather this information. Can someone take a look at this and
>    help me figure out what I am observing? At first sight the local context
>    cache should work, but for some reason I do not see it being used.
>    - One more filed bug - #1567413 - is about the opposite
>    situation :) When I turned the cache off explicitly in the
>    keystone.conf file, I still observed some of the values being fetched
>    from Memcache... Your help is very appreciated!
>
> Thanks in advance and sorry for a long email :)
>
> Cheers,
> Dina
>
>
Dina,

Thanks for starting this conversation. I had some weird perf results
comparing L to an RC release of Mitaka, but I was holding them until
someone else confirmed what I saw. I'm testing token creation and
validation. From what I saw, token validation slowed down in Mitaka. After
doing my benchmark runs, the traffic to memcache in Mitaka was 8x what it
was in Liberty. That implies more caching, but 8x is a lot, and even
memcache references are not free.

I know some of the Keystone folks are looking into this so it will be good
to follow up on it. Maybe we could talk about this at the summit?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-security] [Security]abandoned OSSNs?

2016-04-11 Thread Matt Fischer
Thanks Michael,

I'm following the thread and I've asked Thierry for this tag to be
subscribable here if we're not using openstack-security anymore so that I
can receive the follow-ups.



On Mon, Apr 11, 2016 at 8:28 AM, Michael Xin 
wrote:

> Matt:
> Thanks for asking this. I forwarded this email to the new email list so
> that folks with better knowledge can answer this.
>
>
> Thanks and have a great day.
>
> Yours,
> Michael
>
>
>
> -
> Michael Xin | Manager, Security Engineering - US
> Product Security  |Rackspace Hosting
> Office #: 501-7341   or  210-312-7341
> Mobile #: 210-284-8674
> 5000 Walzem Road, San Antonio, Tx 78218
>
> --------
> Experience fanatical support
>
> From: Matt Fischer 
> Date: Monday, April 11, 2016 at 9:19 AM
> To: "openstack-secur...@lists.openstack.org" <
> openstack-secur...@lists.openstack.org>
> Subject: [Openstack-security] abandoned OSSNs?
>
> Some folks from our security team here asked me to ensure them that our
> services were patched for all the OSSNs that are listed here:
> https://wiki.openstack.org/wiki/Security_Notes
>
> Most of these are straight-forward, but there are some OSSNs that have
> been allocated an ID but then abandoned. There is no detailed wiki page and
> my best google efforts lead me to a possible IRC mention and maybe an
> abandoned review. The two specifically are OSSN-50/51.
>
> So what am I to do with an "abandoned" OSSN? Has it been decided that
> there is no issue anymore? These are pretty old if I look at the dates
> framing the other OSSNs (49/52), so I assume they aren't urgent. Can we
> ignore these? They sound somewhat scary, for example, "keystonemiddleware
> can allow access after token revocation" but I have no means to say whether
> it affects us or how we can mitigate without more info.
>
> Thoughts?
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] puppet-trove remove templated guestagent.conf

2016-03-24 Thread Matt Fischer
Right now puppet-trove can configure guestagent.conf in two ways: first via
config options in the guestagent class, and second via a templated file that
taskmanager.pp handles by default [1]. I'd like to drop the templated
behavior, but it's not backwards compatible, so I would like to discuss it.

First, the templated file is essentially a fork of the
trove_guestagent_config options. Options have been added and moved to
different sections over time, and the template was never updated. I have a
fix up for some of this [2], but there's more work to do.

Second, I believe that the templated file is unnecessary. If you just want
to set guestagent.conf, but not run the service or install the packages,
you'd just do this:

  class {'::trove::guestagent':
    enabled        => false,
    manage_service => false,
    ensure_package => absent,
  }

Lastly, forcing guestagent.conf to re-use settings from taskmanager limits
how you can partition credentials for Rabbit. Since the guest agent runs on
VMs, I'd like to use separate Rabbit credentials for it than for the
taskmanager, which runs in my control plane. Using the templated file this
is not possible since settings are inherited from trove::taskmanager.

This change is not backwards compatible, so it would need a deprecation
cycle.

So with all that said, is there a reason to keep the current way of doing
things?


[1] -
https://github.com/openstack/puppet-trove/blob/2ccdb978fffe990e28512069a4c4f69465ace942/manifests/taskmanager.pp#L299-L304
[2] - https://review.openstack.org/#/c/297293/2
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][zaqar][cloudkitty] Default ports list

2016-03-10 Thread Matt Fischer
On Thu, Mar 10, 2016 at 2:29 PM, Xav Paice  wrote:

> Remember that we're talking here about all the projects, not just
> keystone.  I can't see that we'll move everything to subpaths at any time
> soon, and until that point we still need to at least make an informal
> agreement as to which 'default' port to expect services to live on.  Even
> if that's just for devstack until we get to the subpaths nirvana.
>
> It's great that services are looking to the catalog for the locations of
> endpoints - but unless we're comfortable that every cloud is going to
> select a bunch of (different) random ports for each service until such time
> as subpaths are a reality, then we need to communicate in some way.
>
> I don't see the need for a full web service environment in devstack - all
> that would really do is limit the choices that ops can make about the best
> web server/wsgi service.  If people concentrate efforts on apache/mod_wsgi,
> those wanting to use uwsgi, nginx, gunicorn, etc are going to have a hard
> road.  There are valid choices for using other web services (in fact, there
> are some massive arguments against mod_wsgi).
>
> A simple list is probably enough for a quick ref - it's not a massive
> blocker if two projects slip up and get the same port number, and yes if
> they're doing subpaths and not ports then great.  Doesn't need to be a gate
> item.  But it helps communications if we have a list, even if that's
> temporary.
>
> How about a 'default settings' list for a 'standard' reference
> environment?  When ops deviate from the list (and we will) that's a
> concious decision we make.  Should we say that the ports used in devstack
> are the default list, and if a new project wants to get into devstack then
> they need to choose a port not in use (or subpaths)?
>
>
+1. This is how we would set the default values in things like the puppet
modules. It's not a good experience if two puppet modules out of the box
collide with each other. As you said operators can make whatever choices
they want later, but lets make it a conscious decision to deviate from the
standard list rather than presenting someone standing up OpenStack a port
collision and leaving them to search for something open to use.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][zaqar][cloudkitty] Default ports list

2016-03-09 Thread Matt Fischer
This is not the first time. Monasca and Murano had a collision too[1]. When
this happens the changes trickle down into automation tools as well and
complicate things.

[1] https://bugs.launchpad.net/murano/+bug/1505785

On Wed, Mar 9, 2016 at 3:30 PM, Xav Paice  wrote:

> From an ops point of view, this would be extremely helpful information to
> share with various teams around an organization.  Even a simple wiki page
> would be great.
>
> On 10 March 2016 at 10:35, Fei Long Wang  wrote:
>
>> Hi all,
>>
>> Yesterday I just found cloudkitty is using the same default port that is
>> used by Zaqar now. So I'm wondering if there is any rule/policy that new
>> services need to be aware of. I googled but can't find anything
>> about this. The only link I can find is
>> http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html.
>> So my question is: should we document the default ports list in an official
>> place, given the big tent model? Thanks.
>>
>> --
>> Cheers & Best regards,
>> Fei Long Wang (王飞龙)
>> --
>> Senior Cloud Software Engineer
>> Tel: +64-48032246
>> Email: flw...@catalyst.net.nz
>> Catalyst IT Limited
>> Level 6, Catalyst House, 150 Willis Street, Wellington
>> --
>>
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-09 Thread Matt Fischer
On Wed, Mar 9, 2016 at 7:19 AM, Adam Young  wrote:

> On 03/09/2016 01:11 AM, Tim Bell wrote:
>
>
> From: Matt Fischer < m...@mattfischer.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday 8 March 2016 at 20:35
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [keystone] Using multiple token formats in a
> one openstack cloud
>
> I don't think your example is right: "PKI will validate that token
> without going to any keystone server". How would it track revoked tokens?
> I'm pretty sure that they still get validated; they're even stored in the
> DB.
>
> I also disagree that there are different use cases. Just switch to fernet
> and save yourself what's going to be weeks of pain with probably no
> improvement in anything with this idea.
>
>
> Are there any details on how to switch to Fernet for a running cloud? I
> can see a migration path where the cloud is stopped, the token format
> changed and the cloud restarted.
>
> It seems more complex (and maybe insane, as Adam would say) to do this for
> a running cloud without disturbing the users of the cloud.
>
> Tim
>
>
> So, Fernet does not persist, UUID does.  I would guess that a transition
> plan would involve being able to fall back to a persisted UUID if the
> Fernet validation does not work.
>


When we did this we had a few disadvantages. We also upgraded Keystone to
Liberty and switched to apache at the same time, which meant db migrations
and more work for puppet. Also, as I mentioned, the old Kilo keystone
middleware can't figure out what to do when you switch token providers, so
you either restart services or wait for them to think that their tokens have
expired. Because of this we used ansible scripts to orchestrate the process,
including db backups (standard when we upgrade a service).

Without this degree of difficulty, the transition should be pretty easy.
You set up the keys ahead of time, change the config file, and restart
keystone. The other services should figure it out. If not, you just do a
rolling restart with ansible of API services. Keystone is down for as long
as this takes and your API services are down until you can bounce them. We
maintain a "safe" list of services to restart (which we usually use for
Rabbit issues) and just ran through that list.
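
A minimal sketch of that cutover, assuming systemd, crudini, and keystone
running under apache; the exact provider value, paths, and service names
vary by release and distro, so treat this as illustrative only:

    # Stage the fernet key repository ahead of time; every keystone node
    # needs the same keys (typically /etc/keystone/fernet-keys/).
    keystone-manage fernet_setup --keystone-user keystone \
        --keystone-group keystone

    # Flip the token provider, then restart keystone.
    crudini --set /etc/keystone/keystone.conf token provider fernet
    systemctl restart apache2

    # Rolling restart of the services that consume tokens.
    for svc in nova-api neutron-server glance-api cinder-api; do
        systemctl restart "$svc"
    done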

You can go back later and remove all the junk you had to do for UUID
tokens, like the token-flush cron jobs.

I would recommend testing a "failed" transition as well. We did this in our
lab and documented steps for the fallback process to minimize the risk in
production.
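
A rough sketch of the fallback direction, under the same assumptions; the
backup path is hypothetical and stands in for whatever your standard
pre-change db backup produced:

    # Revert the token provider and restore keystone's token table.
    crudini --set /etc/keystone/keystone.conf token provider uuid
    mysql keystone < /backups/keystone-pre-fernet.sql  # hypothetical path
    systemctl restart apache2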

I'd be happy to share all my notes + ansible scripts, but they are probably
overly cautious since our change included an upgrade, which meant DB
migrations.


Re: [openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-08 Thread Matt Fischer
>
>
> I don't think your example is right: "PKI will validate that token
> without going to any keystone server". How would it track revoked tokens?
> I'm pretty sure that they still get validated; they are even stored in
> the DB.
>
> I also disagree that there are different use cases. Just switch to fernet
> and save yourself what's going to be weeks of pain with probably no
> improvement in anything with this idea.
>
>
> Are there any details on how to switch to Fernet for a running cloud? I
> can see a migration path where the cloud is stopped, the token format is
> changed, and the cloud is restarted.
>
> It seems more complex (and maybe insane, as Adam would say) to do this for
> a running cloud without disturbing the users of the cloud.
>
>
It requires a brief outage as you switch the provider over. We stopped all
but one node in the cluster, then modified it; we did liberty + fernet +
apache all at the same time to avoid multiple restarts. As for the other
services, newer keystone middlewares will realize "hey, my token doesn't
work anymore" and will get a new one. At the time we did ours, this was not
the case, so we bounced every service that uses the middleware. All in all
it was a brief outage, basically the length of time to upgrade a few
packages and restart a service on a single node. My opinion is that it was
far less invasive than something like upgrading neutron, but the APIs were
down for a brief time.

Come to my talk in Austin and we'll cover it a bit more.


Re: [openstack-dev] [keystone] [horizon] [qa] keystone versionless endpoints and v3

2016-03-08 Thread Matt Fischer
On Tue, Feb 23, 2016 at 8:49 PM, Jamie Lennox  wrote:

>
>
> On 18 February 2016 at 10:50, Matt Fischer  wrote:
>
>> I've been having some issues with keystone v3 and versionless endpoints
>> and I'd like to know what's expected to work exactly in Liberty and beyond.
>> I thought with v3 we used versionless endpoints but it seems to cause some
>> breakages and some disagreement as to what should work.
>>
>
> Excellent! I'm really glad someone is looking into this beyond the simple
> cases.
>
>
>> Here's what I've found:
>>
>> Using versionless endpoints:
>>  - horizon project selector doesn't work (v3 api configured in horizon
>> local_settings) [1]
>>  - keystone client doesn't work (expected v3 I think)
>>  - nova/neutron etc seem ok with a few exceptions [2]
>>
>> Adding /v3 to my endpoints:
>>  - openstackclient seems to double up the /v3 reference which fails [3],
>> this breaks puppet-openstack, in addition to general CLI usage.
>>
>> Adding /v2.0 to my endpoints:
>>  - things seem to work the best this way
>>  - this matches the install docs too
>>  - it's not very "v3-onic"
>>
>>
>> My goal is to be as v3 as possible, but everything needs to work 100%.
>> Given that...
>>
>> What's the correct and supported way to set up endpoints such that
>> Keystone v3 works?
>>
>
> So the problem with switching to v3 is that a lot of services and clients
> were designed to assume you would have a /v2.0 on your URL. To work with v3
> they therefore inspect the url and essentially s/v2.0/v3 before making
> calls. Any of the services using the keystoneclient/keystoneauth session
> stuff correctly shouldn't have this problem - but that is certainly not
> everyone.
>
> It does however explain why you see problems with /v3 where /v2.0 seems to
> work even for the v3 API.
>
>
>> Are services expected to handle versionless keystone endpoints properly?
>>
>
> Services should never need to manipulate the catalog. This is what's
> causing the problem. If they leave it up to the client to do this then it
> will handle the unversioned endpoint.
>
>
>>
>>
> Can I ignore that keystoneclient doesn't work with versionless? Does this
>> imply that services that use the python library (like Horizon) will also be
>> broken?
>>
>
> This I'm surprised by. Do you mean the keystone CLI utility that ships
> with keystoneclient? If so, the decision was made that it should never
> support v3 and that openstackclient should be used instead. I haven't
> actually looked at this in a long time, but we should probably fix it even
> though it's been deprecated for a long time now.
>
>
>> Do I need/Should I have both v2.0 and v3 endpoints in my catalog?
>>
> No. And particularly with the new catalog formats that went through the
> cross-project working group recently, we made the decision that these
> endpoints should not contain a version number at all. This is not ready yet
> but we are working towards that goal.
>
>
>> [1] it's making curl calls without a version on the endpoint, causing it
>> to fail. I will file a bug pending the outcome of this discussion.
>>
>> [2] specifically neutron_admin_auth_url in nova.conf doesn't seem to work
>> without a Keystone API version on it. For cinder keymgr_encryption_auth_url
>> also seems to need it. I assume I'll eventually also hit some of these:
>> https://etherpad.openstack.org/p/v3-only-devstack
>>
>
> Can you file bugs for both of these? I've worked on both these sections
> before so should be able to have a look into it.
>
> I was going to finish by saying that we have unversioned endpoints in
> devstack - but looking again now, we don't :( There have been various
> reverted patches in the v3 transition and this must have been one of them.
>
> For now I would suggest keeping the endpoints with the /v2.0 prefix, as
> even things using the v3 API know how to work around this. The goal is to
> go versionless everywhere (including other services; a long-term goal, but
> the others will be easier than keystone), and anything you find that isn't
> working isn't using the clients correctly, so file a bug and add me to it.
>
>
> Jamie
>

Jamie,

Apologies for the delay in response, and thanks for the information. I had
come to the same conclusion as you after sending this: leaving /v2.0 on the
URLs in the catalog but specifying v3 in clients seems to work best for now
in Liberty. I look forward to the day when v3 + versionless is the default!

I will bring my test env back up later this week and work on bugs for both
issues that I called out.
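
For reference, a sketch of registering the identity endpoints that way; the
host is a placeholder and the syntax assumes a reasonably current
openstackclient:

    openstack endpoint create --region RegionOne \
        identity public http://keystone.example.com:5000/v2.0
    openstack endpoint create --region RegionOne \
        identity internal http://keystone.example.com:5000/v2.0
    openstack endpoint create --region RegionOne \
        identity admin http://keystone.example.com:35357/v2.0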


Re: [openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-08 Thread Matt Fischer
I don't think your example is right: "PKI will validate that token without
going to any keystone server". How would it track revoked tokens? I'm
pretty sure that they still get validated; they are even stored in the DB.

I also disagree that there are different use cases. Just switch to fernet
and save yourself what's going to be weeks of pain with probably no
improvement in anything with this idea.

On Tue, Mar 8, 2016 at 9:56 AM, rezroo  wrote:

> The basic idea is to let the openstack clients decide what sort of token
> optimization to use - for example, while a normal client uses uuid tokens,
> some services like heat or magnum may opt for pki tokens for their
> operations. A service like nova, configured for PKI, will validate that
> token without going to any keystone server, but if it gets a uuid token
> it validates it with a keystone endpoint. I'm under the impression that
> the different token formats have different use cases, so I am wondering if
> there is a conceptual reason why multiple token formats are an either/or
> scenario.
>
>
> On 3/8/2016 8:06 AM, Matt Fischer wrote:
>
> This would be complicated to set up. How would the OpenStack services
> validate the token? Which keystone node would they use? A better question
> is: why would you want to do this?
>
> On Tue, Mar 8, 2016 at 8:45 AM, rezroo  wrote:
>
>> Keystone supports both tokens and ec2 credentials simultaneously, but as
>> far as I can tell, will only do a single token format (uuid, pki/z, fernet)
>> at a time. Is it possible or advisable to configure keystone to issue
>> multiple token formats? For example, I could configure two keystone
>> servers, each using a different token format, so depending on endpoint
>> used, I could get a uuid or pki token. Each service can use either token
>> format, so is there a conceptual or implementation issue with this setup?
>> Thanks,
>> Reza
>>


Re: [openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-08 Thread Matt Fischer
This would be complicated to set up. How would the OpenStack services
validate the token? Which keystone node would they use? A better question
is: why would you want to do this?

On Tue, Mar 8, 2016 at 8:45 AM, rezroo  wrote:

> Keystone supports both tokens and ec2 credentials simultaneously, but as
> far as I can tell, will only do a single token format (uuid, pki/z, fernet)
> at a time. Is it possible or advisable to configure keystone to issue
> multiple token formats? For example, I could configure two keystone
> servers, each using a different token format, so depending on endpoint
> used, I could get a uuid or pki token. Each service can use either token
> format, so is there a conceptual or implementation issue with this setup?
> Thanks,
> Reza
>


Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Matt Fischer
+1 from me!

gmail/openstack-dev is doing its thing where I see your email 4 hours
before Emilien's original, so apologies for the reply ordering

On Fri, Mar 4, 2016 at 8:49 AM, Cody Herriges  wrote:

> Emilien Macchi wrote:
> > Hi,
> >
> > To scale up our review process, we created puppet-keystone-core, and it
> > has worked pretty well so far.
> >
> > I propose that we continue this model and create puppet-neutron-core.
> >
> > I also propose to add Sergey Kolekonov in this group.
> > He's done a great job helping us make puppet-neutron rock-solid for
> > deploying OpenStack networking.
> >
> > http://stackalytics.com/?module=puppet-neutron&metric=marks
> > http://stackalytics.com/?module=puppet-neutron&metric=commits
> > 14 commits and 47 reviews, present on IRC during meetings & bug triage,
> > he's always helpful. He has a very good understanding of Neutron &
> > Puppet so I'm quite sure he would be a great addition.
> >
> > As usual, please vote!
>
> +1 from me.  Excited to continue seeing neutron get better.
>
> --
> Cody
>
>


Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-26 Thread Matt Fischer
This worked great. Thanks for this and the upstream fix.

On Fri, Feb 26, 2016 at 6:25 AM, Sofer Athlan-Guyot 
wrote:

> Hi Matt,
>
> Matt Fischer  writes:
>
> > I ended up symlinking the r10k binary I have installed to the place it
> > wants it to be and it worked. I do have that in my Gemfile. Question
> > is, can we make this work without manual steps?
>
> Well, I thought I had a smart way to fix it, as it was working in my
> env.  It turns out that it was only because of a strange setup that my
> "GEM_HOME=~/" trick was working.
>
> In the end, I discovered that bundler always sets GEM_HOME, and so the
> puppet-openstack-integration scripts always set GEM_BIN_DIR to the
> wrong path.
>
> I've created this bug report[1] and this fix[2].  It seems to be working
> well on my env.  It shouldn't change anything for the OpenStack CI, as the
> install_all function is not used inside a proper Zuul environment.
>
> One less pain,
>
> I discovered afterwards that you've already created the bug report
> there[3].  Sorry for the duplicate.
>
> As a side note, it's kind of hard to test, as the directory where the
> function lives is recreated at each run of "bundle exec rake spec_prep"
> from gerrit/master. So here is what I did:
>  1. run it once and let it fail;
>  2. apply this:
>
> cat > /tmp/fix <<'EOF'
> --- lib/puppet-openstack_spec_helper/rake_tasks.rb.orig  2016-02-26 14:19:00.955396358 +0100
> +++ lib/puppet-openstack_spec_helper/rake_tasks.rb  2016-02-26 14:19:09.856505122 +0100
> @@ -49,7 +49,7 @@
>  zuul_branch = ENV['ZUUL_BRANCH']
>  zuul_url = ENV['ZUUL_URL']
>  repo = 'openstack/puppet-openstack-integration'
> -rm_rf(repo)
> +#rm_rf(repo)
>  if File.exists?('/usr/zuul-env/bin/zuul-cloner')
>    zuul_clone_cmd = ['/usr/zuul-env/bin/zuul-cloner']
>    zuul_clone_cmd += ['--cache-dir', '/opt/git']
> @@ -59,7 +59,7 @@
>    zuul_clone_cmd += ['git://git.openstack.org', "#{repo}"]
>    sh(*zuul_clone_cmd)
>  else
> -  sh("git clone https://git.openstack.org/#{repo} #{repo}")
> +#  sh("git clone https://git.openstack.org/#{repo} #{repo}")
>  end
>  script = ['env']
>  script += ["PUPPETFILE_DIR=#{Dir.pwd}/spec/fixtures/modules"]
> EOF
>
> running this (from your bundler env):
>
>    cat /tmp/fix | patch -d $(bundle show puppet-openstack_spec_helper) -p0
>
> and then, still from your bundler env, you can apply the patch:
>
> --- openstack/puppet-openstack-integration/functions.orig  2016-02-26 14:22:10.246709340 +0100
> +++ openstack/puppet-openstack-integration/functions  2016-02-26 14:22:15.395772257 +0100
> @@ -48,7 +48,7 @@
>  # - ``SCRIPT_DIR`` must be set to script path
>  # - ``GEM_BIN_DIR`` must be set to Gem bin directory
>  install_all() {
> -  PUPPETFILE=${SCRIPT_DIR}/Puppetfile ${GEM_BIN_DIR}r10k puppetfile install -v
> +  PUPPETFILE=${SCRIPT_DIR}/Puppetfile r10k puppetfile install -v
>  }
>
>  # Install Puppet OpenStack modules and dependencies by using
>
> Kinda complicated ... certainly why I didn't bother earlier.
>
> [1] https://bugs.launchpad.net/puppet-openstack-integration/+bug/1550331
> [2] https://review.openstack.org/285285
> [3] https://bugs.launchpad.net/puppet-keystone/+bug/1548872
>
> >
> > On Thu, Feb 18, 2016 at 4:57 PM, Alex Schultz 
> > wrote:
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Feb 18, 2016 at 3:26 PM, Matt Fischer
> >  wrote:
> >
> >
> > Is anyone able to share the secret of running spec tests since
> > the r10k transition? bundle install && bundle exec rake spec
> > have issues because r10k is not being installed. Since I'm not
> > the only one hopefully this question will help others.
> >
> >
> >
> > +
> >
>  
> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
> >
> > + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
> >
>  
> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
> > line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or
> > directory
> > rake aborted!
> >
> >
> >
> > I assume you were trying to run the tests on the keystone module
> > so it should have been installed with the bundle install as it is
> > listed in the Gemfile[0]. Are you sure you

Re: [openstack-dev] [puppet] Austin Design Summit space needs

2016-02-24 Thread Matt Fischer
On Wed, Feb 24, 2016 at 8:30 AM, Emilien Macchi  wrote:

> Puppet OpenStack folks,
>
> As usual, Thierry Carrez sent an e-mail to PTLs about space needs for
> the next OpenStack Summit in Austin.
>
>
> We can have 3 kinds of slots:
>
> * Fishbowl slots (Wed-Thu) - we had 2 in Tokyo.
> Our traditional largish rooms organized in fishbowl style, with
> advertised session content on the summit schedule for increased external
> participation. Ideal for when wider feedback is essential.
>
> * Workroom slots (Tue-Thu) - we had 3 in Tokyo.
> Smaller rooms organized in boardroom style, with topic buried in the
> session description, in an effort to limit attendance and not overcrowd
> the room. Ideal to get work done and prioritize work in small teams.
>
> * Contributors meetup (Fri) - we had 0 in Tokyo.
> Half-day session(s) on the Friday to get into the Newton action while
> decisions and plans are still hot, or to finish discussions started
> during the week, whatever works for you.
>
>
> I suggest we keep the same model as Tokyo, I think it worked pretty well
> for us.
> Though I'm wondering if we should also ask for a contributors meetup?
> Do we really need that?
>
>
> Any feedback from developers and operators are highly welcome.
>


I think what we had in Tokyo worked pretty well for the workrooms I
attended. I don't recall, however, if we did a general feedback/issues
session in a larger room? Chris or Colleen used to lead these back in the
ATL/Paris days. With some of the questions on the ML recently, it seems like
it would be interesting to find out what issues people are having outside of
the usual operators. Perhaps that's one of the fishbowls?


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Matt Fischer
>
> >  * would it better to keep the ocata cycle at a more normal length, and
> >then run the "contributor events" in Mar/Sept, as opposed to Feb/Aug?
> >(again to avoid the August black hole)
> >
>
> Late March is treacherous in the US, as spring break is generally around
> the last week of March. So I think it just has to stay mid-March or
> earlier.
>
>
Spring break here and in many other places is the 2nd week of March, but it
varies by school district and state. I think any week in March is bad in
general if you're worried about this, but it will be impossible to avoid
all of them.


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Matt Fischer
On Mon, Feb 22, 2016 at 11:51 AM, Tim Bell  wrote:

>
>
>
>
>
> On 22/02/16 17:27, "John Garbutt"  wrote:
>
> >On 22 February 2016 at 15:31, Monty Taylor  wrote:
> >> On 02/22/2016 07:24 AM, Russell Bryant wrote:
> >>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez <
> thie...@openstack.org
>  > wrote:
>  Hi everyone,
>  TL;DR: Let's split the events, starting after Barcelona.
> >>> This proposal sounds fantastic.  Thank you very much to those that help
> >>> put it together.
> >> Totally agree. I think it's an excellent way to address the concerns and
> >> balance all of the diverse needs we have.
> >
> >tl;dr
> >+1
> >Awesome work ttx.
> >Thank you!
> >
> >Cheaper cities & venues should make it easier for more contributors to
>attend. That's a big deal. This also feels like enough notice to plan
> >for that.
> >
> >I think this means summit talk proposal deadline is both after the
> >previous release, and after the contributor event for the next
> >release? That should help keep proposals concrete (less guess work
> >when submitting). Nice.
> >
> >Dev wise, it seems equally good timing. Initially I was worried about
> >the event distracting from RC bugs, but actually I can see this
> >helping.
> >
> >I am sure there are more questions that will pop up. Like I assume
> >this means there is no ATC free pass to the summit? And I guess a
> >small nominal fee for the contributor meetup (like the recent ops
>meetup, to help predict numbers accurately)? I guess that helps
> >level the playing field for contributors who don't put git commits in
> >the repo (I am thinking vocal operators that don't contribute code).
> >But I probably shouldn't go into all that just yet.
>
> I would like to find a way to allow contributors cheaper access to the
> summits. Many of the devOPS contributors are patching test cases,
> configuration management recipes and documentation which should be rewarded
> in some form.
>
> Assuming that many of the ATCs are not so motivated to attend the summit,
> the cost in offering access to the event would not be significant.
>
> Charging for the Ops meetups was, to my understanding, more to confirm
> commitment to attend given limited space.
>
> Thus, I would be in favour of a preferential rate for contributors
> (whether ATC is the right criteria is a different question) for summits.
>
>
> Tim


I believe this is already the case. Unless I'm mistaken, contributing to a
big tent config management project like the OpenStack puppet modules or
Chef counts for ATC. I'm not sure if OSAD is big tent, but if so it would
also count. Test cases and docs also already count.


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Matt Fischer
Cross-post to openstack-operators...

As an operator, there's value in me attending some of the design summit
sessions to provide feedback and guidance. But I don't really need to be in
the room for a week discussing minutiae of implementations. So I probably
can't justify 2 extra trips just to give a few hours of
feedback/discussion. If this is indeed the case for some other folks we'll
need to do a good job of collecting operator feedback at the operator
sessions (perhaps hopefully with reps from each major project?). We don't
want projects operating in a vacuum when it comes to major decisions.

Also where do the current operators design sessions and operators midcycle
fit in here?

(apologies for not replying directly to the first message, gmail seems to
have lost it).



On Mon, Feb 22, 2016 at 8:24 AM, Russell Bryant  wrote:

>
>
> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez 
> wrote:
>
>> Hi everyone,
>>
>> TL;DR: Let's split the events, starting after Barcelona.
>>
>
> This proposal sounds fantastic.  Thank you very much to those that help
> put it together.
>
> --
> Russell Bryant
>


Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Matt Fischer
You shouldn't have to do any of that; it should just work. I have OSC 2.0.0
in my environment, though (Ubuntu). I'm just guessing, but perhaps that
client is too old? Maybe a Fedora user could recommend a version.
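
A quick, hedged way to check along those lines; the token and URL below
just mirror the env vars from the failing run, so substitute your own:

    # Confirm which client version is installed.
    openstack --version

    # Re-run the failing provider command with v3 set explicitly.
    env OS_IDENTITY_API_VERSION=3 OS_TOKEN=admin_token \
        OS_URL=http://127.0.0.1:35357/v3 \
        openstack service list --long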

On Fri, Feb 19, 2016 at 7:38 AM, Matthew Mosesohn 
wrote:

> Hi Michal,
>
> Just add --os-identity-api-version=3 to your command and it will work.
> The provider uses the v3 openstackclient via the env var
> OS_IDENTITY_API_VERSION=3. The default is still 2.
>
> Best Regards,
> Matthew Mosesohn
>
> On Fri, Feb 19, 2016 at 5:25 PM, Matt Fischer 
> wrote:
> > What version of openstack client do you have? What version of the module
> are
> > you using?
> >
> > On Feb 19, 2016 7:20 AM, "Ptacek, MichalX" 
> wrote:
> >>
> >> Hi all,
> >>
> >>
> >>
> >> I was playing some time with puppet-keystone deployments,
> >>
> >> and also reported one issue related to this:
> >>
> >> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
> >>
> >> but in general my observations are that keystone_service is using v3
> >> credentials with openstack cli commands that are not compatible
> >>
> >>
> >>
> >> e.g.
> >>
> >> Error: Failed to apply catalog: Execution of '/bin/openstack service
> list
> >> --quiet --format csv --long' returned 2: usage: openstack service list
> [-h]
> >> [-f {csv,table}] [-c COLUMN]
> >>   [--max-width ]
> >>   [--quote {all,minimal,none,nonnumeric}]
> >> openstack service list: error: unrecognized arguments: --long
> >>
> >>
> >>
> >>
> >>
> >> It can't be a bug, because the whole module will not work due to this :)
> >>
> >> I think I'm missing something important somewhere …
> >>
> >>
> >>
> >> My latest manifest file is :
> >>
> >>
> >>
> >> Exec { logoutput => 'on_failure' }
> >>
> >> package { 'curl': ensure => present }
> >>
> >>
> >>
> >> node keystone {
> >>
> >>
> >>
> >>   class { '::mysql::server': }
> >>
> >>   class { '::keystone::db::mysql':
> >>
> >> password => 'keystone',
> >>
> >>   }
> >>
> >>
> >>
> >>   class { '::keystone':
> >>
> >> verbose => true,
> >>
> >> debug   => true,
> >>
> >> database_connection => 'mysql://
> keystone:keystone@127.0.0.1/keystone',
> >>
> >> catalog_type=> 'sql',
> >>
> >> admin_token => 'admin_token',
> >>
> >>   }
> >>
> >>
> >>
> >>   class { '::keystone::roles::admin':
> >>
> >> email=> 'exam...@abc.com',
> >>
> >> password => 'ChangeMe',
> >>
> >>   }
> >>
> >>
> >>
> >>   class { '::keystone::endpoint':
> >>
> >> public_url => "http://${::fqdn}:5000/v2.0",
> >>
> >> admin_url  => "http://${::fqdn}:35357/v2.0",
> >>
> >>   }
> >>
> >> }
> >>
> >>
> >>
> >> Env variables look as follows (before service list is called with
> >> --long):
> >>
> >> {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token",
> >> "OS_URL"=>"http://127.0.0.1:35357/v3"}
> >>
> >> Debug: Executing: '/bin/openstack service list --quiet --format csv
> >> --long'
> >>
> >>
> >>
> >> Thanks for any hint,
> >>
> >> Michal
> >>
> >> --
> >> Intel Research and Development Ireland Limited
> >> Registered in Ireland
> >> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> >> Registered Number: 308263
> >>
> >> This e-mail and any attachments may contain confidential material for
> the
> >> sole use of the intended recipient(s). Any review or distribution by
> others
> >> is strictly prohibited. If you are not the intended recipient, please
> >> contact the sender and delete all copies.
> >>
> >>
> >>


Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Matt Fischer
What version of openstack client do you have? What version of the module
are you using?
On Feb 19, 2016 7:20 AM, "Ptacek, MichalX"  wrote:

> Hi all,
>
>
>
> I have been playing for some time with puppet-keystone deployments,
>
> and also reported one issue related to this:
>
> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
>
> but in general my observation is that keystone_service is using v3
> credentials with openstack CLI commands that are not compatible.
>
>
>
> e.g.
>
> Error: Failed to apply catalog: Execution of '/bin/openstack service list
> --quiet --format csv --long' returned 2: usage: openstack service list [-h]
> [-f {csv,table}] [-c COLUMN]
>   [--max-width ]
>   [--quote {all,minimal,none,nonnumeric}]
> openstack service list: error: unrecognized arguments: --long
>
>
>
>
>
> It can't be a bug, because the whole module will not work due to this :)
>
> I think I'm missing something important somewhere …
>
>
>
> My latest manifest file is :
>
>
>
> Exec { logoutput => 'on_failure' }
>
> package { 'curl': ensure => present }
>
>
>
> node keystone {
>
>
>
>   class { '::mysql::server': }
>
>   class { '::keystone::db::mysql':
>
> password => 'keystone',
>
>   }
>
>
>
>   class { '::keystone':
>
> verbose => true,
>
> debug   => true,
>
> database_connection => 'mysql://keystone:keystone@127.0.0.1/keystone',
>
> catalog_type=> 'sql',
>
> admin_token => 'admin_token',
>
>   }
>
>
>
>   class { '::keystone::roles::admin':
>
> email=> 'exam...@abc.com',
>
> password => 'ChangeMe',
>
>   }
>
>
>
>   class { '::keystone::endpoint':
>
> public_url => "http://${::fqdn}:5000/v2.0",
>
> admin_url  => "http://${::fqdn}:35357/v2.0",
>
>   }
>
> }
>
>
>
> Env variables look as follows (before service list is called with --long):
>
> {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token", "OS_URL"=>"
> http://127.0.0.1:35357/v3"}
>
> Debug: Executing: '/bin/openstack service list --quiet --format csv --long'
>
>
>
> Thanks for any hint,
>
> Michal
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>


Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Matt Fischer
I ended up symlinking the r10k binary I have installed to the place the
script wants it to be, and that worked. I do have r10k in my Gemfile. The
question is, can we make this work without manual steps?
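
For anyone hitting the same thing, a sketch of that symlink workaround; the
gem bin path comes from the error below and will differ per ruby setup:

    mkdir -p /var/lib/gems/1.9.1/bin
    ln -s "$(command -v r10k)" /var/lib/gems/1.9.1/bin/r10k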

On Thu, Feb 18, 2016 at 4:57 PM, Alex Schultz  wrote:

>
>
> On Thu, Feb 18, 2016 at 3:26 PM, Matt Fischer 
> wrote:
>
>> Is anyone able to share the secret of running spec tests since the r10k
>> transition? bundle install && bundle exec rake spec have issues because
>> r10k is not being installed. Since I'm not the only one hopefully this
>> question will help others.
>>
>> +
>> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
>> + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
>> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
>> line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
>> rake aborted!
>>
>
> I assume you were trying to run the tests on the keystone module so it
> should have been installed with the bundle install as it is listed in the
> Gemfile[0].  Are you sure your module is up to date?
>
> -Alex
>
> [0] https://github.com/openstack/puppet-keystone/blob/master/Gemfile#L26
>
>


[openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Matt Fischer
Is anyone able to share the secret of running spec tests since the r10k
transition? bundle install && bundle exec rake spec have issues because
r10k is not being installed. Since I'm not the only one hopefully this
question will help others.

+
PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
+ /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
rake aborted!


[openstack-dev] [keystone] [horizon] [qa] keystone versionless endpoints and v3

2016-02-17 Thread Matt Fischer
I've been having some issues with keystone v3 and versionless endpoints,
and I'd like to know exactly what's expected to work in Liberty and beyond.
I thought with v3 we used versionless endpoints, but this seems to cause
some breakage, and there is some disagreement as to what should work.

Here's what I've found:

Using versionless endpoints:
 - horizon project selector doesn't work (v3 api configured in horizon
local_settings) [1]
 - keystone client doesn't work (expected v3 I think)
 - nova/neutron etc seem ok with a few exceptions [2]

Adding /v3 to my endpoints:
 - openstackclient seems to double up the /v3 reference, which fails [3];
this breaks puppet-openstack, in addition to general CLI usage.

Adding /v2.0 to my endpoints:
 - things seem to work the best this way
 - this matches the install docs too
 - it's not very "v3-onic"


My goal is to be as v3 as possible, but everything needs to work 100%.
Given that...

What's the correct and supported way to set up endpoints such that Keystone
v3 works?

Are services expected to handle versionless keystone endpoints properly?

Can I ignore that keystoneclient doesn't work with versionless? Does this
imply that services that use the python library (like Horizon) will also be
broken?

Do I need/Should I have both v2.0 and v3 endpoints in my catalog?


[1] it's making curl calls without a version on the endpoint, causing it to
fail. I will file a bug pending the outcome of this discussion.

[2] specifically neutron_admin_auth_url in nova.conf doesn't seem to work
without a Keystone API version on it. For cinder keymgr_encryption_auth_url
also seems to need it. I assume I'll eventually also hit some of these:
https://etherpad.openstack.org/p/v3-only-devstack

[3] "Making authentication request to
http://127.0.0.1:5000/v3/v3/auth/tokens";


Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-15 Thread Matt Fischer
Emilien,

More tags like this cannot hurt; they make it easier to follow things.
Thanks for doing this.

On Mon, Feb 15, 2016 at 9:13 AM, Emilien Macchi  wrote:

> Hi,
>
> While Puppet module releases are independently managed, we have some
> requests from both RDO & Debian folks to push a first tag in our Puppet
> modules for the Mitaka release, so they can start providing Mitaka
> packaging based on a tag, and not on commits.
>
> This is something we never did before, usually we wait until the end of
> the cycle and try to push a tag soon after the official release.
> But we want to experiment beta tags and see if it helps.
>
> The Mitaka tag would be 8.0.0b1 and pushed by the end of February (I'll
> work on it).
> Though stable/mitaka branch won't be created until official Mitaka
> release. Same thing for release notes, that will be provided at the end
> of the cycle as usual.
>
> Any thoughts are welcome,
> --
> Emilien Macchi
>
>


Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-05 Thread Matt Fischer
I'm not sure, tbh; we don't have to deal with a proxy. But why not just
comment this part out for now?
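
If commenting it out isn't enough, a hedged workaround in the spirit of the
devstack approach quoted below: fetch the image through the proxy yourself,
then create it from a local file instead of --copy-from (which makes the
glance API server, not your shell, do the download). The paths here are
assumptions:

    curl -x "$http_proxy" -o /tmp/cirros-0.3.4-x86_64-disk.img \
        http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    openstack image create cirros --public \
        --container-format bare --disk-format qcow2 \
        --file /tmp/cirros-0.3.4-x86_64-disk.img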

Please file a bug on this against puppet-glance if there is a config option
we could add.
On Feb 5, 2016 4:59 AM, "Ptacek, MichalX"  wrote:

> Thanks Matt, I was able to get the system to a vanilla state again ….
>
> And I also isolated the initial problem:
>
> my first puppet deployment failed on following error:
>
>
>
> Debug: Executing '/usr/bin/openstack image list --quiet --format csv
> --long'
>
> Debug: Executing '/usr/bin/openstack image create --format shell cirros
> --public --container-format=bare --disk-format=qcow2 --copy-from=
> http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img'
>
> Error: Execution of '/usr/bin/openstack image create --format shell cirros
> --public --container-format=bare --disk-format=qcow2 --copy-from=
> http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img'
> returned 1: 400 Bad Request: The HTTP URL is invalid. (HTTP 400)
>
> Error: /Stage[main]/Main/Glance_image[cirros]/ensure: change from absent
> to present failed: Execution of '/usr/bin/openstack image create --format
> shell cirros --public --container-format=bare --disk-format=qcow2
> --copy-from=
> http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img'
> returned 1: 400 Bad Request: The HTTP URL is invalid. (HTTP 400)
>
>
>
> *which is caused by glance not being able to download the image when
> behind a proxy (even when http_proxy, https_proxy and no_proxy are
> properly configured in .bashrc)*
>
> at least, what I have found about this so far is that other deployment
> tools handle it with some additional config to skip that part:
>
> like RDO
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1147716
>
> or store images locally before “image create” is called like from devstack:
>
>
>
> 2016-02-01 09:45:59.709 | + for image_url in '${IMAGE_URLS//,/ }'
>
> 2016-02-01 09:45:59.709 | + upload_image
> http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-uec.tar.gz
>
> …
>
> 2016-02-01 09:45:59.894 | + '[' -n
> /home2/openstack/devstack/files/images/cirros-0.3.4-x86_64-uec/cirros-0.3.4-x86_64-vmlinuz
> ']'
>
> 2016-02-01 09:45:59.894 | ++ openstack --os-cloud=devstack-admin image
> create cirros-0.3.4-x86_64-uec-kernel --public --container-format aki
> --disk-format aki
>
>
>
> Is there any known way to get puppet deployments working on systems
> behind a proxy?
>
>
>
> Thanks a lot,
>
> Michal
>
>
>
>
>
>
>
> *From:* Matt Fischer [mailto:m...@mattfischer.com]
> *Sent:* Thursday, February 04, 2016 7:12 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [puppet] compatibility of puppet upstream
> modules
>
>
>
> If you can't isolate the exact thing you need to get cleaned up here it
> can be difficult to unwind. You'll either need to read the code to see
> what's triggering the db setup (which is probably the package installs) or
> start on a clean box. I'd recommend the latter.
>
>
>
> On Thu, Feb 4, 2016 at 10:35 AM, Ptacek, MichalX 
> wrote:
>
> Hi Emilien,
>
>
>
> It seems that keystone database is not populated, because of something,
> which happened on previous runs (e.g. some packages installation),
>
>
>
> Following rows are visible just in log from first attempt
>
> Debug: Executing '/usr/bin/mysql -e CREATE USER 'keystone'@'127.0.0.1'
> IDENTIFIED BY PASSWORD '*936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A''
>
> Debug: Executing '/usr/bin/mysql -e GRANT USAGE ON *.* TO 
> 'keystone'@'127.0.0.1'
> WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR
> 0 MAX_UPDATES_PER_HOUR 0'
>
> ….
>
> ….
>
> I tried to clean the databases & uninstall the packages installed during
> deployment, but maybe I missed something, as it simply doesn't work :)
>
>
>
> Is there any procedure for restoring the system to a “vanilla state”
> before the puppet modules installation?
>
> It looks to me like when a deployment fails, it's very difficult to
> “unstack” it.
>
>
>
> Thanks in advance,
>
> Michal
>
>
>
> *From:* Ptacek, MichalX
> *Sent:* Thursday, February 04, 2016 11:14 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* RE: [openstack-dev] [puppet] compatibility of puppet upstream
> modules
>
>
>

Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-04 Thread Matt Fischer
If you can't isolate the exact thing you need to get cleaned up here, it
can be difficult to unwind. You'll either need to read the code to see
what's triggering the db setup (which is probably the package installs) or
start on a clean box. I'd recommend the latter.

On Thu, Feb 4, 2016 at 10:35 AM, Ptacek, MichalX 
wrote:

> Hi Emilien,
>
>
>
> It seems that keystone database is not populated, because of something,
> which happened on previous runs (e.g. some packages installation),
>
>
>
> Following rows are visible just in log from first attempt
>
> Debug: Executing '/usr/bin/mysql -e CREATE USER 'keystone'@'127.0.0.1'
> IDENTIFIED BY PASSWORD '*936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A''
>
> Debug: Executing '/usr/bin/mysql -e GRANT USAGE ON *.* TO 
> 'keystone'@'127.0.0.1'
> WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR
> 0 MAX_UPDATES_PER_HOUR 0'
>
> ….
>
> ….
>
> I tried to clean the databases & uninstall the packages installed during
> deployment, but maybe I missed something, as it simply doesn't work :)
>
>
>
> Is there any procedure for restoring the system to a “vanilla state”
> before the puppet modules installation?
>
> It looks to me like when a deployment fails, it's very difficult to
> “unstack” it.
>
>
>
> Thanks in advance,
>
> Michal
>
>
>
> *From:* Ptacek, MichalX
> *Sent:* Thursday, February 04, 2016 11:14 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* RE: [openstack-dev] [puppet] compatibility of puppet upstream
> modules
>
>
>
>
>
>
>
> -Original Message-
> From: Emilien Macchi [mailto:emil...@redhat.com ]
> Sent: Thursday, February 04, 2016 10:06 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [puppet] compatibility of puppet upstream
> modules
>
>
>
>
>
>
>
> On 02/03/2016 04:03 PM, Ptacek, MichalX wrote:
>
> > Hi all,
>
> >
>
> >
>
> >
>
> > I have one general question,
>
> >
>
> > currently I am deploying liberty openstack as described in
>
> > https://wiki.openstack.org/wiki/Puppet/Deploy
>
> >
>
> > Unfortunately puppet modules specified in
>
> > puppet-openstack-integration/Puppetfile are not compatible
>
>
>
> Did you take the file from stable/liberty branch?
>
>
> https://github.com/openstack/puppet-openstack-integration/tree/stable/liberty
>
>
>
> *[Michal Ptacek]*  I am deploying scenario003 with stable/liberty
>
> >
>
> > and some are also missing as visible from following output of “puppet
>
> > module list”
>
> >
>
> >
>
> >
>
> > Warning: Setting templatedir is deprecated. See
>
> > http://links.puppetlabs.com/env-settings-deprecations
>
> >
>
> >(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in
>
> > `issue_deprecation_warning')
>
> >
>
> > Warning: Module 'openstack-openstacklib' (v7.0.0) fails to meet some
>
> > dependencies:
>
> >
>
> >   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib'
>
> > (>=6.0.0 <7.0.0)
>
> >
>
> >   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib'
>
> > (>=6.0.0
>
> > <7.0.0)
>
> >
>
> > Warning: Module 'puppetlabs-postgresql' (v4.4.2) fails to meet some
>
> > dependencies:
>
> >
>
> >   'openstack-openstacklib' (v7.0.0) requires 'puppetlabs-postgresql'
>
> > (>=3.3.0 <4.0.0)
>
> >
>
> > Warning: Missing dependency 'deric-storm':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'deric-storm' (>=0.0.1 <1.0.0)
>
> >
>
> > Warning: Missing dependency 'deric-zookeeper':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'deric-zookeeper' (>=0.0.1
>
> > <1.0.0)
>
> >
>
> > Warning: Missing dependency 'dprince-qpid':
>
> >
>
> >   'openstack-cinder' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>
> >
>
> >   'openstack-manila' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>
> >
>
> >   'openstack-nova' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>
> >
>
> > Warning: Missing dependency 'jdowning-influxdb':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'jdowning-influxdb' (>=0.3.0
>
> > <1.0.0)
>
> >
>
> > Warning: Missing dependency 'opentable-kafka':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'opentable-kafka' (>=1.0.0
>
> > <2.0.0)
>
> >
>
> > Warning: Missing dependency 'puppetlabs-stdlib':
>
> >
>
> >   'antonlindstrom-powerdns' (v0.0.5) requires 'puppetlabs-stdlib' (>=
>
> > 0.0.0)
>
> >
>
> > Warning: Missing dependency 'puppetlabs-corosync':
>
> >
>
> >   'openstack-openstack_extras' (v7.0.0) requires 'puppetlabs-corosync'
>
> > (>=0.1.0 <1.0.0)
>
> >
>
> > /etc/puppet/modules
>
> >
>
> > ├──antonlindstrom-powerdns (v0.0.5)
>
> >
>
> > ├──duritong-sysctl (v0.0.11)
>
> >
>
> > ├──nanliu-staging (v1.0.4)
>
> >
>
> > ├──openstack-barbican (v0.0.1)
>
> >
>
> > ├──openstack-ceilometer (v7.0.0)
>
> >
>
> > ├──openstack-cinder (v7.0.0)
>
> >
>
> > ├──openstack-designate (v7.0.0)
>
> >
>
> > ├──openstack-glance (v7.0.0)
>
> >
>
> > ├──openstack-gnocchi (v7.0.0)
>
> >
>
> > ├──openstack-heat (v7

Re: [openstack-dev] [puppet] Midcycle Sprint Summary

2016-02-02 Thread Matt Fischer
Perhaps we should cover and assign each module in the meeting after the
release?

Actually removing the code and tests in many cases would be a good
assignment for people trying to get more commits and experience.
On Feb 1, 2016 2:22 PM, "Cody Herriges"  wrote:

> Emilien Macchi wrote:
> > Last week, we had our midcycle sprint.
> > Our group did a great job and here is a summary of what we worked on:
> >
>
> My attention at the office was stolen quite a few times by finishing up
> work for our production cloud deployment, but I worked on the
> puppet-cinder Mitaka deprecations when I could.  The first round is done,
> which was the removal of previously deprecated code, and I have
> started on a second pass, which is the new deprecations being
> introduced in Mitaka by upstream cinder.
>
> This is the first time I've sat down to actually just hunt and implement
> deprecations and the number one thing I learned is that it is really
> time consuming.  We'll need several people working on this if we want
> them complete for every module by release time.
>
>
> --
> Cody
>
>


Re: [openstack-dev] [keystone] URLs are not reported in the endpoint listing

2016-02-02 Thread Matt Fischer
I've seen similar odd behavior when using the Keystone client to try to
list endpoints created using the v3 API (via puppet). Try using the
openstack client and the v3 endpoint. Be sure to set
--os-identity-api-version 3.
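
Something along these lines (a sketch; host and credentials are
placeholders, and in v3 each interface shows up as its own row with its
URL):

    openstack --os-identity-api-version 3 \
        --os-auth-url http://host:5000/v3 \
        --os-username admin --os-password secretadmin \
        --os-project-name admin \
        --os-user-domain-name Default --os-project-domain-name Default \
        endpoint list
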
On Feb 2, 2016 3:06 AM, "Pradip Mukhopadhyay" 
wrote:

> Hello,
>
>
> I did a stacking recently and noticed a behavior:
>
> keystone --os-username admin --os-password secretadmin --os-tenant-name
> admin --os-auth-url http://host:5000/v2.0 endpoint-list
>
> returns null URLs for public/internal/admin.
>
>
>
> +--+---+---+-+--+--+
> |id|   region  | publicurl | internalurl |
> adminurl |service_id|
>
> +--+---+---+-+--+--+
> | 169f7f5090ea442c8ae534d6cd38c484 | RegionOne |   |
> |  | 8d30999ba36943359b4e7c4ae4f0a15c |
> | 255f7316074d4aecb34b69e3f28309c1 | RegionOne |   |
> |  | f26931e1fa43438da4c32fe530f33796 |
>
>
> Some of the keystone CLIs are not working: e.g. user-list works, but
> others, say service-list/role-list, do not. They return: The resource
> could not be found. (HTTP 404) (Request-ID:
> req-b52eace0-b65a-4ba3-afb9-616689a0833e)
>
>
> Not sure what I have messed up.
>
>
> Any help would be solicited.
>
>
>
>
> --pradip
>
>
>
>


Re: [openstack-dev] [puppet] separated controller/compute installations using puppet modules

2016-01-28 Thread Matt Fischer
The way I'd recommend is to write your own manifests that include the
openstack modules. I'd use roles and profiles, which make it easy to move
things around, but two simple manifests will also work. As Emilien once
said, we give you the ingredients but don't cook for you. If you want to
just do two manifests, the integration tests you found are a good start,
but you will have to maintain them on your own.

Don't use the openstack module; it's abandoned and should be
removed/deleted, IMHO.
On Jan 28, 2016 6:56 AM, "Ptacek, MichalX"  wrote:

> Hi All,
>
>
>
> I have one very general question,
>
> we would like to deploy role-separated (compute & controller) openstack
> via puppet modules.
>
> Unfortunately puppet-openstack-integration supports only “all-in-one”
> scenarios.
>
> https://github.com/openstack/puppet-openstack-integration
>
>
>
> this project looks like a continuation of the former
>
> https://forge.puppetlabs.com/puppetlabs/openstack
>
> which supported that, but ended its support with the Juno release ….
>
>
>
> we are targeting liberty deployments and would like to use puppet modules.
>
> What is the easiest way to deploy a role-separated openstack based on
> puppet modules (except fuel)?
>
>
>
> I miss this info in https://wiki.openstack.org/wiki/Puppet
>
>
>
> thanks,
>
> Michal
>
>
>
>
>
>
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>


Re: [openstack-dev] [puppet] Stepping down from Puppet Core

2016-01-27 Thread Matt Fischer
Mathieu,

Thank you for all the work you've done over the past few years in this
community. You've done a lot and also done a lot to help answer questions
and mentor new folks.

On Wed, Jan 27, 2016 at 1:13 PM, Mathieu Gagné  wrote:

> Hi,
>
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
>
> My day to day tasks and focus no longer revolve solely around Puppet and
> I lack dedicated time to contribute to the project.
>
> In the past months, I stopped actively reviewing changes compared to
> what I used to at the beginning when the project was moved to
> StackForge. Community code of conduct suggests I step down
> considerately. [1]
>
> I'm very proud of what the project managed to achieve in the past
> months. It would be a disservice to the community to pretend I'm still
> able or have time to review changes. A lot changed since and I can no
> longer keep up or pretend I can review changes pedantically or
> efficiently as I used to.
>
> Today is time to formalize and face the past months reality by
> announcing my wish to be removed from the core reviewers team.
>
> I will be available to answer questions or move ownership of anything I
> still have under my name.
>
> Wishing you the best.
>
> Mathieu
>
> [1] http://www.openstack.org/legal/community-code-of-conduct/
>


Re: [openstack-dev] [puppet] [infra] adding a third scenario in Puppet OpenStack integration jobs

2016-01-26 Thread Matt Fischer
Also +1 for ceph.

And Fernet is a great idea; Keystone is moving towards a day where it's the
default.

On Tue, Jan 26, 2016 at 2:20 PM, David Moreau Simard  wrote:

> +1 for adding puppet-ceph and Ceph integration in Nova, Cinder and Glance.
>
> This means there would be two scenarios involving Cinder (lvm+iscsi and
> RBD) and three scenarios involving Glance (file, RBD, Swift)
>
> I find it redundant to install components in the same way across all three
> scenarios. Perhaps we could have a keystone with fernet tokens in one
> scenario and another backend elsewhere.
>
> I'm only familiar with KVM as a hypervisor but maybe we could also throw
> another one in there (Xen ?)
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
> On Jan 26, 2016 12:06 PM, "Emilien Macchi"  wrote:
>
>> Hi folks,
>>
>> Puppet OpenStack integration jobs [1] are very helpful to perform
>> functional testing when deploying OpenStack with our modules.
>>
>> We current run scenario 001 and 002 on both centos7 & trusty, which is 4
>> jobs in the check queue.
>>
>> I would like to propose some changes, feel free to comment:
>> * a complete scenario with ceph, using openstack/puppet-ceph integrated
>> with Nova, Glance, Cinder and eventually Gnocchi.
>> * more Neutron testing: LBaaSv1 first, Octavia in the future, FWaaS,
>> VPNaaS.
>> * more services: murano, mistral, manila, designate
>> * switch from IPv4 local loopback (127.0.0.1) to IPv6 local loopback
>> binding.
>>
>> Those changes would probably require one more scenario, which represents
>> 2 more jobs. Adding the [infra] tag here so they can weigh in on it.
>>
>> Any feedback is welcome,
>> Thanks,
>>
>> [1] https://github.com/openstack/puppet-openstack-integration#description
>> --
>> Emilien Macchi
>>
>>


Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-24 Thread Matt Fischer
One thing that might be tough for operators is dealing with different
versions of openstack projects which require different versions of oslo.
Right now we have some services on Liberty and some not. As we containerize
more services, that's going to become even more true. Right now we can solve
this by using different versions of the puppet modules, but that will break
with a common module. To some extent we already have this problem with some
common stuff like openstacklib. I don't have a solution for this other than
being careful with deprecations, and I'll admit that this is just a
theoretical concern for now. As long as we stay within one release for our
modules, we should probably be OK.
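
To make the concern concrete, a common puppet-oslo module would be used
roughly like this (a hedged sketch; the defined type name and parameters
are assumed from the puppet-oslo README linked in the mail below):

    # One shared implementation writes the [oslo_messaging_rabbit]
    # options into whichever *_config resource it is keyed on.
    oslo::messaging::rabbit { 'nova_config':
      rabbit_userid   => 'nova',
      rabbit_password => 'secret',
    }

If puppet-nova expects one puppet-oslo version's behavior of that define
and puppet-designate expects another's, only one version can actually be
installed, which is exactly the skew problem above.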

On Sun, Jan 24, 2016 at 1:02 AM, Matthew Mosesohn 
wrote:

> I would personally like to see Keystone get transitioned first, but it
> really doesn't matter where we start if we reach the right goal in the end.
> Since Emelien's work on refactoring all the providers for puppet-keystone,
> it has become a test bed for project-wide features. I'm really excited to
> see consistency in oslo config across services, so keep up the good work!
>
> On Sun, Jan 24, 2016 at 7:05 AM, Xingchao Yu  wrote:
>
>> Hi, all:
>>
>> I spent some time collecting the oslo.* versions of the OpenStack
>> projects (those which have a related puppet module); please check the
>> following table:
>>
>> https://github.com/openstack/puppet-oslo#module-description
>>
>> From the table, we can see that most of the oslo.* library versions are
>> the same across the OpenStack projects (except for aodh and gnocchi).
>>
>> Given that, we could gradually use puppet-oslo to replace the oslo.*
>> configuration in the related modules.
>>
>> Thanks & Regards.
>>
>>
>> 2016-01-21 23:58 GMT+08:00 Emilien Macchi :
>>
>>>
>>>
>>> On 01/21/2016 08:15 AM, Doug Hellmann wrote:
>>> > Excerpts from Cody Herriges's message of 2016-01-19 15:50:05 -0800:
>>> >> Colleen Murphy wrote:
>>> >>> On Tue, Jan 19, 2016 at 9:57 AM, Xingchao Yu wrote:
>>> >>>
>>> >>> Hi, Emilien:
>>> >>>
>>> >>>  Thanks for your efforts on this topic. I didn't attend the V
>>> >>> release summit and missed the related discussion about puppet-oslo.
>>> >>>
>>> >>>  As I understand it, the reason for not using a unified way to manage
>>> >>> oslo_* parameters is that different oslo.* versions may exist between
>>> >>> OpenStack projects.
>>> >>>
>>> >>>  I have an idea to solve this potential problem: we can maintain
>>> >>> several versions of puppet-oslo, and each module can map to a
>>> >>> different version of puppet-oslo.
>>> >>>
>>> >>> It would be something like the following (the mapping info is not
>>> >>> real, just an example):
>>> >>>
>>> >>> In Mitaka release
>>> >>> puppet-nova maps to puppet-oslo with 8.0.0
>>> >>> puppet-designate maps to puppet-oslo with 7.0.0
>>> >>> puppet-murano maps to puppet-oslo with 6.0.0
>>> >>>
>>> >>> In Newton release
>>> >>> puppet-nova maps to puppet-oslo with 9.0.0
>>> >>> puppet-designate maps to puppet-oslo with 9.0.0
>>> >>> puppet-murano maps to puppet-oslo with 7.0.0
>>> >>>
>>> >>> For the simplest case of puppet infrastructure configuration, which
>>> is a
>>> >>> single puppetmaster with one environment, you cannot have multiple
>>> >>> versions of a single puppet module installed. This means you
>>> absolutely
>>> >>> cannot have an openstack infrastructure depend on having different
>>> >>> versions of a single module installed. In your example, a user would
>>> not
>>> >>>  be able to use both puppet-nova and puppet-designate since they are
>>> >>> using different versions of the puppet-oslo module.
>>> >>>
>>> >>> When we put out puppet modules, we guarantee that version X.x.x of a
>>> >>> given module works with the same version of every other module, and
>>> this
>>> >>> proposal would totally break that guarantee.
>>> >>>
>>> >>
>>> >> How does OpenStack solve this issue?
>>> >>
>>> >> * Do they literally install several different versions of the same
>>> >> python library?
>>> >> * Does every project vendor oslo?
>>> >> * Is the oslo library itself API compatible with older versions?
>>> >
>>> > Each Oslo library has its own version. Only one version of each
>>> > library is installed at a time. We use the global requirements list
>>> > to sync compatible requirements specifications across all OpenStack
>>> > projects to make them co-installable. And we try hard to maintain
>>> > API compatibility, using SemVer versioning to indicate when that
>>> > was not possible.
>>> >
>>> > If you want to have a single puppet module install all of the Oslo
>>> > libraries, you could pull the right versions from the
>>> upper-constraints.txt
>>> > file in the openstack/requirements repository. That file lists the
>>> > versions that were actually tested in the gate.
>>>
>>> Thanks for this feedback Doug!
>>> So I propose we create the module in op

Re: [openstack-dev] [puppet] proposing Alex Schultz part of core team

2016-01-05 Thread Matt Fischer
+1 from me!

On Tue, Jan 5, 2016 at 10:55 AM, Emilien Macchi  wrote:

> Hi,
>
> Alex Schultz (mwhahaha on IRC) has been a very active contributor over
> the last months in the Puppet OpenStack group:
> * He's doing a lot of reviews and they are very valuable. He's in my
> opinion fully aware of our conventions and has nice insights to improve
> our modules.
> * He's very helpful to work on bugs or new features when needed.
> * Always present during meetings and actively participating.
> * Always on IRC, he never hesitates to give a hand on something or help
> people.
>
> I think we're very lucky to have Alex as part of our group, and I would like
> to promote him to core reviewer of all our modules.
>
> Team, please vote if you like the idea,
>
> Thanks,
> --
> Emilien Macchi
>
>


Re: [openstack-dev] [puppet] deprecation warning everywhere issue

2015-12-22 Thread Matt Fischer
I've pinged you on IRC to this effect, but this has broken the stable
branches, which now have unresolvable dependencies on the service whose name
has changed.


Error: Could not find resource 'Keystone_endpoint[RegionOne/glance]' for
relationship on 'Service[glance-api]' on node
openstack-puppet-test.openstacklocal
Error: Could not find resource 'Keystone_endpoint[RegionOne/glance]' for
relationship on 'Service[glance-api]' on node
openstack-puppet-test.openstacklocal

This break is specifically in glance, here:

manifests/keystone/auth.pp

  if $configure_endpoint {
    Keystone_endpoint["${region}/${real_service_name}"] ~> Service <| name == 'glance-api' |>
    Keystone_endpoint["${region}/${real_service_name}"] -> Glance_image<||>
  }
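
One way to avoid the hard failure (a hedged sketch, not necessarily the fix
that will land in the reviews) is to put a collector on both sides of the
relationship, since an empty collector is a no-op rather than an error:

  if $configure_endpoint {
    # Both sides are collectors, so a renamed endpoint or service
    # degrades to "no relationship" instead of a catalog failure.
    Keystone_endpoint <| title == "${region}/${real_service_name}" |> ~> Service <| name == 'glance-api' |>
  }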

I have not checked the other modules. I will be around for reviews on this
if you ping me via email.



On Tue, Dec 22, 2015 at 1:42 PM, Matt Fischer  wrote:

> Thanks Emilien,
>
> This is what I was mentioning to you on IRC last week as a must fix for
> Mitaka. I'd like to also backport this to Liberty once it lands.
>
> On Mon, Dec 21, 2015 at 10:48 AM, Emilien Macchi 
> wrote:
>
>> Hello,
>>
>> I just reported [1], which affects puppet-keystone but also *all* modules.
>> Since [2], you now get a lot of warnings about the new way to declare
>> the keystone_endpoint resource.
>>
>> Having a lot of warnings (by default) is not really acceptable and
>> provides a poor end-user experience.
>>
>> The patch that will fix it in puppet-keystone is [3] (please review it).
>> To fix all the other modules, we need to update unit tests and sometimes
>> keystone/auth.pp in the module. It will require a Depends-On of the
>> puppet-keystone patch, which means the puppet-keystone patch will fail
>> integration tests (circular dependency). Example: [4] (puppet-glance).
>>
>> So here is the plan:
>> * let's review [3] but do not merge it.
>> * let's review [4] and other that will follow (on same Gerrit topic).
>> * Once all patches have been submitted, I'll send a patch to
>> puppet-openstack-integration with a Depends-On of all the other patches and
>> check integration testing, so we don't break our CI.
>>
>> You can follow all this work on the "endpoint/warnings" Gerrit topic [5].
>>
>> Any other suggestion is welcome,
>> Please review,
>>
>> [1] https://bugs.launchpad.net/puppet-keystone/+bug/1528308
>> [2]
>>
>> http://git.openstack.org/cgit/openstack/puppet-keystone/commit/?id=0a4e06abb0f5b3f324464ff5219d2885816311ce
>> [3] https://review.openstack.org/#/c/259996/
>> [4] https://review.openstack.org/#/c/260044/
>> [5] https://review.openstack.org/#/q/topic:endpoint/warnings
>> --
>> Emilien Macchi
>>
>>


Re: [openstack-dev] [puppet] including openstacklib::openstackclient

2015-12-08 Thread Matt Fischer
We decided in the meeting today to just do a naked include:

https://review.openstack.org/#/c/253311/
https://review.openstack.org/#/c/254824/
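
A minimal sketch of what that looks like in a module (hedged; it assumes
openstacklib::openstackclient picks up its package_ensure from hiera via
automatic parameter lookup, per Michael's suggestion below):

    # e.g. in a module's client.pp: declare the class unconditionally
    # and let hiera decide the package ensure value
    include ::openstacklib::openstackclient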

On Tue, Dec 8, 2015 at 11:29 AM, Cody Herriges  wrote:

> Matt Fischer wrote:
> > I found this bug in the liberty branch [1] over the weekend in the
> > handling of openstack client between glance & keystone. As a part of
> > fixing that I've discussed with Clayton and Michael Chapman just what
> > the right way is to include the openstackclient.
> >
> > Keystone does it by conditionally including the class in client.pp [2].
> > Glance does it with an ensure_resources call in the main class [3].
> >
> > Michael Chapman was of the opinion we should just include the
> > openstacklib::openstackclient unconditionally and let hiera figure it
> > out (hope I'm paraphrasing his opinion). That is cleaner but perhaps
> > less flexible.
> >
> > Whatever solution we pick, I want to be consistent and back-portable.
> >
> > Thoughts?
> >
> >
> > [1] - https://bugs.launchpad.net/puppet-openstacklib/+bug/1523643
> > [2]-
> https://github.com/openstack/puppet-keystone/blob/master/manifests/client.pp#L20-L26
>
> This way is "ok."
>
> > [3]-
> https://github.com/openstack/puppet-glance/blob/master/manifests/init.pp#L33
>
> The glance way seems bad since it maintains its own private
> implementation for a thing that is intended to be shared.
>
>
> The best is probably to just unconditionally include
> Class[openstacklib::openstackclient] using the include function across
> all modules and remove the option for each module to override the
> package_ensure parameter for Class[openstacklib::openstackclient].
> This'll leave you open to resource conflicts based on manifest parse
> order, though.  For example, if I want to set
> Class[openstacklib::openstackclient]'s package_ensure parameter to
> latest, I need to declare a class resource:
>
> class { '::openstacklib::openstackclient':
>   package_ensure => $ensure,
> }
>
> This is all fine and good if I do this at the top of the manifest in my
> composite class and then cross my fingers that all subsequent declarations
> of the class use the include function.  If this happens the other
> way around, Puppet will throw a duplicate resource definition error.
>
> To ease into the API and manifest change, you could basically combine the
> glance and keystone examples and use the ensure_resource function to
> ensure a class resource with the name
> ::openstacklib::openstackclient exists.  That just puts you in the
> situation where the first puppet-* module class to declare it with a
> certain set of parameters will win.
>
>
> --
> Cody
>
>


Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-08 Thread Matt Fischer
+1

On Tue, Dec 8, 2015 at 2:07 PM, Rich Megginson  wrote:

> On 12/08/2015 09:49 AM, Emilien Macchi wrote:
>
> Hi,
>
> Back in the "old days", Cody was already core on the modules, when they were
> hosted in the Puppetlabs namespace.
> His contributions [1] are very valuable to the group:
> * strong knowledge of Puppet and all its dependencies in general.
> * very helpful in debugging issues related to Puppet core or dependencies
> (beaker, etc).
> * regular attendance at our weekly meeting
> * pertinent reviews
> * a very good understanding of our coding style
>
> I would like to propose having him back as part of our core team.
> As usual, we need to vote.
>
>
> +1
>
> Thanks,
>
> [1]http://stackalytics.openstack.org/?metric=commits&release=all&project_type=all&user_id=ody-cat
>
>
>


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-23 Thread Matt Fischer
On Mon, Nov 23, 2015 at 9:42 AM, Morgan Fainberg 
wrote:

> Hi everyone,
>
> This email is being written in the context of Keystone more than any other
> project but I strongly believe that other projects could benefit from a
> similar evaluation of the policy.
>
> Most projects have a policy that prevents the following scenario (it is a
> social policy not enforced by code):
>
> * Employee from Company A writes code
> * Other Employee from Company A reviews code
> * Third Employee from Company A reviews and approves code.
>
> This policy has a lot of history as to why it was implemented. I am not
> going to dive into the depths of this history as that is the past and we
> should be looking forward. This type of policy is an actively distrustful
> policy. With the exception of a few potentially bad actors (again, not going
> to point anyone out here), most of the folks in the community who have been
> given core status on a project are trusted to make good decisions about
> code and code quality. I would hope that any/all of the Cores would also
> standup to their management chain if they were asked to "just push code
> through" if they didn't sincerely think it was a positive addition to the
> code base.
>
> Now within Keystone, we have a fair amount of diversity of core reviewers,
> but we each have our specialities and in some cases (notably KeystoneAuth
> and even KeystoneClient) getting the required diversity of reviews has
> significantly slowed/stagnated a number of reviews.
>
> What I would like us to do is to move to a trustful policy. I can
> confidently say that company affiliation meant very little to me when I was
> PTL and nominating someone for core. We should explore making a change to a
> trustful model, and allow for cores (regardless of company affiliation)
> review/approve code. I say this since we have clear steps to correct any
> abuses of this policy change.
>
> With all that said, here is the proposal I would like to set forth:
>
> 1. Code reviews still need 2x Core Reviewers (no change)
> 2. Code can be developed by a member of the same company as both core
> reviewers (and approvers).
> 3. If the trust that is being given via this new policy is violated, the
> code can [if needed], be reverted (we are using git here) and the actors in
> question can lose core status (PTL discretion) and the policy can be
> changed back to the "distrustful" model described above.
>
> I hope that everyone weighs what it means within the community to start
> moving to a trusting-of-our-peers model. I think this would be a net win
> and I'm willing to bet that it will remove noticeable roadblocks [and even
> make it easier to have an organization work towards stability fixes when
> they have the resources dedicated to it].
>
> Thanks for your time reading this.
>
> Regards,
> --Morgan
> PTL Emeritus, Keystone
>


I happen to disagree with it in the general case. Developers, even cores or
especially cores, can be subject to political and career pressure to get
changes merged. Some employees are judged by managers on the number of
commits/features they land. This puts pressure on them to push things
through. I hope this is the exception, but I think it does happen. Part of
being a core is wielding influence, soft power in other words; it's not
unreasonable to expect a core reviewer to be able to get a +2 from someone
outside their company. That's my opinion on the general case.

In the specific case of a project like puppet-openstack, we are not a large
team of reviewers and so although we generally try our best to avoid having
+2s from the same company or merging each other's work, it does sometimes
happen. We still strive to have at least one +2 from someone outside our
company. So I think some projects are already doing this (we are) but it
requires a strong PTL who is willing to call out abuse and an understanding
amongst the cores about what the social policy is.

So on a project-by-project basis I think rules may already be bent/modified
by the teams. I'm not sure if they're codified anywhere other than just
known as an expectation.


Re: [openstack-dev] [puppet] review the core-reviewer members

2015-11-19 Thread Matt Fischer
I too would like to thank Dan, Michael, and François for all their hard
work. Michael and Dan in particular have personally helped me learn a great
deal and have been helpful in answering questions.

On Thu, Nov 19, 2015 at 5:45 AM, Emilien Macchi  wrote:

> So here is a status:
>
> * François Charlier told me he's no longer working on Puppet
> OpenStack, and wants to be dropped from the core-reviewer list.
> I would like to personally thank him, he was the guy who showed me what
> is Puppet and how to write Puppet code. Thanks a lot for your work in
> our community, you're welcome anytime if you want to contribute again.
> * Dan Bode and Michael Chapman have, AFAIK, not been contributing to Puppet
> OpenStack for a while, judging by the stats. I think it makes sense to drop
> them too.
>
> Dan created the Puppet OpenStack modules some time ago, and thanks to his
> work, we created a community in OpenStack and reached maturity.
> Thank you for all your work on the modules, especially moving them to
> Stackforge back in 2013; it was a great move for everyone.
>
> Michael worked on the Puppet modules for a long time too and was a huge
> contributor until some months ago. AFAIK he's no longer working on them.
>
> Again, we will always welcome you back if you decide to contribute again.
>
> During the Summit, we decided to update the core-reviewer list to give
> new people a chance, but also to drop some people who no longer work on
> the project.
>
> I'll proceed with this action today:
> drop François, Dan and Michael from the core-reviewer list [1].
>
> Thanks again for all your contributions, I sincerely hope we will have
> the chance to work together again later.
>
> [1] https://review.openstack.org/#/admin/groups/134,members
>
>
> On 10/31/2015 03:40 PM, Emilien Macchi wrote:
> > Hi,
> >
> > At the summit we discussed about updating the core-reviewer members [1].
> > To continue the OpenStack meritocracy model, I believe we should keep
> > the list consistent with how our group is currently working and who is
> > really active [2].
> > Puppet OpenStack group would not be here without these people and we
> > really appreciate the work done by our community.
> > If you're not involved anymore in Puppet OpenStack project (following
> > meetings, mailing-list, doing reviews and sending patches), we would
> > appreciate your insight about updating this list.
> >
> > [1] https://review.openstack.org/#/admin/groups/134,members
> > [2]
> http://stackalytics.com/report/contribution/puppetopenstack-group/180
> >
> > Thanks,
> > --
> > Emilien Macchi
> >
> >
> >
>
> --
> Emilien Macchi
>


Re: [openstack-dev] [designate] Records for floating addresses are not removed when an instance is removed

2015-11-13 Thread Matt Fischer
You can do it like we did for Juno Designate, as covered in our Vancouver
talk starting at about 21 minutes:
https://www.youtube.com/watch?v=N8y51zqtAPA

We've not ported the code to Kilo or Liberty yet but the approach may still
work.


On Fri, Nov 13, 2015 at 9:49 AM, Jaime Fernández  wrote:

> When removing an instance (with one floating address assigned) in Horizon,
> designate-sink only receives an event for the instance removal. As a result,
> only the instance's records are removed, but the floating address records
> are not.
> I'm not sure if it's a bug in openstack (I guess that it should also
> notify about the unassignment of floating addresses) or whether it should be
> handled in the nova notification handler (
> https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py#L72
> ).
> However, it is not possible to add metadata to the floating IP records to
> save the instance_id and remove them easily when an instance is removed.
> What's the best approach to remove the floating address records of an
> instance that is being removed?
>


Re: [openstack-dev] [puppet] about $::os_service_default

2015-11-13 Thread Matt Fischer
This work is already being done by Clayton (and to a lesser extent me).
From the openstack modules POV it mainly involves moving the packaging code
into a separate place [1][2] and then integrating with puppet-os_docker [3].
The os_docker work has only been done for designate and heat, and of course
requires os_docker, which is not official and is rather specific to us.

As long as the external hooks are in place, folks could plug in their own
venv, docker, tarball, or whatever way to install code.

[1] -
https://github.com/openstack/puppet-designate/commit/5a37cc81276cb8f8ee6dca9b9b532930e6ac86de
[2] -
https://github.com/openstack/puppet-heat/commit/dca9fe942b99b9c30e31167e4736058767738f21
[3] - https://github.com/twc-openstack/puppet-os_docker



On Fri, Nov 13, 2015 at 11:13 AM, Cody Herriges  wrote:

> Yanis Guenane wrote:
> >
> > On 11/03/2015 02:57 PM, Emilien Macchi wrote:
> >> I'm seeing a lot of patches using the new $::os_service_default.
> >>
> >> Please stop trying to using it at this time. The feature is not stable
> >> yet and we're testing it only for puppet-cinder module.
> >> I've heard Yanis found something that is not backward compatible with
> >> logging, but he's away this week so I suggest we wait next week.
> >>
> >> In the meantime, please do not use $::os_service_default outside
> >> puppet-cinder.
> >>
> >> Thanks a lot,
> > After a deeper investigation, the issue with logging [1] is only true if
> > a user is using puppet-openstack only to configure the component and
> > not relying on it to install the RDO/UCA packages.
> >
> > On RDO, the file /usr/lib/systemd/system/openstack-cinder-api.service is
> > provided. It specifies :
> >
> >   ExecStart=/usr/bin/cinder-api --config-file
> > /usr/share/cinder/cinder-dist.conf \
> > --config-file /etc/cinder/cinder.conf --logfile
> /var/log/cinder/api.log
> >
> > On Ubuntu, the file /etc/init/cinder-api.conf is provided. It specfies :
> >
> >   exec start-stop-daemon --start --chuid cinder --exec
> /usr/bin/cinder-api \
> >  -- --config-file=/etc/cinder/cinder.conf
> > --log-file=/var/log/cinder/cinder-api.log
> >
> > In my understanding, this means that when using packages, neither log-dir
> > nor log-file will ever be taken into account.
> >
> > So the only use case that moving those values to $::os_service_default
> > might impact is people relying directly on the python package.
> >
> > This raises two questions I'd like to ask :
> >
> >   * Do a lot of people use the puppet-openstack modules relying directly
> > on the python package?
> >   * Should we be opinionated here? If a user relies on the python
> > packages, we can consider that an advanced use-case and expect the user
> > to know exactly what she needs to configure. Plus we do not handle the
> > use case where we want separate files for cinder-volume.log and
> > cinder-backup.log.
> >
> > [1]
> >
> https://trello.com/c/XLJJJBF0/71-move-modules-to-the-os-service-default-pattern
> >
>
> My opinion is that installing directly from python pip is not currently
> officially supported in the modules, and specifically trying to take that
> use case into account when we do not support it either forces us to go
> all in on supporting it or puts the modules in a state that thoroughly
> frustrates and misleads users.
>
> If we were going to put priority on which packaging systems to support
> next, I'd prefer docker over pip.
>
>
> --
> Cody
>
>


Re: [openstack-dev] [puppet] weekly meeting #58 and next week

2015-11-08 Thread Matt Fischer
We have a very light schedule if anyone would like to discuss bugs or other
issues, it would be a good time to do so.

On Sat, Nov 7, 2015 at 12:29 PM, Emilien Macchi 
wrote:

> Hello!
>
> Here's an initial agenda for our weekly meeting, Tuesday at 1500 UTC
> in #openstack-meeting-4:
>
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151110
>
> I'm off all next week (holidays without IRC, and rare access to
> e-mails...), so mfish will be chair.
>
> During the week, if you have any problem with Puppet CI, you can ping
> degorenko on IRC.
> When I come back, I'll work on 7.0.0 release to create our
> stable/liberty branch.
>
> Thanks and see you soon,
> Emilien
>


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-03 Thread Matt Fischer
Sorry I replied to this right away but used the wrong email address and it
bounced!

> I've appreciated all of Rich's v3 contributions to keystone. +1 from me.

On Tue, Nov 3, 2015 at 4:38 AM, Sofer Athlan-Guyot 
wrote:

> He's a very good reviewer with deep knowledge of keystone and puppet.
> Thank you, Richard, for your help.
>
> +1
>
> Emilien Macchi  writes:
>
> > At the Summit we discussed scaling up our team.
> > We decided to investigate the creation of sub-groups specific to our
> > modules that would have +2 power.
> >
> > I would like to start with puppet-keystone:
> > https://review.openstack.org/240666
> >
> > And I propose Richard Megginson as part of this group.
> >
> > Rich has been leading the puppet-keystone work since our Juno cycle.
> > Without his leadership and skills, I'm not sure we would have Keystone v3
> > support in our modules.
> > He's a good Puppet reviewer and takes care of backward compatibility.
> > He also has strong knowledge of how Keystone works. He's always
> > willing to lead our roadmap regarding identity deployment in
> > OpenStack.
> >
> > Having him on board is an awesome opportunity for us to be ahead of
> > other deployment tools and to support many features in Keystone that
> > real deployments actually need.
> >
> > I would like to propose him as part of the new puppet-keystone-core
> > group.
> >
> > Thank you Rich for your work, which is very appreciated.
>
> --
> Sofer Athlan-Guyot
>


[openstack-dev] [puppet] operator_roles in puppet-swift?

2015-11-01 Thread Matt Fischer
I'd like to get some clarification and hopefully correction on the values
for the two operator_roles variables. One is in manifests/keystone/auth.pp,
and it claims "Array of strings. List of roles Swift considers as admin.".
The other is in manifests/proxy/keystone.pp, and it claims to be "a list of
keystone roles a user must have to gain access to Swift." "Gain access to"
does not imply admin to me; it implies that without it basic features won't
work, but I'm not sure that's really what it means.
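
For reference, here is roughly how the two parameters are declared (a
hedged illustration; the class names are inferred from the file paths
above):

    # manifests/keystone/auth.pp -- roles Swift considers as admin
    class { '::swift::keystone::auth':
      operator_roles => ['admin', 'SwiftOperator'],
    }

    # manifests/proxy/keystone.pp -- roles required to access Swift
    class { '::swift::proxy::keystone':
      operator_roles => ['admin', 'SwiftOperator'],
    }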

So are these in fact separate concepts? What I read is that despite them
having the same name, one is for admins, and one is needed to use swift at
all. However, since they both default to ["admin", "SwiftOperator"], I
don't really think that's true.

Can someone clarify and then fix the comments or code?


Re: [openstack-dev] [Heat] publicURL vs internalURL for resource validation

2015-10-24 Thread Matt Fischer
From an operations point of view, I'd also prefer all service-to-service
calls to go through the internalURL. Is there a reason it's not the default?
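
A hedged sketch of the simpler override Attila asks about in question 2
below, assuming the baseline client defaults shown in config.py are
registered under a [clients] group and that heat_config from puppet-heat
can manage them:

    # One setting covers all clients unless a per-service
    # [clients_XXX] section overrides it.
    heat_config { 'clients/endpoint_type':
      value => 'internalURL',
    }
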
On Oct 24, 2015 7:56 AM, "Attila Szlovencsak" 
wrote:

> Hi!
>
> I am using Openstack Kilo (2015.1.1)
> As I learned from the code, heat-engine uses the endpoint type "publicURL"
> when validating templates. I also see that I can override that from
> heat.conf via [clients_XXX]/endpoint_type.
>
>
> heat/engine/clients/os/nova.py
> 
> def _create(self):
>     endpoint_type = self._get_client_option('nova', 'endpoint_type')
>     management_url = self.url_for(service_type='compute',
>                                   endpoint_type=endpoint_type)
>
>
> /heat/common/config.py
> =
> # these options define baseline defaults that apply to all clients
> default_clients_opts = [
>     cfg.StrOpt('endpoint_type',
>                default='publicURL',
>
> 
> My questions:
>
> 1. Shouldn't we use the internalURL as the default instead?  In a typical
> case, the controller node sits behind a load-balancer, and the IPs for the
> publicURLs are held by the load-balancer. The controller node (and so
> heat-engine) might not have access to the publicURL at all.
>
> 2. Instead of creating an "endpoint_type" entry in heat.conf for each and
> every service, is there a simpler way to force using the internalURL?
>
> Thanks in advance,
> Attila
>
>


Re: [openstack-dev] [Fuel] [Puppet] Potential critical issue, due Puppet mix stderr and stdout while execute commands

2015-10-22 Thread Matt Fischer
On Thu, Oct 22, 2015 at 12:52 AM, Sergey Vasilenko 
wrote:

>
> On Thu, Oct 22, 2015 at 6:16 AM, Matt Fischer 
> wrote:
>
>> I thought we had code in other places that split out stderr and only
>> logged it if there was an actual error but I cannot find the reference now.
>> I think that matches the original proposal. Not sure I like idea #3.
>
>
> Matthew, this topic is not about SSL. ANY warnings, ANY output to stderr
> from the CLI may corrupt the work of the providers in the puppet-* modules
> for the openstack components.
>
> IMHO it's a very serious bug that potentially affects the openstack puppet
> modules.
>
> I see 3 ways to fix it:
>
>    1. Long way. A big patch to puppet core to add the ability to collect
>    stderr and stdout separately. But most existing puppet providers expect
>    stderr and stdout to be mixed when handling execution errors (non-zero
>    return code). Such a patch would break backward compatibility if
>    enabled by default.
>    2. Middle way. We should write code to redefine the 'commands' method.
>    The new commands should collect stderr and stdout separately, but if an
>    error happens, return stderr (with the ability to access stdout too).
>    3. Short way. Modify the existing providers to use JSON output instead
>    of plain text or CSV. JSON output can be cleanly separated from any
>    garbage (warnings). I made this patch as an example of this way:
>    https://review.openstack.org/#/c/238156/ . Anyway, JSON is a more
>    formalized format for data exchange than plain text.
>
> IMHO way #1 is the best solution, but not an easy one.
>
>
I must confess that I'm a bit confused about this. It wasn't a secret that
we're calling out to commands and parsing the output. It's been discussed
over and over on this list as recently as last week, so this has been a
known possible issue for quite a long time. In my original email I was
agreeing with you, so I'm not sure why we're arguing now. Anyway...

I think we need to split stderr and stdout and log stderr on errors, your
idea #2. Using JSON output, as openstack-client can do, does not solve this
problem for us; you can still end up with a bunch of junk on stderr.

This would be a good quick discussion in Tokyo if you guys will be there.


Re: [openstack-dev] [Fuel] [Puppet] Potential critical issue, due Puppet mix stderr and stdout while execute commands

2015-10-21 Thread Matt Fischer
I thought we had code in other places that split out stderr and only logged
it if there was an actual error, but I cannot find the reference now. I
think that matches the original proposal. I'm not sure I like idea #3.

On Wed, Oct 21, 2015 at 9:21 AM, Stanislaw Bogatkin 
wrote:

> I spoke with Sergii about this and prepared a patch to get rid of the
> SecurityWarning [0] - it was easy. But we can't get rid of the
> InsecurePlatformWarning so easily. I see the following options:
> 1. Update the python version as [1] suggests - that would be a hard task
> 2. Downgrade the urllib3 version to one without such warnings - a bad idea,
> in my opinion
> 3. Rewrite the code to use a non-standard ssl python module (pyOpenSSL, for
> example) - may be a massive task
> 4. Use something like 2>/dev/null to suppress stderr when calling the
> command - doesn't look good, because the problem can appear in other places
> (I saw similar problems with the keystone provider, for example)
> 5. Rewrite the code to split stderr/stdout, as Sergey proposed - the most
> reasonable idea, in my opinion.
>
> [0] https://review.openstack.org/#/c/237379
> [1]
> https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
>
>
> On Wed, Oct 21, 2015 at 10:02 AM, Sergey Vasilenko <
> svasile...@mirantis.com> wrote:
>
>> Hi, guys!
>>
>> Now I have observed a potentially dangerous situation in the providers of
>> the puppet-neutron module. I want to share the details, because modules
>> other than puppet-neutron may be broken by warnings from the OpenStack
>> CLI utilities.
>>
>>
>>  After updating the urllib3 library in my lab, commands like 'neutron
>> net-list' began to throw warnings, like:
>>
>>> root@node-2:~# neutron net-list
>>> /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90:
>>> InsecurePlatformWarning: A true SSLContext object is not available. This
>>> prevents urllib3 from configuring SSL appropriately and may cause certain
>>> SSL connections to fail. For more information, see
>>> https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
>>> .
>>>   InsecurePlatformWarning
>>> /usr/lib/python2.7/dist-packages/urllib3/connection.py:251:
>>> SecurityWarning: Certificate has no `subjectAltName`, falling back to check
>>> for a `commonName` for now. This feature is being removed by major browsers
>>> and deprecated by RFC 2818. (See
>>> https://github.com/shazow/urllib3/issues/497 for details.)
>>>   SecurityWarning
>>>
>>> +--+---+---+
>>> | id   | name  | subnets
>>>   |
>>>
>>> +--+---+---+
>>> | 9e1c0866-51f0-4659-8d5c-1c5d0843dab4 | net04_ext |
>>> 29c952ec-2a13-46fc-a8a1-6e2468a92a95 172.18.171.0/24  |
>>> | d70b399b-668b-4861-b092-4876ec65df60 | net04 |
>>> b87fbfd1-0e52-4ab6-8987-286ef0912d1f 192.168.111.0/24 |
>>>
>>> +--+---+---+
>>>
>>
>> root@node-2:~#
>>
>>
>> Such urllib3-based warnings are only a particular case. Warnings may
>> appear for other reasons when calling any of the OpenStack utilities.
>>
>> Such warnings break the work of the puppet-neutron manifests:
>>
>>> 2015-10-20 16:42:11 +
>>> /Stage[main]/Main/Openstack::Network::Create_network[net04]/Neutron_network[net04]
>>> (info): Evaluated in 5.51 seconds
>>> 2015-10-20 16:42:11 + Puppet (debug): Prefetching neutron resources
>>> for neutron_subnet
>>> 2015-10-20 16:42:11 + Puppet (debug): Executing '/usr/bin/neutron
>>> subnet-list --format=csv --column=id --quote=none'
>>> 2015-10-20 16:42:13 + Puppet (debug): Executing '/usr/bin/neutron
>>> subnet-show --format=shell InsecurePlatformWarning'
>>> 2015-10-20 16:42:16 + Puppet::Type::Neutron_subnet::ProviderNeutron
>>> (notice): Unable to complete neutron request due to non-fatal error:
>>> "Execution of '/usr/bin/neutron subnet-show --format=shell
>>> InsecurePlatformWarning' returned 1:
>>> /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90:
>>> InsecurePlatformWarning: A true SSLContext object is not available. This
>>> prevents urllib3 from configuring SSL appropriately and may cause certain
>>> SSL connections to fail. For more information, see
>>> https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
>>> InsecurePlatformWarning
>>> /usr/lib/python2.7/dist-packages/urllib3/connection.py:251:
>>> SecurityWarning: Certificate has no `subjectAltName`, falling back to check
>>> for a `commonName` for now. This feature is being removed by major browsers
>>> and deprecated by RFC 2818. (See
>>> https://github.com/shazow/urllib3/issues/497 for details.)
>>>   SecurityWarningUnable to find subnet with name
>>> 'InsecurePlatformWarning'
>>> ". Retrying for 7 sec.
>>
>>  .
>>
>> Unable to find subnet with name 'InsecurePlatformWarning'
>>> ". Retrying 

Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-15 Thread Matt Fischer
On Thu, Oct 15, 2015 at 4:10 AM, Vladimir Kuklin 
wrote:

> Gilles,
>
> 5xx errors like 503 and 502/504 could always be intermittent operational
> issues. E.g. when you access your keystone backends through some proxy and
> there is a connectivity issue between the proxy and backends which
> disappears in 10 seconds, you do not need to rerun the puppet completely -
> just retry the request.
>
> Regarding "REST interfaces for all Openstack API" - this is very close to
> another topic that I raised ([0]) - using native Ruby application and
> handle the exceptions. Otherwise whenever we have an OpenStack client
> (generic or neutron/glance/etc. one) sending us a message like '[111]
> Connection refused' this message is very much determined by the framework
> that OpenStack is using within this release for clients. It could be
> `requests` or any other type of framework which sends different text
> message depending on its version. So it is very bothersome to write a bunch
> of 'if' clauses or gigantic regexps instead of handling simple Ruby
> exception. So I agree with you here - we need to work with the API
> directly. And, by the way, if you also support switching to native Ruby
> OpenStack API client, please feel free to support movement towards it in
> the thread [0]
>
> Matt and Gilles,
>
> Regarding puppet-healthcheck - I do not think that puppet-healthcheck
> handles exactly what I am mentioning here - it is not running at exactly
> the same time as we run the request.
>
> E.g. 10 seconds ago everything was OK, then we had a temporary
> connectivity issue, then everything is ok again in 10 seconds. Could you
> please describe how puppet-healthcheck can help us solve this problem?
>


You are right, it probably won't. At that point you are using puppet to
work around some fundamental issues in your OpenStack deployment.


>
> Or another example - there was an issue with keystone accessing the token
> database when you have several keystone instances running, or there was
> some desync between these instances, e.g. you fetched the token from
> keystone #1 and then you verify it against keystone #2. Keystone #2 had
> some issues verifying it, not because the token was bad, but because
> keystone #2 itself had some issues. We would get a 401 error, and instead
> of rerunning puppet we would just need to handle this issue locally by
> retrying the request.
>
> [0] http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423
>

Another one that is a deployment architecture problem. We solved this by
configuring the load balancer to direct keystone traffic to a single db
node; now we solve it with Fernet tokens. If you have the specific issue
above, it's going to manifest in all kinds of strange ways and can even
affect control services like neutron/nova as well. That means even if we
get puppet to pass with a bunch of retries, OpenStack is not healthy and
the users will not be happy about it.

I don't want to give the impression that I am completely opposed to
retries, but on the other hand, when my deployment is broken, I want to
know quickly, not after 10 minutes of retries, so we need to balance that.


Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-14 Thread Matt Fischer
On Thu, Oct 8, 2015 at 5:38 AM, Vladimir Kuklin 
wrote:

> Hi, folks
>
> * Intro
>
> Per our discussion at Meeting #54 [0] I would like to propose the uniform
> approach of exception handling for all puppet-openstack providers accessing
> any types of OpenStack APIs.
>
> * Problem Description
>
> While working on Fuel during deployment of multi-node HA-aware
> environments, we faced many intermittent operational issues, e.g.:
>
> * 401/403 authentication failures when we were scaling the OpenStack
> controllers, due to a difference in hashing view between the keystone
> instances
> * 503/502/504 errors due to temporary connectivity issues
> * non-idempotent operations like deletion or creation - e.g. if you are
> deleting an endpoint and someone is deleting it on another node and you
> get a 404, you should continue with success instead of failing. A 409
> Conflict error should also signal us to re-fetch the resource parameters
> and then decide what to do with them.
>
> Obviously, it is not optimal to rerun puppet to correct such errors when
> we can just handle an exception properly.
>
> * Current State of Art
>
> There is some exception handling, but it does not cover all the
> aforementioned use cases.
>
> * Proposed solution
>
> Introduce a library of exception handling methods which should be the same
> for all puppet openstack providers, as these exceptions seem to be generic.
> Then, for each of the providers we can introduce provider-specific
> libraries that will inherit from this one.
>
> Our mos-puppet team could add this to their backlog, work on it upstream
> or downstream, and propose it upstream.
>
> What do you think on that, puppet folks?
>
> [0]
> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-06-15.00.html
>

I think that we should look into some solutions here, as I'm generally for
something we can solve once and re-use. Currently we solve some of this at
TWC by serializing our deploys and disabling puppet site-wide while we do
so. This avoids the issue of Keystone on one node removing an endpoint
while the other nodes (which still have old code) keep trying to add it
back.

For connectivity issues especially after service restarts, we're using
puppet-healthcheck [0] and I'd like to discuss that more in Tokyo as an
alternative to explicit retries and delays. It's in the etherpad so
hopefully you can attend.

[0] - https://github.com/puppet-community/puppet-healthcheck
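
For example, a hedged sketch (the resource name and parameters are assumed
from the puppet-healthcheck README):

    # Block until the keystone API actually answers after a restart,
    # rather than retrying individual provider requests.
    http_conn_validator { 'keystone-api':
      host    => '127.0.0.1',
      port    => '5000',
      timeout => 60,
    }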


Re: [openstack-dev] [puppet] Proposing Denis Egorenko core

2015-10-13 Thread Matt Fischer
On Tue, Oct 13, 2015 at 2:29 PM, Emilien Macchi  wrote:

> Denis Egorenko (degorenko) is working on Puppet OpenStack modules for
> quite some time now.
>
> Some statistics [1] about his contributions (last 6 months):
> * 270 reviews
> * 49 negative reviews
> * 216 positive reviews
> * 36 disagreements
> * 30 commits
>
> Besides the stats, Denis is always here on IRC, participating in meetings,
> helping our group discussions, and always being helpful to our community.
>
> I honestly think Denis is on the right path to become a good core team
> member: he has strong knowledge of OpenStack deployments, knows enough
> about our coding style, and his involvement in the project is really
> great. He's also a huge consumer of our modules, since he's working on Fuel.
>
> I would like to open the vote to promote Denis to be part of the Puppet
> OpenStack core reviewers.
>
> [1] http://stackalytics.com/report/contribution/puppetopenstack-group/180
> --
> Emilien Macchi
>
>
>
Denis has given me some great feedback on reviews and has shown a good
understanding of puppet-openstack.

+1


Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-13 Thread Matt Fischer
From a technical point of view, not forking and using a native library
makes total sense. I think it would likely be faster and certainly cleaner
than parsing output. Unfortunately, I don't think that we have the resources
to actively maintain the library. I think that's the main blocker for me.

On Tue, Oct 13, 2015 at 7:13 AM, Vladimir Kuklin 
wrote:

> Puppetmaster and Fuelers,
>
> Last week I mentioned that I would like to bring up the topic of using a
> native Ruby OpenStack client within the providers.
>
> Emilien told me that I was already late and that the decision had been made
> that puppet-openstack would not work with Aviator, based on [0]. I went
> through this thread and did not find any unresolvable issues with using
> Aviator, compared with the potential benefits it could have brought.
>
> What I actually saw was this:
>
> * Pros
>
> 1) It is a native ruby client
> 2) We can import it in puppet and use all the power of Ruby
> 3) We will not need to have a lot of forks/execs for puppet
> 4) You are relying on negotiated, structured output provided by the API
> (JSON) instead of introducing workarounds for client output like [1]
>
> * Cons
>
> 1) Aviator is not actively supported
> 2) Aviator does not track all the upstream OpenStack features, while the
> native OpenStack client does support them
> 3) The Ruby community is not really interested in OpenStack (this one is
> arguable, I think)
>
> * Proposed solution
>
> While I completely understand all the cons against using Aviator right
> now, I think the pros above are essential enough to change our minds and
> invest our own resources into creating a really good OpenStack binding in
> Ruby.
> Some say that there is not much Ruby involvement in OpenStack. But we are
> actually working with Puppet/Ruby and are involved in the community. So
> why shouldn't we own this ourselves and lead by example here?
>
> I understand that many of you already have a lot of things on your
> plates and cannot, or would not want to, support something like an
> additional library when the native OpenStack client is working reasonably
> well for you. So let me propose the following scheme to get support for a
> native Ruby client for OpenStack:
>
> 1) we (community) have these resources (speaking of the company I am
> working for, we at Mirantis have a set of guys who could be very interested
> in working on Ruby client for OpenStack)
> 2) we gradually improve the Aviator code base to the level where it
> eliminates the issues mentioned in the 'Cons' section
> 3) we introduce an additional set of providers and allow users and operators
> to pick whichever they want
> 4) we leave OpenStackClient as the default
>
> Would you support it and allow such code to be merged into upstream
> puppet-openstack modules?
>
>
> [0]
> https://groups.google.com/a/puppetlabs.com/forum/#!searchin/puppet-openstack/aviator$20openstackclient/puppet-openstack/GJwDHNAFVYw/ayN4cdg3EW0J
> [1]
> https://github.com/openstack/puppet-swift/blob/master/lib/puppet/provider/swift_ring_builder.rb#L21-L86
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>


Re: [openstack-dev] [puppet] WARNING - breaking backwards compatibility in puppet-keystone

2015-10-07 Thread Matt Fischer
I thought the agreement was that the 'Default' domain would be assumed so
that we didn't break backwards compatibility?
On Oct 7, 2015 10:35 AM, "Rich Megginson"  wrote:

> tl;dr You must specify a domain when using domain scoped resources.
>
> If you are using domains with puppet-keystone, there is a proposed patch
> that will break backwards compatibility.
>
> https://review.openstack.org/#/c/226624/ Replace indirection calls
>
> "Indirection calls are replaced with #fetch_project and #fetch_user methods
> using python-openstackclient (OSC).
>
> Also removes the assumption that if a resource is unique within a domain
> space
> then the domain doesn't have to be specified."
>
> It is the last part which is causing backwards compatibility to be
> broken.  This patch requires that a domain scoped resource _must_ be
> qualified with the domain name if _not_ in the 'Default' domain.
> Previously, you did not have to qualify a resource name with the domain if
> the name was unique in _all_ domains.  The problem was that this code relied
> heavily on puppet indirection, and was complex and difficult to maintain.
> We removed it in favor of a very simple implementation: if the name is not
> qualified with a domain, it must be in the 'Default' domain.
>
> Here is an example from puppet-heat - the 'heat_admin' user has been
> created in the 'heat_stack' domain previously.
>
> ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
>   'roles' => ['admin'],
> })
>
> This means "assign the user 'heat_admin' in the unspecified domain to have
> the domain scoped role 'admin' in the 'heat_stack' domain". It is a domain
> scoped role, not a project scoped role, because in "@::heat_stack" there is
> no project, only a domain. Note that the domain for the 'heat_admin' user
> is unspecified. In order to specify the domain you must use
> 'heat_admin::heat_stack@::heat_stack'. This is the recommended fix - to
> fully qualify the user + domain.
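>
> For example (an illustrative rewrite of the resource above using the fully
> qualified name):
>
> ensure_resource('keystone_user_role', 'heat_admin::heat_stack@::heat_stack', {
>   'roles' => ['admin'],
> })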
>
> The breakage manifests itself like this, from the logs:
>
> 2015-10-02 06:07:39.574 | Debug: Executing '/usr/bin/openstack user
> show --format shell heat_admin --domain Default'
> 2015-10-02 06:07:40.505 | Error:
> /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat]:
> Could not evaluate: No user heat_admin with domain  found
>
> This is from the keystone_user_role code. Since the role user was
> specified as 'heat_admin' with no domain, the keystone_user_role code looks
> for 'heat_admin' in the 'Default' domain and can't find it, and raises an
> error.
>
> Right now, the only way to specify the domain is by adding '::domain_name'
> to the user name, as 'heat_admin::heat_stack@::heat_stack'.  Sofer is
> working on a way to add the domain name as a parameter of
> keystone_user_role - https://review.openstack.org/226919 - so in the near
> future you will be able to specify the resource like this:
>
>
> ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
>   'roles' => ['admin'],
>   'user_domain_name' => 'heat_stack',
> })
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-30 Thread Matt Fischer
Thanks for summarizing this, Mark. What's the best way to get feedback about
this to the TC? I'd love to see some of the items, which I think are common
sense for anyone who can't just blow away devstack and start over, get added
for consideration.

On Tue, Sep 29, 2015 at 11:32 AM, Mark Voelker  wrote:

>
>
>
>
> > On Sep 29, 2015, at 12:36 PM, Matt Fischer  wrote:
> >
> >
> >
> > I agree with John Griffith. I don't have any empirical evidence to back
> > my "feelings" on that one, but it's true that we weren't able to enable
> > Cinder v2 until now.
> >
> > Which makes me wonder: when can we actually deprecate an API version? I
> > *feel* we are quick to jump to deprecation when the replacement isn't
> > 100% ready and won't be for several versions.
> >
> > --
> > Mathieu
> >
> >
> > I don't think it's too much to ask that versions can't be deprecated
> > until the new version is 100% working, passing all tests, and the clients
> > (at least the python-xxxclients) can handle it without issues. Ideally
> > I'd also like to throw in the criterion that devstack, rally, tempest,
> > and other services are all using and exercising the new API.
> >
> > I agree that things feel rushed.
>
>
> FWIW, the TC recently created an assert:follows-standard-deprecation tag.
> Ivan linked to a thread in which Thierry asked for input on it, but FYI the
> final language as approved last week [1] is a bit different from what was
> originally proposed.  It now requires one release plus 3 linear months of
> deprecated-but-still-present-in-the-tree as a minimum, and recommends at
> least two full stable releases for significant features (an entire API
> version would undoubtedly fall into that bucket).  It also requires that a
> migration path be documented.  However, to Matt’s point, it doesn’t contain
> any language that says specific things like:
>
> In the case of major API version deprecation:
> * $oldversion and $newversion must both work with
> [cinder|nova|whatever]client and openstackclient during the deprecation
> period.
> * It must be possible to run $oldversion and $newversion concurrently on
> the servers to ensure end users don’t have to switch overnight.
> * Devstack uses $newversion by default.
> * $newversion works in Tempest/Rally/whatever else.
>
> What it *does* do is require that a thread be started here on
> openstack-operators [2] so that operators can provide feedback.  I would
> hope that feedback like “I can’t get clients to use it so please don’t
> remove it yet” would be taken into account by projects, which seems to be
> exactly what’s happening in this case with Cinder v1.  =)
>
> I’d hazard a guess that the TC would be interested in hearing about
> whether you think that plan is a reasonable one (and given that TC election
> season is upon us, candidates for the TC probably would too).
>
> [1] https://review.openstack.org/#/c/207467/
> [2]
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
>
> At Your Service,
>
> Mark T. Voelker
>
>


Re: [openstack-dev] [ops] Operator Local Patches

2015-09-30 Thread Matt Fischer
Is the purge-deleted command a replacement for nova-manage db archive-deleted?
The latter hasn't worked for several cycles, so I assume it's abandoned.
On Sep 30, 2015 4:16 PM, "Matt Riedemann" 
wrote:

>
>
> On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:
>
>> Hello All,
>>
>> We have some pretty good contributions of local patches on the etherpad.
>> We are going through them right now and trying to group patches that
>> multiple people are carrying with patches that people may not be carrying
>> but that solve a problem they are running into.  If you can take some
>> time and either add your own local patches to the etherpad or add +1's
>> next to the patches that are laid out, it would help us immensely.
>>
>> The etherpad can be found at:
>> https://etherpad.openstack.org/p/operator-local-patches
>>
>> Thanks for your help!
>>
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: "Kris G. Lindgren"
>> Date: Tuesday, September 22, 2015 at 4:21 PM
>> To: openstack-operators
>> Subject: Re: Operator Local Patches
>>
>> Hello all,
>>
>> Friendly reminder: If you have local patches and haven't yet done so,
>> please contribute to the etherpad at:
>> https://etherpad.openstack.org/p/operator-local-patches
>>
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: "Kris G. Lindgren"
>> Date: Friday, September 18, 2015 at 4:35 PM
>> To: openstack-operators
>> Cc: Tom Fifield
>> Subject: Operator Local Patches
>>
>> Hello Operators!
>>
>> During the ops meetup in Palo Alto we were talking about sessions for
>> Tokyo. A session that I proposed, which got a bunch of +1's, was about
>> local patches that operators were carrying.  From my experience this is
>> done either to implement business logic, fix assumptions in projects
>> that do not apply to your implementation, implement business
>> requirements that are not yet implemented in OpenStack, or fix
>> scale-related bugs.  What I would like to do is get a working group
>> together to do the following:
>>
>> 1.) Document local patches that operators have (even those that are in
>> gerrit right now waiting to be committed upstream)
>> 2.) Figure out commonality in those patches
>> 3.) Either upstream the common fixes to the appropriate projects or
>> figure out if a hook can be added to allow people to run their code at
>> that specific point
>> 4.) 
>> 5.) Profit
>>
>> To start this off, I have documented every patch that GoDaddy is
>> running, along with a description of what it does and why we did it
>> (where needed) [1].  What I am asking is that the operator community
>> please update the etherpad with the patches that you are running, so
>> that we have a good starting point for discussions in Tokyo and beyond.
>>
>> [1] - https://etherpad.openstack.org/p/operator-local-patches
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>>
> I saw this originally on the ops list and it's a great idea - cat herding
> the bazillion ops patches and seeing what common things rise to the top
> would be helpful.  Hopefully some of that can then be pushed into the
> projects.
>
> There are a couple of things I could note that are specifically operator
> driven which could use eyes again.
>
> 1. purge deleted instances from nova database:
>
>
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html
>
> The spec is approved for Mitaka and the code is out for review.  If people
> could test the change out, it'd be helpful to vet its usefulness.
>
> 2. I'm trying to revive a spec that was approved in liberty but the code
> never landed:
>
> https://review.openstack.org/#/c/226925/
>
> That's for force-resetting quotas for a project/user so that on the next
> pass they get recalculated. A question came up about making the user
> optional in that command, so it's going to require a bit more review
> before we re-approve it for Mitaka, since the design changes slightly.
>
> 3. mgagne was good enough to propose a patch upstream to neutron for a
> script he had out of tree:
>
> https://review.openstack.org/#/c/221508/
>
> That's a tool to delete empty Linux bridges.  The neutron linuxbridge
> agent used to remove those automatically, but it caused race problems with
> nova so that behavior was removed; it'd still be good to have a tool to
> remove them as needed.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>

Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-29 Thread Matt Fischer
>
>
>
> I agree with John Griffith. I don't have any empirical evidence to back
> my "feelings" on that one, but it's true that we weren't able to enable
> Cinder v2 until now.
>
> Which makes me wonder: when can we actually deprecate an API version? I
> *feel* we are quick to jump to deprecation when the replacement isn't
> 100% ready and won't be for several versions.
>
> --
> Mathieu
>


I don't think it's too much to ask that versions can't be deprecated until
the new version is 100% working, passing all tests, and the clients (at
least the python-xxxclients) can handle it without issues. Ideally I'd also
like to throw in the criterion that devstack, rally, tempest, and other
services are all using and exercising the new API.

I agree that things feel rushed.


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Matt Fischer
Yes, people are probably still using it. Last time I tried to use V2 it
didn't work because the clients were broken, and then it went back to the
bottom of my to-do list. Is this mess fixed?

http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html

On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny  wrote:

> Hi all,
>
> As you may know, we've got 2 APIs in Cinder: v1 and v2. The Cinder v2 API
> was introduced in Grizzly and the v1 API has been deprecated since Juno.
>
> Now that [1] is merged, Cinder API v1 is disabled in the gates by default.
> We've got a bug filed [2] to remove the Cinder v1 API entirely.
>
>
> According to the deprecation policy [3], it looks like we are OK to remove
> it. But I would like to ask Cinder API users whether any of them still use
> API v1. Should we remove it entirely in the Mitaka release or just disable
> it by default in cinder.conf?
>
> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it
> asap.
>
> [1] https://review.openstack.org/194726
> [2] https://bugs.launchpad.net/cinder/+bug/1467589
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>
> Regards,
> Ivan Kolodyazhny
>


Re: [openstack-dev] [puppet] Fwd: Action required: stackforge/puppet-openstack project move

2015-09-27 Thread Matt Fischer
I'm not sure what value it has anymore, but why not just make it read-only?
On Sep 27, 2015 6:09 PM, "Emilien Macchi"  wrote:

> should we delete it?
>
> FYI: the module has been deprecated since the Juno release.
>
> I vote for yes.
>
>
>  Forwarded Message 
> Subject: Action required: stackforge/puppet-openstack project move
> Date: Fri, 25 Sep 2015 21:57:10 +
> From: OpenStack Infrastructure Team 
> To: clayton.one...@twcable.com, coll...@gazlene.net, bod...@gmail.com,
> dpri...@redhat.com, emil...@redhat.com, francois.charl...@redhat.com,
> mga...@iweb.com, m...@mattfischer.com, wop...@gmail.com,
> sba...@redhat.com, xingc...@unitedstack.com, yguen...@redhat.com
>
> You appear to be associated with the stackforge/puppet-openstack project.
>
> The stackforge/ git repository namespace is being retired[1], and all
> projects within need to move to the openstack/ namespace or, in the
> case of inactive projects, identified as such and made read-only.
>
> For more background information, see this mailing list post and TC
> resolution:
>
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>
>
> http://governance.openstack.org/resolutions/20150615-stackforge-retirement.html
>
> To ensure we have correctly identified all of the projects, we have
> created a wiki page listing the projects that should be moved and the
> projects that should be retired.  You may find it here:
>
>   https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
>
> Please add the stackforge/puppet-openstack project to the appropriate
> list on this page ("Active Projects to Move" or "Inactive Projects to
> Retire") as soon as possible.
>
> Projects that have not self-categorized by Friday October 2 will be
> assumed to be inactive and placed on the list of "Inactive Projects to
> Retire".
>
> Thank you for attending to this promptly,
>
> The OpenStack Infrastructure Team
>
>
>


Re: [openstack-dev] [Openstack-operators] [puppet] feedback request about puppet-keystone

2015-09-27 Thread Matt Fischer
On Fri, Sep 25, 2015 at 11:01 AM, Emilien Macchi  wrote:
>
>
> So after 5 days, here is a bit of feedback (13 people did the poll [1]):
>
> 1/ Providers
> Except for one, most people are managing only a small number of Keystone
> users/tenants.
> I would like to know if it's because the current implementation (using
> openstackclient) is too slow or just because they don't need to do that
> (they use bash, sdk, ansible, etc).
>

I'm generally thinking the opposite of you: I'd actually love to know the
use case for anyone managing more than a few users with Puppet. We have
service users and a few accounts for things like backups, monitoring, etc.
Beyond that, the accounts are for actual users, and they have to follow an
intake and project-creation process that also handles things like networks.
We found this workflow much easier to script with Python, and it can also
be done without a deploy. This is all handled by a manager after ensuring
that OpenStack is the right solution for them, finding project requirements,
etc. So I think this is what many folks are doing: their user creation
workflow just doesn't mesh with Puppet and their Puppet deployment process.
(This also discounts password management, something I don't want to be
doing for users with Puppet.)



>
> 2/ Features you want
>
> * "Configuration of federation via shibboleth":
> WIP on https://review.openstack.org/#/c/216821/
>
> * "Configuration of federation via mod_mellon":
> Will come after shibboleth I guess.
>
> * "Allow to configure websso"":
> See
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/enabling-federation.html
>
> * "Management of fernet keys":
> nothing *yet* in our roadmap AFAIK; adding it to our backlog [2]
>

I looked into this when we deployed but could not come up with a great
solution that didn't involve declaring a master node on which keys were
generated. Would be happy to re-investigate or work with someone on this.


>
> * "Support for hybrid domain configurations (e.g. using both LDAP and
> built in database backend)":
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html
>
> * "Full v3 API support (depends on other modules beyond just
> puppet-keystone)":
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html
>
> * "the ability to upgrade modules independently of one another, like we
> do in production - currently the puppet dependencies dictate the order
> in which we do upgrades more than the OpenStack dependencies":
>
> During the last Summit, we decided [3] as a community that our module
> branches will only support the OpenStack release of the branch,
> i.e. stable/kilo supports OpenStack 2015.1 (Kilo). Maybe you can deploy
> Juno or Liberty with it, but our community does not support that.
> To give a little background, we already discussed this [4] on the ML.
> Our interface is (or should be) 100% backward compatible for at least
> one full cycle, so you should not have issues when using a new version of
> the module with the same parameters. You do, though, need to keep your
> modules synchronized, especially because we have libraries and
> common providers (in puppet-keystone).
> AFAIK, OpenStack also works like this with openstack/requirements.
> I'm not sure you can run Glance Kilo with Oslo Juno (maybe I'm wrong).
> What you're asking for would be technically hard because we would have to
> support old versions of our providers & libraries, with a lot of
> backward-compatibility & legacy code in place, whereas we already do a
> good job with the parameters (interface).
> If you have a serious proposal, we would be happy to discuss the design
> and find a solution.
>
> 3/ What we could improve in Puppet Keystone (and in general, regarding
> the answers)
>
> * "(...) but it would be nice to be able to deploy master and the most
> recent version immediately rather than wait. Happy to get involved with
> that as our maturity improves and we actually start to use the current
> version earlier. Contribution is hard when you folk are ahead of the
> game, any fixes and additions we have are ancient already":
>
> I would like to understand the issues here:
> do you have problems contributing?
> Is your issue that "a feature is in master and not in stable/*"? If that's
> the case, it means we can do a better job with our backport policy,
> something we have already discussed, and I hope our group is aware
> of that.
>
> * "We were using keystone_user_role until we had huge compilation times
> due to the matrix (tenant x role x user) that is not scalable. With
> every single user and tenant on the environment, the catalog compilation
> increased. An improvement on that area will be useful."
>
> I understand the frustration and we are working on it [5].
>
> * "Currently does not handle deployment of hybrid domain configurations."
>
> Ditto:
>
> http://specs.openstack.org/openstack/puppet-opensta

Re: [openstack-dev] [puppet] service default value functions

2015-09-17 Thread Matt Fischer
Clint,

We're solving a different issue. Before, anytime someone added an option we
had this logic:

if $setting {
  project_config { 'setting':
    value => $setting,
  }
} else {
  project_config { 'setting':
    ensure => absent,
  }
}

This was annoying to have to write for every single setting, but without it,
nobody could remove settings that they didn't want and fall back to the
project defaults.

This discussion is about a way, in the libraries, to do the ensure => absent
behavior while dropping all the else {} clauses in all our modules.
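
As a minimal sketch (assuming Alex's proposed service_default() and
is_service_default() functions land as described, and using a hypothetical
myservice module), the per-setting logic collapses to a single declaration:

# The parameter defaults to the '<SERVICE DEFAULT>' sentinel returned by
# the proposed service_default() function; a provider that understands the
# sentinel removes the option, so no else clause is needed.
class myservice (
  $setting = service_default(),
) {
  myservice_config { 'DEFAULT/setting':
    value => $setting,
  }

  # and code that needs to branch on it can validate explicitly:
  if is_service_default($setting) {
    notice('setting will fall back to the project default')
  }
}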



On Thu, Sep 17, 2015 at 11:39 AM, Clint Byrum  wrote:

> Excerpts from Alex Schultz's message of 2015-09-16 09:53:10 -0700:
> > Hey puppet folks,
> >
> > Based on the meeting yesterday [0], I had proposed creating a parser
> > function called is_service_default [1] to validate whether a variable
> > matched our agreed-upon value of '<SERVICE DEFAULT>'.  This got me
> > thinking about how we might avoid using an arbitrary string throughout
> > the puppet code, since it cannot easily be validated.  So I tested
> > creating another puppet function named service_default [2] to replace
> > the use of '<SERVICE DEFAULT>' throughout all the puppet modules.  My
> > tests seemed to indicate that you can use a parser function as a
> > parameter default for classes.
> >
> > I wanted to send a note to gather comments around the second function.
> > When we originally discussed what to use to designate a service's
> > default configuration, I really didn't like using an arbitrary string
> > since it's hard to parse and validate. I think leveraging a function
> > might be better since it is something that can be validated via tests
> > and a syntax checker.  Thoughts?
>
> I'm confused.
>
> Why aren't you omitting the configuration option from the file if you
> want to use the default? Isn't that what undef is for?
>


Re: [openstack-dev] [puppet] monasca,murano,mistral governance

2015-09-14 Thread Matt Fischer
Emilien,

I've discussed this with some of the Monasca puppet guys here who are doing
most of the work. I think it probably makes sense to move to that model
now, especially since the pace of development has slowed substantially. One
previous blocker to having it in the "big tent" was the lack of test
coverage, so as long as we know that's a work in progress... I'd also like
to get Brad Kiein's thoughts on this, but he's out of town this week. I'll
ask him to reply when he is back.


On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi  wrote:

> Hi,
>
> As a reminder, Puppet modules that are part of OpenStack are documented
> here [1].
>
> I can see that the puppet-murano & puppet-mistral Gerrit permissions differ
> from those of other modules, because Mirantis helped to bootstrap the
> modules a few months ago.
>
> I think [2] the modules should be consistent in governance, and only the
> Puppet OpenStack group should be able to merge patches for these modules.
>
> Same question for puppet-monasca: if the Monasca team wants their module
> under the big tent, I think they'll have to change the Gerrit permissions
> so that only Puppet OpenStack can merge patches.
>
> [1]
> http://governance.openstack.org/reference/projects/puppet-openstack.html
> [2] https://review.openstack.org/223313
>
> Any feedback is welcome,
> --
> Emilien Macchi
>
>


Re: [openstack-dev] [puppet] Liberty Sprint Retrospective

2015-09-06 Thread Matt Fischer
I've updated the bug triage portion, but tomorrow is a US holiday, so you
may not see much traction there until Tuesday.

On Sun, Sep 6, 2015 at 6:59 PM, Emilien Macchi 
wrote:

> Hi,
>
> With the goal of continually improving the way we work together, I would
> like to build a sprint retrospective from what happened last week.
>
> The first step would be to gather data on this etherpad:
> https://etherpad.openstack.org/p/puppet-liberty-sprint-retrospective
> The second step, which we will probably do during our weekly meeting, would
> be to fill in the "Generate insights" and "Decide what to do" sections.
> Then I'll summarize our thoughts and close the retrospective with some
> documentation that will help us make the next sprint even better.
>
> Feel free to participate in this discussion; any feedback is welcome.
> --
> Emilien Macchi
>


  1   2   >