[openstack-dev] [tc] Status update, July 21st

2017-07-20 Thread Thierry Carrez
Hi!

This is the weekly update on Technical Committee initiatives. You can
find the full list of all open topics at:

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Technical Committee 2019 vision [1][2][3][4]
* Refreshed extra-ATC list for I18n team [5]
* New tags: stable:follows-policy for Congress
* Pike goals update: tricircle
* Queens goals update: keystone
* New repositories: os_tacker

[1] https://review.openstack.org/#/c/453262/
[2] https://review.openstack.org/#/c/473620/
[3] https://review.openstack.org/#/c/482152/
[4] https://review.openstack.org/#/c/482686/
[5] https://review.openstack.org/#/c/483452/

The significant item of the week is of course the publication of the
2019 TC vision, which we had been working on since March. The idea here
is to describe a desirable point in the future, to help inform our
collective decisions. Of course future Technical Committee memberships
may differ on what this desirable future looks like, and could come up
with new visions to replace this one. You can find the TC 2019 vision at:

https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html


== Open discussions ==

Flavio Percoco posted a new resolution about allowing teams to host
meetings in their own IRC channels:

https://review.openstack.org/485117

Project team additions are currently frozen until the opening of the
Queens cycle. A number of teams are in the backlog; please comment on
their respective reviews and threads if you have concerns about how well
they fit the OpenStack mission or the community principles:

Blazar: https://review.openstack.org/#/c/482860/
Glare: https://review.openstack.org/#/c/479285/
Gluon: https://review.openstack.org/463069
Stackube: https://review.openstack.org/462460

Finally, discussion is still in progress on two clarifications from John
Garbutt, where we continue to iterate on patchsets:

Decisions should be globally inclusive:
https://review.openstack.org/#/c/460946/

Describe what upstream support means:
https://review.openstack.org/440601


== Voting in progress ==

The long-standing "Declare plainly the current state of PostgreSQL in
OpenStack" resolution now has majority support, and will be approved
Monday unless new objections are raised:

https://review.openstack.org/#/c/427880/


== TC member actions for the coming week(s) ==

None that I can think of.


== Need for a TC meeting next Tuesday ==

No initiative is currently stuck, so there is no need for a TC meeting
next week.


Cheers!

-- 
Thierry Carrez (ttx)



[openstack-dev] [glance] priorities for the coming week (07/21-07/27)

2017-07-20 Thread Brian Rosmaita
As discussed at today's Glance meeting, here are the priorities for
the coming week.

(1)  The Pike release of the python-glanceclient
We'd like to have the release ready to go by Wednesday.
The list of patches to review is here:
https://etherpad.openstack.org/p/glance-client-priority-reviews-pike

(2)  The P-3 milestone for Glance
Note that the P-3 release also marks the Pike feature freeze.
- As we discussed at today's meeting, it looks like the only feature
work being done at the moment is associated with image import.  Watch
the mailing list for a notice of any patches that need reviews.
- A patch that needs review for clarity and correctness is the
release note for support for running Glance as a WSGI application in a
web server:
https://review.openstack.org/485913

And, of course, continue working on bugs!

cheers,
brian



Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-20 Thread James Slagle
On Thu, Jul 20, 2017 at 9:52 PM, James Slagle  wrote:
> On Thu, Jul 20, 2017 at 9:04 PM, Paul Belanger  wrote:
>> On Thu, Jul 20, 2017 at 06:21:22PM -0400, James Slagle wrote:
>>> Following up on the previous thread:
>>> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
>>>
>>> I wanted to share some work I did around the prototype I mentioned
>>> there. I spent a couple days exploring this idea. I came up with a
>>> Python script that, when run against an in-progress Heat stack, will
>>> pull all the server and deployment metadata out of Heat and generate
>>> ansible playbooks/tasks from the deployments.
>>>
>>> Here's the code:
>>> https://github.com/slagle/pump
>>>
>>> And an example of what gets generated:
>>> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
>>>
>>> If you're interested in any more detail, let me know.
>>>
>>> It signals the stack to completion with a dummy "ok" signal so that
>>> the stack will complete. You can then use ansible-playbook to apply
>>> the actual deployments (in the expected order, respecting the steps
>>> across all roles, and in parallel across all the roles).
>>>
>>> Effectively, this treats Heat as nothing but a yaml cruncher. When
>>> using it with deployed-server, Heat doesn't actually change anything
>>> on an overcloud node, you're only using it to generate ansible.
>>>
>>> Honestly, I think I will prefer the longer term approach of using
>>> stack outputs. Although, I am not sure of the end goal of that work
>>> and if it is the same as this prototype.
>>>
>> Sorry if this hasn't been asked before, but why don't you remove all of your
>> ansible-playbook logic from heat and write it directly as native
>> playbooks / roles? Then instead of having a tool that reads heat to
>> generate the playbooks / roles, you update heat just to directly call the
>> playbooks? Any dynamic information could be stored in the inventory or
>> passed via --extra-vars on the CLI?
>
> We must maintain backwards compatibility with our existing Heat-based
> interfaces (cli, api, templates). While that could probably be done
> with the approach you mention, it feels like it would be much more
> difficult, in that you'd need to effectively add the compatibility
> layer back on once the new pristine native ansible
> playbooks/roles were written. And it seems like it would be quite a
> lot of heat template work to translate existing interfaces to call
> into the new playbooks.
>
> Even then, any new playbooks written from scratch would have to be
> flexible enough to accommodate the old interfaces. On the surface, it
> feels like you may end up sacrificing a lot of your goals in your
> playbooks so you can maintain backwards compatibility anyways.
>
> The existing interface must be the first class citizen. We can't break
> those contracts, so we need ways to quickly iterate towards ansible.
> Writing all new native playbooks sounds like just writing a new
> OpenStack installer to me, and then making Heat call that so that it's
> backwards compatible.
>
> The focus on the interface flips that around so that you use existing
> systems and iterate them towards the end goal. Just my POV.
>
> FYI, there are other ongoing solutions as well such as existing
> ansible tasks directly in the templates today. These are much easier
> to reason about when it comes to generating the roles and playbooks,
> because it is direct Ansible syntax in the templates, so it's easier
> to see the origin of tasks and make changes.

I also wanted to mention that the Ansible tasks in the templates today
could be included with Heat's get_file function. In that case, as a
template developer you are basically writing a native Ansible tasks
file that could be included in an Ansible role.

The generation would still come into play when combining the tasks
into the role/playbook that is actually applied to a given server,
since that is all dynamic based on user input.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-20 Thread James Slagle
On Thu, Jul 20, 2017 at 9:04 PM, Paul Belanger  wrote:
> On Thu, Jul 20, 2017 at 06:21:22PM -0400, James Slagle wrote:
>> Following up on the previous thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
>>
>> I wanted to share some work I did around the prototype I mentioned
>> there. I spent a couple days exploring this idea. I came up with a
>> Python script that, when run against an in-progress Heat stack, will
>> pull all the server and deployment metadata out of Heat and generate
>> ansible playbooks/tasks from the deployments.
>>
>> Here's the code:
>> https://github.com/slagle/pump
>>
>> And an example of what gets generated:
>> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
>>
>> If you're interested in any more detail, let me know.
>>
>> It signals the stack to completion with a dummy "ok" signal so that
>> the stack will complete. You can then use ansible-playbook to apply
>> the actual deployments (in the expected order, respecting the steps
>> across all roles, and in parallel across all the roles).
>>
>> Effectively, this treats Heat as nothing but a yaml cruncher. When
>> using it with deployed-server, Heat doesn't actually change anything
>> on an overcloud node, you're only using it to generate ansible.
>>
>> Honestly, I think I will prefer the longer term approach of using
>> stack outputs. Although, I am not sure of the end goal of that work
>> and if it is the same as this prototype.
>>
> Sorry if this hasn't been asked before, but why don't you remove all of your
> ansible-playbook logic from heat and write it directly as native
> playbooks / roles? Then instead of having a tool that reads heat to
> generate the playbooks / roles, you update heat just to directly call the
> playbooks? Any dynamic information could be stored in the inventory or
> passed via --extra-vars on the CLI?

We must maintain backwards compatibility with our existing Heat-based
interfaces (cli, api, templates). While that could probably be done
with the approach you mention, it feels like it would be much more
difficult, in that you'd need to effectively add the compatibility
layer back on once the new pristine native ansible
playbooks/roles were written. And it seems like it would be quite a
lot of heat template work to translate existing interfaces to call
into the new playbooks.

Even then, any new playbooks written from scratch would have to be
flexible enough to accommodate the old interfaces. On the surface, it
feels like you may end up sacrificing a lot of your goals in your
playbooks so you can maintain backwards compatibility anyways.

The existing interface must be the first class citizen. We can't break
those contracts, so we need ways to quickly iterate towards ansible.
Writing all new native playbooks sounds like just writing a new
OpenStack installer to me, and then making Heat call that so that it's
backwards compatible.

The focus on the interface flips that around so that you use existing
systems and iterate them towards the end goal. Just my POV.

FYI, there are other ongoing solutions as well such as existing
ansible tasks directly in the templates today. These are much easier
to reason about when it comes to generating the roles and playbooks,
because it is direct Ansible syntax in the templates, so it's easier
to see the origin of tasks and make changes.

As we move forward on these approaches, if we end up with users
gravitating towards certain usage patterns, I think we'd consider
deprecating interfaces that are no longer seen as useful.

>
> Basically, we do this for zuulv2.5 today in openstack-infra (dynamically
> generate playbooks at run-time) and it is a large amount of work to debug
> issues.  In our case, we did it to quickly migrate from jenkins to ansible
> (since zuulv3 completely fixes this with native playbooks) and I wouldn't
> recommend operators do the same.  Not fun.

I'm not familiar with the technical reasoning there, but on the
surface it sounds similar to what some of our goals may be. We want to
quickly add some Ansible features and move in that direction.

We don't want to write all new roles and playbooks in Ansible, at
least I don't :) We can't even say definitively right now that native
Ansible is the interface we want long term. So I think it would be
premature to approach the problem from the angle of writing new native
playbooks.

Whether or not it ever becomes a full migration is, as I said, TBD, and
not even something we have to decide now. The decision would be more
driven by how people end up using it.

-- 
-- James Slagle
--



Re: [Openstack] [Mirantis] How to keep ntpd down

2017-07-20 Thread Jeremy Stanley
On 2017-07-21 05:56:48 +0530 (+0530), Raja T Nair wrote:
[...]
> Many other servers sync with this one too. Also only one
> controller had issues with time. Kind of stuck here, as I have no
> idea why one node's ntpd would fail :(
[...]

I've seen this from time to time over the years and it's nearly
always a bad RTC. Poor quality control is not unusual since the
manufacturers assume you either won't care or will use something
like NTP to keep things in reasonable sync, but sometimes you'll get
one which is well outside your ntpd's tolerance for drift and it'll
just flat refuse to update it. In those sorts of situations I've
pretty much always sent the server back or requested appropriate
replacement parts to service it myself on site.
-- 
Jeremy Stanley




Re: [Openstack] [Mirantis] How to keep ntpd down

2017-07-20 Thread John Petrini
To Brad's point - if your controllers are VMs you might also want to have
a look at Chrony https://chrony.tuxfamily.org/. It's supposed to perform
much better on virtual machines.

___

John Petrini

On Thu, Jul 20, 2017 at 9:20 PM, Brad Knowles 
wrote:

> On Jul 20, 2017, at 7:26 PM, Raja T Nair  wrote:
>
> > Thanks a lot for the reply, John.
> >
> > Yes I understand that time is really important for cluster setup, that's
> why I was panicking and looking for alternatives when I found time drifting
> while ntpd was still on.
> > So I was planning to do a ``ntpdate w.x.y.z '' every 2 mins in order to
> keep time in sync.
> >
> > Would want to investigate this. My upstream time server seems fine, it's
> on bare metal. Many other servers sync with this one too. Also only one
> controller had issues with time.
> > Kind of stuck here, as I have no idea why one node's ntpd would fail :(
>
> Doing a cron job with ntpdate will cause your time to bounce all over the
> place, and that will be even worse than what you've had so far.
>
> I've been a member of the NTP Public Services Project since 2003, and I've
> seen a lot of NTP problems over the years, especially on virtual machines.
> Historically, our advice was to not even run ntpd at all on a VM, but
> instead to run it on the bare hardware underneath, and then make sure that
> you're running the necessary hooks in the hypervisor and the guest OSes to
> pass good quality time up the stack to all the clients.
>
> I'm not sure if that is still the best advice or not -- I think it may
> depend on your hypervisor and your guest OSes.
>
> But if you do run ntpd on the guests, there are things you can do to
> measure larger-than-normal amounts of drift and compensate for that.  I
> would direct you to the mailing list questi...@lists.ntp.org for more
> information.
>
> --
> Brad Knowles 
>
>


[openstack-dev] [release] Release countdown for week R-5, July 21 - 28

2017-07-20 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-

Teams should be wrapping up Pike feature work, in preparation for
Feature Freeze for client libraries and services following the
cycle-with-milestones release model.


General Information
---

Next week is the deadline for releasing client libraries (meaning:
all libraries that are python-${PROJECT}client API client libraries).

stable/pike branches will be cut from the most recent Pike releases. So
if your master branch contains changes that you want to see in the Pike
release branch, you should definitely consider asking for a new client
library release.

python-designateclient, python-searchlightclient and python-swiftclient
haven't made a Pike release yet: if nothing is done by July 27, one
release will be forced (on master HEAD) so that we have something to cut
a stable branch from.

For deliverables following the cycle-with-milestones model, next week is
also when Feature Freeze will hit and the pike-3 milestone will be
tagged. After that date, only bugfixes should be accepted, in
preparation for producing the first release candidate. Feature freeze
exceptions may be granted up to RC1 for services, if
required to produce a clean and functional release. For those
deliverables, stable/pike branches will be created when the first
release candidate is tagged on master, and further release candidates
may be produced on the stable/pike branch until the final release date.

During all that period, StringFreeze is in effect, in order to let the
I18N team do the translation work in good conditions. The StringFreeze
is soft (allowing exceptions as long as they are discussed on the
mailing-list and deemed worth the effort). It becomes a hard
StringFreeze on August 10.

See all details at: https://releases.openstack.org/pike/schedule.html


Actions
---

stable/pike branches shall be created Friday for all non-client
libraries. You should expect 3 changes to be proposed for each: a
.gitreview update, a reno update and a tox.ini constraints URL update.
Please review those in priority so that the branch can be functional ASAP.


Upcoming Deadlines & Dates
--

Client libraries final releases: July 27
Pike-3 milestone (and Feature freeze): July 27
RC1 target deadline (and HardStringFreeze): August 10
Final Pike release: August 30
Queens PTG in Denver: Sept 11-15

-- 
Thierry Carrez (ttx)





Re: [Openstack] [Mirantis] How to keep ntpd down

2017-07-20 Thread Brad Knowles
On Jul 20, 2017, at 7:26 PM, Raja T Nair  wrote:

> Thanks a lot for the reply, John.
> 
> Yes I understand that time is really important for cluster setup, that's why 
> I was panicking and looking for alternatives when I found time drifting while 
> ntpd was still on.
> So I was planning to do a ``ntpdate w.x.y.z '' every 2 mins in order to keep 
> time in sync.
> 
> Would want to investigate this. My upstream time server seems fine, it's on
> bare metal. Many other servers sync with this one too. Also only one
> controller had issues with time.
> Kind of stuck here, as I have no idea why one node's ntpd would fail :(

Doing a cron job with ntpdate will cause your time to bounce all over the 
place, and that will be even worse than what you've had so far.

I've been a member of the NTP Public Services Project since 2003, and I've seen 
a lot of NTP problems over the years, especially on virtual machines.  
Historically, our advice was to not even run ntpd at all on a VM, but instead 
to run it on the bare hardware underneath, and then make sure that you're 
running the necessary hooks in the hypervisor and the guest OSes to pass good 
quality time up the stack to all the clients.

I'm not sure if that is still the best advice or not -- I think it may depend 
on your hypervisor and your guest OSes.

But if you do run ntpd on the guests, there are things you can do to measure 
larger-than-normal amounts of drift and compensate for that.  I would direct 
you to the mailing list questi...@lists.ntp.org for more information.
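
A minimal sketch of that kind of drift check (assuming the stock
"ntpq -pn" output format; the 100 ms threshold is an arbitrary choice):

import subprocess

MAX_OFFSET_MS = 100.0  # arbitrary alert threshold

def ntp_offset_ms():
    """Offset (ms) of the peer ntpd has actually selected."""
    out = subprocess.check_output(['ntpq', '-pn'], universal_newlines=True)
    for line in out.splitlines():
        if line.startswith('*'):           # '*' marks the selected sync peer
            return float(line.split()[8])  # the 'offset' column
    raise RuntimeError('ntpd has no selected sync peer')

if __name__ == '__main__':
    offset = ntp_offset_ms()
    if abs(offset) > MAX_OFFSET_MS:
        print('WARNING: clock offset %.1f ms' % offset)

Run from cron or a monitoring agent, something like this flags drift
before ntpd gives up on correcting it.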

--
Brad Knowles 





Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-20 Thread Paul Belanger
On Thu, Jul 20, 2017 at 06:21:22PM -0400, James Slagle wrote:
> Following up on the previous thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
> 
> I wanted to share some work I did around the prototype I mentioned
> there. I spent a couple days exploring this idea. I came up with a
> Python script that, when run against an in-progress Heat stack, will
> pull all the server and deployment metadata out of Heat and generate
> ansible playbooks/tasks from the deployments.
> 
> Here's the code:
> https://github.com/slagle/pump
> 
> And an example of what gets generated:
> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
> 
> If you're interested in any more detail, let me know.
> 
> It signals the stack to completion with a dummy "ok" signal so that
> the stack will complete. You can then use ansible-playbook to apply
> the actual deployments (in the expected order, respecting the steps
> across all roles, and in parallel across all the roles).
> 
> Effectively, this treats Heat as nothing but a yaml cruncher. When
> using it with deployed-server, Heat doesn't actually change anything
> on an overcloud node, you're only using it to generate ansible.
> 
> Honestly, I think I will prefer the longer term approach of using
> stack outputs. Although, I am not sure of the end goal of that work
> and if it is the same as this prototype.
> 
Sorry if this hasn't been asked before, but why don't you remove all of your
ansible-playbook logic from heat and write it directly as native playbooks /
roles? Then instead of having a tool that reads heat to generate the
playbooks / roles, you update heat just to directly call the playbooks? Any
dynamic information could be stored in the inventory or passed via
--extra-vars on the CLI?

Basically, we do this for zuulv2.5 today in openstack-infra (dynamically
generate playbooks at run-time) and it is a large amount of work to debug
issues.  In our case, we did it to quickly migrate from jenkins to ansible
(since zuulv3 completely fixes this with native playbooks) and I wouldn't
recommend operators do the same.  Not fun.

> And some of what I've done may be useful with that approach as well:
> https://review.openstack.org/#/c/485303/
> 
> However, I found this prototype interesting and worth exploring for a
> couple of reasons:
> 
> Regardless of the approach we take, I wanted to explore what an end
> result might look like. Personally, this illustrates what I kind of
> had in mind for an "end goal".
> 
> I also wanted to see if this was at all feasible. I envisioned some
> hurdles, such as deployments depending on output values of previous
> deployments, but we actually only do that in 1 place in
> tripleo-heat-templates, and I was able to work around that. In the end
> I used it to deploy an all in one overcloud equivalent to our
> multinode CI job, so I believe it's feasible.
> 
> It meets most of the requirements we're looking to get out of ansible.
> You can (re)apply just a single deployment, or a given deployment
> across all ResourceGroup members, or all deployments for a given
> server(s); it's easy to see what failed and for what servers, etc.
> 
> Finally, it's something we could deliver without much (any?) change
> in tripleo-heat-templates. Although I'm not trying to say it'd be a
> small amount of work to even do that, as this is a very rough
> prototype.
> 
> -- 
> -- James Slagle
> --
> 



Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Zane Bitter

On 19/07/17 23:19, Monty Taylor wrote:


Instance users do not solve this. Instance users can be built with this,
but instance users are themselves not sufficient. Instance users are 
only sufficient in single-cloud ecosystems where it is possible to grant 
permissions on all the resources in the single-cloud ecosystem to an 
instance. We are not a single-cloud ecosystem.


Good point. Actually, nobody lives in a single-cloud ecosystem any more. 
So the 'public' side of any hybrid-cloud arrangement (including the big 
3 public clouds, not just OpenStack) will always need a way to deal with 
this.


Nodepool runs in Rackspace's DFW region. It has accounts across nine 
different clouds. If this were only solved with Instance users we'd have 
to boot a VM in each cloud so that we could call the publicly-accessible 
REST APIs of the clouds to boot VMs in each cloud.


I'm glad you're here, because I don't spend a lot of time thinking about 
such use cases (if we can get cloud applications to work on even one 
cloud then I can retire to my goat farm happy) and this one would have 
escaped me :)


So let's boil this down to 4 types of 'users' who need to authenticate 
to a given cloud:


1) Actual, corporeal humans
2) Services that are part of the cloud itself (e.g. autoscaling)
3) Hybrid-cloud applications running elsewhere (e.g. nodepool)
4) Applications running in the cloud

Looking at how AWS handles these cases AIUI:

1) For each tenant there is a 'root' account with access to billing.
Best practice is not to create API credentials for this account at all. 
Instead, you create IAM Users for all of the humans who need to access 
the tenant and give permissions to them (bootstrapped by the root 
account) using IAM Policies. To make management easier, you can 
aggregate Users into Groups. If a user leaves the organisation, you 
delete their IAM User. If the owner leaves the organisation, somebody 
else becomes the owner and you rotate the root password.


2) Cloud services can be named as principals in IAM policies, so 
permissions can be given to them in the same way that they are to human 
users.


3) You create an IAM User for the application and give it the 
appropriate permissions. The credential they get is actually a private 
key, not a password, so in theory you could store it in an HSM that just 
signs stuff with it and not provide it directly to the application. 
Otherwise, the credentials are necessarily disclosed to the team 
maintaining the application. If somebody who has/had access to private 
key leaves, you need to rotate the credentials. It's possible to 
automate the mechanics of this, but ultimately it has to be triggered by 
a human using their own credentials otherwise it's turtles all the way 
down. The AWS cloud has no way of ensuring that you rotate the 
credentials at appropriate times, or even knowing when those times are.


4) Instance users. You can give permissions to a VM that you have 
created in the cloud. It automatically receives credentials in its 
metadata. The credentials expire quite rapidly and are automatically 
replaced with new ones, also accessible through the metadata server. The 
application just reads the latest credentials from metadata and uses 
them. If someone leaves the organisation, you don't care. If an attacker 
breaches your server, the damage is limited to a relatively short window 
once you've evicted them again. There's no way to do the Wrong Thing 
even if you're trying.
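
For concreteness, consuming those rotating credentials from inside an
instance looks roughly like this sketch (the well-known EC2 metadata
endpoint; error handling omitted):

import json
import urllib.request

CREDS_URL = ('http://169.254.169.254/latest/meta-data/'
             'iam/security-credentials/')

def current_credentials():
    # The first request lists the role name attached to the instance.
    role = urllib.request.urlopen(CREDS_URL, timeout=2).read().decode().strip()
    # The second returns short-lived keys for that role.
    doc = json.loads(urllib.request.urlopen(CREDS_URL + role, timeout=2).read())
    return doc['AccessKeyId'], doc['SecretAccessKey'], doc['Token']

Callers re-read before the advertised 'Expiration' instead of caching
long-term, which is what makes a leaked credential age out quickly.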


And in OpenStack:

1) Works great provided you only have one user per project. Your 
password may, and probably will, be shared with your billing account 
(public cloud), or will be shared with pretty much your whole life 
(private cloud). If multiple humans need to work on the project, you'll 
generally need to share passwords or do something out-of-band to set it 
up (e.g. open a ticket with IT). If somebody leaves the organisation, 
same deal.


Application credentials could greatly improve this in the public cloud 
scenario.


2) Cloud services can create trusts that allow them to act on behalf of 
a particular user. If that user leaves the organisation, your 
application is hosed until someone else redeploys it to get a new trust.


Persistent application credentials could potentially replace trusts and 
solve this problem, although they'd need to be stored somewhere more 
secure (i.e. Barbican) than trust IDs are currently stored. A better 
solution might be to allow the service user to be granted permissions by 
the forthcoming fine-grained authorisation mechanism (independently of 
an application credential) - but this would require changes to the 
Keystone policies, because it would currently be blocked by the 
Scoped-RBAC system.
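
For reference, creating one of those trusts with python-keystoneclient
looks roughly like the sketch below (the IDs and role name are invented
placeholders):

from keystoneauth1 import identity, session
from keystoneclient.v3 import client

auth = identity.Password(auth_url='https://keystone.example.com/v3',
                         username='alice', password='secret',
                         project_name='demo',
                         user_domain_id='default',
                         project_domain_id='default')
keystone = client.Client(session=session.Session(auth=auth))

# Delegate the 'member' role on one project to a service user. The trust
# dies with the trustor's account, which is exactly the problem above.
trust = keystone.trusts.create(trustor_user='TRUSTOR_USER_ID',
                               trustee_user='SERVICE_USER_ID',
                               project='PROJECT_ID',
                               role_names=['member'],
                               impersonation=True)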


3) The credentials are necessarily disclosed to the team maintaining the 
application. Your password may, and probably will, be shared with your 
billing account. If somebody leaves the organisation, you have to rotate 
the password. This 

Re: [openstack-dev] [watcher] Stepping down as Watcher spec core

2017-07-20 Thread Hidekazu Nakamura
Hi Antoine,

I am grateful for your support from my starting contributing to Watcher.
Thanks to you I am contributing to Watcher actively now. 

I wish you a happy life and a successful career.

Hidekazu Nakamura


> -Original Message-
> From: Antoine Cabot [mailto:antoinecabo...@gmail.com]
> Sent: Thursday, July 20, 2017 6:35 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [watcher] Stepping down as Watcher spec core
> 
> Hey guys,
> 
> It's been a long time since the last summit and our last discussions!
> I hope Watcher is going well and you are getting more traction
> everyday in the OpenStack community !
> 
> As you may guess, my last 2 months have been very busy with my
> relocation to Vancouver with my family. After 8 weeks of active job
> search in the cloud industry here in Vancouver, I've got a Senior
> Product Manager position at Parsable, a start-up leading the Industry
> 4.0 revolution. I will continue to deal with very large customers but
> in different industries (Oil & Gas, Manufacturing...) to build the
> best possible product, leveraging cloud and mobile technologies.
> 
> It was a great pleasure to lead the Watcher initiative from its
> infancy to the OpenStack Big Tent and be able to work with all of you.
> I hope to be part of another open source community in the near future
> but now, due to my new responsibilities, I need to step down as a core
> contributor to Watcher specs. Feel free to reach me in any case if I
> still hold restricted rights on launchpad or anywhere else.
> 
> I hope to see you all in Vancouver next year for the summit and be
> part of the traditional Watcher dinner (I will try to find the best
> place for you guys).
> 
> Cheers,
> 
> Antoine
> 


Re: [Openstack] [Mirantis] How to keep ntpd down

2017-07-20 Thread Raja T Nair
Thanks a lot for the reply, John.

Yes, I understand that time is really important for cluster setup; that's
why I was panicking and looking for alternatives when I found time drifting
while ntpd was still on.
So I was planning to do a ``ntpdate w.x.y.z '' every 2 mins in order to
keep time in sync.

Would want to investigate this. My upstream time server seems fine, it's on
bare metal. Many other servers sync with this one too. Also only one
controller had issues with time.
Kind of stuck here, as I have no idea why one node's ntpd would fail :(

Regards,
Raja.



On 20 July 2017 at 16:27, John Petrini  wrote:

> On all of the controllers? crm resource stop clone_p_ntp should do it.
> Although I can't imagine why you would want to do this. Time is very
> important in OpenStack (and Ceph if you are running it) which it sounds
> like you've already found out.
>
> The whole purpose of NTP is to keep your time in sync - if it's not doing
> that you should be looking for the root cause, not disabling it. You might
> want to start by looking at your upstream time servers that the controllers
> are using. This is configured in Fuel and the configuration is stored in
> /etc/ntp.conf on the controllers.
>
> I'd highly recommend setting up monitoring of ntp so you know when the
> clock starts to drift and can respond to it before it drifts too far and
> becomes a problem.
>
> ___
>
> John Petrini
>
>
> On Thu, Jul 20, 2017 at 6:29 AM, Raja T Nair  wrote:
>
>> Hello All,
>>
>> Mirantis 7.0
>>
>> I am trying to keep ntpd down and do a periodic ntpdate against a time
>> server.
>> This is because one of the controllers started to drift and services on
>> that node started to go down.
>>
>> But it seems that the ntpd daemon comes up after 10 sec every time I stop
>> it.
>> Is there a monitor running somewhere which brings it back?
>>
>> Please guide me on this and also tell me if I am doing something wrong.
>>
>> Regards,
>> Raja.
>>
>> --
>> :^)
>>
>


-- 
:^)


[Openstack-operators] [Large Deployment Team] July meeting - 7/21/2017 at 03:00 UTC

2017-07-20 Thread Matt Van Winkle
Hello LDT folks,

Just a reminder that our July meeting is in a few hours at 03:00 UTC in 
#openstack-operators

See you there!
VW


[openstack-dev] [keystone] [all] keystoneauth version discovery is here

2017-07-20 Thread Lance Bragstad
Happy Thursday,

We just released keystoneauth 3.0.0 [0], which contains a bunch of
built-in functionality to handle version discovery so that you don't
have to! Check out the documentation for all the details [1].

Big thanks to Eric and Monty for tackling this work, along with all the
folks who diligently reviewed it.


[0] https://review.openstack.org/#/c/485688/
[1] https://docs.openstack.org/keystoneauth/latest/using-sessions.html
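
For the curious, a small usage sketch (credentials and endpoint are
invented, and the Adapter min_version/max_version keywords reflect my
understanding of the new discovery support, so check the docs above):

from keystoneauth1 import adapter, identity, session

auth = identity.Password(auth_url='https://keystone.example.com/v3',
                         username='demo', password='secret',
                         project_name='demo',
                         user_domain_id='default',
                         project_domain_id='default')
sess = session.Session(auth=auth)

# The adapter performs endpoint and version discovery for the requested
# range, so callers no longer have to walk version documents by hand.
compute = adapter.Adapter(session=sess, service_type='compute',
                          min_version='2.1', max_version='2.latest')
print(compute.get('/servers').json())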






[openstack-dev] [TripleO] An experiment with Ansible

2017-07-20 Thread James Slagle
Following up on the previous thread:
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html

I wanted to share some work I did around the prototype I mentioned
there. I spent a couple days exploring this idea. I came up with a
Python script that, when run against an in-progress Heat stack, will
pull all the server and deployment metadata out of Heat and generate
ansible playbooks/tasks from the deployments.

Here's the code:
https://github.com/slagle/pump

And an example of what gets generated:
https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8

If you're interested in any more detail, let me know.

It signals the stack to completion with a dummy "ok" signal so that
the stack will complete. You can then use ansible-playbook to apply
the actual deployments (in the expected order, respecting the steps
across all roles, and in parallel across all the roles).

Effectively, this treats Heat as nothing but a yaml cruncher. When
using it with deployed-server, Heat doesn't actually change anything
on an overcloud node, you're only using it to generate ansible.
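
For a sense of the plumbing, the metadata-pull half of such a script
could look roughly like the sketch below (not the pump code itself; the
endpoint, token, stack name and resource-type filter are placeholders):

from heatclient.client import Client

heat = Client('1', endpoint='http://undercloud:8004/v1/TENANT_ID',
              token='TOKEN')

# Walk the nested stack for servers and their software deployments.
for res in heat.resources.list('overcloud', nested_depth=5):
    if res.resource_type != 'OS::Nova::Server':
        continue
    server_id = res.physical_resource_id
    for dep in heat.software_deployments.list(server_id=server_id):
        # dep.config_id points at the software config (script, ansible,
        # etc.) from which playbook tasks would be generated.
        print(server_id, dep.id, dep.status)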

Honestly, I think I will prefer the longer term approach of using
stack outputs. Although, I am not sure of the end goal of that work
and if it is the same as this prototype.

And some of what I've done may be useful with that approach as well:
https://review.openstack.org/#/c/485303/

However, I found this prototype interesting and worth exploring for a
couple of reasons:

Regardless of the approach we take, I wanted to explore what an end
result might look like. Personally, this illustrates what I kind of
had in mind for an "end goal".

I also wanted to see if this was at all feasible. I envisioned some
hurdles, such as deployments depending on output values of previous
deployments, but we actually only do that in 1 place in
tripleo-heat-templates, and I was able to work around that. In the end
I used it to deploy an all in one overcloud equivalent to our
multinode CI job, so I believe it's feasible.

It meets most of the requirements we're looking to get out of ansible.
You can (re)apply just a single deployment, or a given deployment
across all ResourceGroup members, or all deployments for a given
server(s); it's easy to see what failed and for what servers, etc.

Finally, it's something we could deliver without much (any?) change
in tripleo-heat-templates. Although I'm not trying to say it'd be a
small amount of work to even do that, as this is a very rough
prototype.

-- 
-- James Slagle
--



Re: [openstack-dev] [nova] bug triage experimentation

2017-07-20 Thread Nematollah Bidokhti
Hi,

I have missed the original email on this subject.
We [Fault Genes WG] have been doing some machine learning analysis on Nova 
bugs/issues from 3 different sources (Launchpad, Stackoverflow, 
ask.openstack.org). We have been able to take all the issues and bring them 
down to 15 clusters.
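
A minimal sketch of that kind of clustering pipeline (assuming
scikit-learn and an invented export file of bug titles):

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

titles = open('nova_bug_titles.txt').read().splitlines()  # hypothetical export

X = TfidfVectorizer(stop_words='english',
                    max_features=5000).fit_transform(titles)
km = KMeans(n_clusters=15, random_state=0).fit(X)  # 15 clusters, as above

for label in range(15):
    members = [t for t, l in zip(titles, km.labels_) if l == label]
    print('cluster %d: %d bugs' % (label, len(members)))
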
We have tried to find open source tools that can help us define the fault 
classifications, but have not been able to find any tool.

Therefore, our team has come to the conclusion that we need the support of
some Nova experts to help define the classifications. I would like to have some 
discussions with Sean and others that have an interest in this area and compare 
notes and see how we can collaborate.

The goal of our WG is to apply the same technique to all key OpenStack projects.

Thanks,
Nemat 

-Original Message-
From: Emilien Macchi [mailto:emil...@redhat.com] 
Sent: Wednesday, July 05, 2017 12:24 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova] bug triage experimentation

On Fri, Jun 23, 2017 at 9:52 AM, Sean Dague  wrote:
> The Nova bug backlog is just over 800 open bugs, which, while 
> historically not terrible, remains too large to be collectively usable 
> to figure out where things stand. We've had a few recent issues where 
> we just happened to discover upgrade bugs filed 4 months ago that 
> needed fixes and backports.
>
> Historically we've tried to just solve the bug backlog with volunteers.
> We've had many a brave person dive in here, and burn out after 4 - 6 
> months. And we're currently without a bug lead. Having done a big 
> giant purge in the past
> (http://lists.openstack.org/pipermail/openstack-dev/2014-September/046
> 517.html)
> I know how daunting this all can be.
>
> I don't think that people can currently solve the bug triage problem 
> at the current workload that it creates. We've got to reduce the smart 
> human part of that workload.
>
> But, I think that we can also learn some lessons from what active 
> github projects do.
>
> #1 Bot away bad states
>
> There are known bad states of bugs - In Progress with no open patch, 
> Assigned but not In Progress. We can just bot these away with scripts.
> Even better would be to react immediately on bugs like those, that 
> helps to train folks how to use our workflow. I've got some starter 
> scripts for this up at - https://github.com/sdague/nova-bug-tools
>
> #2 Use tag based workflow
>
> One lesson from github projects, is the github tracker has no workflow.
> Issues are opened or closed. Workflow has to be invented by every 
> team based on a set of tags. Sometimes that's annoying, but often 
> times it's super handy, because it allows the tracker to change 
> workflows and not try to change the meaning of things like "Confirmed 
> vs. Triaged" in your mind.
>
> We can probably tag for information we know we need a lot easier. I'm 
> considering something like
>
> * needs.system-version
> * needs.openstack-version
> * needs.logs
> * needs.subteam-feedback
> * has.system-version
> * has.openstack-version
> * has.reproduce
>
> Some of these a bot can process the text on and tell if that info was 
> provided, and comment how to provide the updated info. Some of this 
> would be human, but with official tags, it would probably help.
>
> #3 machine assisted functional tagging
>
> I'm playing around with some things that might be useful in mapping 
> new bugs into existing functional buckets like: libvirt, volumes, etc. 
> We'll see how useful it ends up being.
>
> #4 reporting on smaller slices
>
> Build some tooling to report on the status and change over time of 
> bugs under various tags. This will help visualize how we are doing
> (hopefully) and where the biggest piles of issues are.
>
> The intent is the normal unit of interaction would be one of these 
> smaller piles. Be they the 76 libvirt bugs, 61 volumes bugs, or 36 
> vmware bugs. It would also highlight the rates of change in these 
> piles, and what's getting attention and what is not.
>
>
> This is going to be kind of an ongoing experiment, but as we currently 
> have no one spearheading bug triage, it seemed like a good time to 
> try this out.
>
> Comments and other suggestions are welcomed. The tooling will have the 
> nova flow in mind, but I'm trying to make it so it takes a project 
> name as params on all the scripts, so anyone can use it. It's a little 
> hack and slash right now to discover what the right patterns are.
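
As a rough sketch of the "bot away bad states" idea from the quoted
message (assuming launchpadlib; the "no open patch" test is simplified
here to "no linked branches", and a real bot would comment and reset
status rather than just report):

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('bug-sweeper', 'production', version='devel')
nova = lp.projects['nova']

for task in nova.searchTasks(status='In Progress'):
    bug = task.bug
    if not task.assignee or not list(bug.linked_branches):
        print('bad state: bug %s (%s)' % (bug.id, task.status))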

I also believe that some of the scripts could be transformed into native 
features of Storyboard where bugs could be auto-triaged periodically without 
human intervention.
Maybe it would convince more OpenStack projects to leave Launchpad and adopt 
Storyboard?
I would certainly be one of those and propose such a change for TripleO & related 
projects.

Thanks,

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> 

Re: [openstack-dev] [OpenStack-docs] [doc] dropping "draft" and series-specific publishing for docs.o.o

2017-07-20 Thread Andreas Jaeger
On 2017-07-20 21:11, Doug Hellmann wrote:
> Docs team,
> 
> We have two sets of changes happening in openstack-manuals that
> will simplify managing the content there.
> 
> Over the last couple of weeks we have removed several guides from
> the repository. The remaining guides are not version-specific, which
> allows us to stop creating stable branches of that repository. We will
> continue to publish from stable/newton and stable/ocata for as long
> as those versions of the guides are supported.

The only guide that *was* version-specific and is still around is the
Install Guide. But we moved all OpenStack content out and left only the
generic content in. I think we can just keep this unversioned and add
version-specific instructions where needed, like "If you use Queens, ...".

So, let's change the Install Guide to be unversioned and get rid of
branching!

> I am also preparing a series of patches [1] to rearrange some of
> the template pages to let us publish directly from master to docs.o.o,
> without using a separate "draft" directory that requires special
> effort at the end of a release cycle. When the series is approved
> (specifically when [2] lands), changes approved in the master branch
> will go live on the site within an hour or so of merging. They will
> no longer be published to the /drafts folder.
> 
> Both of these are changes to the current process, so we wanted to
> ensure that all contributors (and especially reviewers) were aware
> of the changes. Please keep this in mind when approving future
> changes.
> 
> The last patch in the series [3] updates the instructions for the
> end-of-release process based on these changes. I want to make sure
> these instructions are clear so that someone other than me can
> perform the steps, so I need your feedback on that patch especially.

thanks a lot for driving all of this, Doug! I really like where this
is going,

Andreas

> Doug
> 
> [1] 
> https://review.openstack.org/#/q/project:openstack/openstack-manuals+topic:doc-migration/no-more-drafts
> [2] https://review.openstack.org/484971
> [3] https://review.openstack.org/485789
> 
> 
> ___
> OpenStack-docs mailing list
> openstack-d...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[Openstack-operators] Taking down the App Catalog (apps.openstack.org)

2017-07-20 Thread Jeremy Stanley
TL;DR is that the apps.openstack.org site will be taken offline at
the end of this month.

A few months ago, the TC removed[1] the App Catalog team from official
governance, citing concerns over ecosystem confusion with other more
prevalent and successful application cataloguing services. The Infra
team has, since that time, continued to host the beta-test site[2]
for it but the catalog itself[3] has received no updates for over 6
months now and at this point probably constitutes an attractive
nuisance.

Given that the TC decision (as stated in the commit message for the
above mentioned governance change) says to "no longer expose it on
apps.openstack.org" we probably kept it running longer than we
really should have. The Infra team will therefore be taking down
this service on or soon after Monday, July 31. Please make whatever
preparations you may need if you previously depended on this
service.

[1] https://review.openstack.org/452086
[2] https://apps.openstack.org/
[3] https://git.openstack.org/cgit/openstack/app-catalog/
-- 
Jeremy Stanley




[openstack-dev] [doc] dropping "draft" and series-specific publishing for docs.o.o

2017-07-20 Thread Doug Hellmann
Docs team,

We have two sets of changes happening in openstack-manuals that
will simplify managing the content there.

Over the last couple of weeks we have removed several guides from
the repository. The remaining guides are not version-specific, which
allows us to stop creating stable branches of that repository. We will
continue to publish from stable/newton and stable/ocata for as long
as those versions of the guides are supported.

I am also preparing a series of patches [1] to rearrange some of
the template pages to let us publish directly from master to docs.o.o,
without using a separate "draft" directory that requires special
effort at the end of a release cycle. When the series is approved
(specifically when [2] lands), changes approved in the master branch
will go live on the site within an hour or so of merging. They will
no longer be published to the /drafts folder.

Both of these are changes to the current process, so we wanted to
ensure that all contributors (and especially reviewers) were aware
of the changes. Please keep this in mind when approving future
changes.

The last patch in the series [3] updates the instructions for the
end-of-release process based on these changes. I want to make sure
these instructions are clear so that someone other than me can
perform the steps, so I need your feedback on that patch especially.

Doug

[1] 
https://review.openstack.org/#/q/project:openstack/openstack-manuals+topic:doc-migration/no-more-drafts
[2] https://review.openstack.org/484971
[3] https://review.openstack.org/485789




Re: [openstack-dev] [infra][deb-packaging] Stop using track-upstream for deb projects

2017-07-20 Thread Andreas Jaeger
On 2017-06-18 11:06, Andreas Jaeger wrote:
> On 2017-06-13 15:01, Paul Belanger wrote:
>> Greetings,
>>
>> I'd like to propose we stop using track-upstream for project-config on
>> deb-packaging projects. It seems there is no active development on these
>> projects currently, and by using track-upstream we are wasting both CI
>> resources and HDD space keeping these projects in sync with their upstream
>> openstack projects.
>>
>> Long term, we don't actually want to support the behavior. I propose we stop
>> doing this today, and if somebody steps up to continue the effort on
>> packaging our release we can then progress forward without the need for
>> track-upstream.
>>
>> Effectively, track-upstream duplicates the size of a project's git repo. For
>> example, if deb-nova is set up to track-upstream of nova, we copy all commits
>> and import them into deb-nova. This puts unneeded pressure on our
>> infrastructure moving forward; the git overlay option for gbp is likely the
>> solution we could use.
> 
> Indeed ;(
> 
> Do you have a patch ready or was there some alternative proposal?

to close the loop:

Patch is up now:

https://review.openstack.org/#/c/485362/

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [hacking] Propose removal of cores

2017-07-20 Thread Andrea Frittoli
On Tue, Jun 27, 2017 at 3:01 PM John Villalovos 
wrote:

> I am proposing that the following people be removed as core reviewers from
> the hacking project:
> https://review.openstack.org/#/admin/groups/153,members
>
> Joe Gordon
> James Carey
>
> +1


>
> Joe Gordon:
> Has not done a review in OpenStack since 16-Feb-2017
> http://stackalytics.com/?release=all_id=jogo
>
> Has not done a review in hacking since 23-Jan-2016:
> http://stackalytics.com/?module=hacking_id=jogo=all
>
>
> James Carey
> Has not done a review in OpenStack since 9-Aug-2016
> http://stackalytics.com/?release=all_id=jecarey
>
> Has not done a review in hacking since 9-Aug-2016:
> http://stackalytics.com/?module=hacking=all_id=jecarey
>
>
> And maybe this project needs more core reviewers as there have been six
> total reviews by four core reviewers so far in the Pike cycle:
> http://stackalytics.com/?release=pike=hacking
>
> it's always nice to have core reviewers.
The volume on the project is quite low though, so I suppose the current
team should be able to manage it.
If a patch is lacking reviews please ping in the QA channel :)

Andrea Frittoli (andreaf).




Re: [Openstack-operators] Fault Genes WG

2017-07-20 Thread Rochelle Grober
The meeting is an online video/voice/collaboration meeting.  By clicking the 
link: https://welink-meeting.zoom.us/j/317491860 you will go to a page that 
will download the zoom client installation package.  Install that, run it and 
put the meeting ID in where asked.  Zoom works all over the world.  Once you 
have the client, when you click on the link in the future, it will ask you 
whether you want the zoom client launched.

No IRC room for this one.  It seems that the user groups often are more 
comfortable and productive with interactive meetings.

Hope this helps.

--Rocky


From: randy.perry...@dell.com [mailto:randy.perry...@dell.com]
Sent: Thursday, July 20, 2017 9:06 AM
To: Nematollah Bidokhti ; 
user-commit...@lists.openstack.org; openstack-operators@lists.openstack.org
Subject: Re: [User-committee] [Openstack-operators] Fault Genes WG

Hi,
Is this only on a weblink?  Is there a meeting room on IRC for this?

-Original Appointment-
From: Nematollah Bidokhti [mailto:nematollah.bidok...@huawei.com]
Sent: Wednesday, March 22, 2017 8:20 PM
To: Nematollah Bidokhti; 
user-commit...@lists.openstack.org; 
openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Fault Genes WG
When: Thursday, July 20, 2017 9:00 AM-10:00 AM (UTC-08:00) Pacific Time (US & 
Canada).
Where: Using Zoom Conf. Service - Meeting ID is 317491860


When: Occurs every Thursday from 9:00 AM to 10:00 AM effective 3/23/2017. 
(UTC-08:00) Pacific Time (US & Canada)
Where: Using Zoom Conf. Service - Meeting ID is 317491860

*~*~*~*~*~*~*~*~*~*
Hi there,

nematollah.bidok...@huawei.com is 
inviting you to a scheduled Zoom meeting.

Topic: Fault Genes WG

Time: this is a recurring meeting Meet anytime

Join from PC, Mac, Linux, iOS or Android: 
https://welink-meeting.zoom.us/j/317491860

Or iPhone one-tap (US Toll):  +16465588656,317491860# or +14086380968,317491860#

Or Telephone:

Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)
Meeting ID: 317 491 860

International numbers available: 
https://welink-meeting.zoom.us/zoomconference?m=qqUZ1nX7Q2YCsoeZbbUf9Wf3EkBnmwWe





Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-20 Thread Graham Hayes
On 19/07/17 20:24, Jeremy Stanley wrote:
> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
> 
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
> 
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb

This one should have been removed a while ago:

https://review.openstack.org/#/q/topic:remove-openstack-gslb-irc

> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr
> 
> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
> 
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Intermittent Jenkins failures

2017-07-20 Thread Jeremy Stanley
On 2017-07-20 09:27:23 -0600 (-0600), Alex Schultz wrote:
> (updated topic to include [tripleo])
> 
> On Thu, Jul 20, 2017 at 9:20 AM, Abhishek Kane  
> wrote:
[...]
> > and undercloud install:
> >
> > http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82
> 
> http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82bd9ff/logs/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2017-07-19_10_59_21
> 
> mirror issues
[...]

Indirectly, but the underlying problem seems to be general network
connectivity issues for that particular provider/region, so we've
taken it out of service to avoid further impact:

https://review.openstack.org/485603

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-07-20 Thread Chris Dent


Greetings OpenStack community,

Most of the time in today's meeting was dedicated to discussing ideas for the 
Guided Review Process [0] (that link will expire in favor of a gerrit review 
soon) we are considering for the PTG. The idea is that projects which are 
enmeshed in debate over how to correctly follow the guidelines in their APIs 
can come to a process of in-person review at the PTG. All involved can engage 
in the discussion and learn. The exact mechanics are still being worked out. 
The wiki page at [0] is a starting point which will be reviewed and revised on 
gerrit. Our discussion today centered around trying to make sure we can 
actually productively engage with one another.

There's been little activity with regard to guidelines or bugs recently. This 
is mostly because everyone is very busy with other responsibilities. We hope 
things will smooth out soon.

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] https://wiki.openstack.org/wiki/API_Working_Group/Guided_Review_Process
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache - PLAN OF ACTION

2017-07-20 Thread Sean Dague
On 07/20/2017 09:27 AM, Sean Dague wrote:

> Here is a starting patch that gets us close (no tests yet) -
> https://review.openstack.org/#/c/485602/ - it's going to require a paste
> change, which is less than ideal.

After some #openstack-nova IRC discussion this morning, we decided the
following:

1) we need something like this!

2) changing paste.ini, and having an upgrade note, is not the end of the
world.

If you are cutting over from eventlet to uwsgi/apache, you are going to
need to make other configuration management changes in your environment;
adding this to the mix would be another part of that manual change.

3) try to get this to go silent if enabled under eventlet (to prevent
duplicate lines) as a stretch goal (which I think I just got working).
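
For (3), the check can be roughly this shape - a sketch only; the
monkey-patch heuristic is an assumption, not necessarily what the
review ended up with:

    import sys

    def under_eventlet():
        # If eventlet has monkey-patched the process, assume
        # oslo.service / eventlet.wsgi is already emitting request
        # lines, so this middleware should stay silent.
        eventlet = sys.modules.get('eventlet')
        return bool(eventlet and
                    eventlet.patcher.is_monkey_patched('socket'))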

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Fault Genes WG

2017-07-20 Thread Randy.Perryman
Dell - Internal Use - Confidential

Hi,
Is this only on a weblink?  Is there a meeting room on IRC for this?

  -Original Appointment-
  From: Nematollah Bidokhti [mailto:nematollah.bidok...@huawei.com]
  Sent: Wednesday, March 22, 2017 8:20 PM
  To: Nematollah Bidokhti; user-commit...@lists.openstack.org; 
openstack-operators@lists.openstack.org
  Subject: [Openstack-operators] Fault Genes WG
  When: Thursday, July 20, 2017 9:00 AM-10:00 AM (UTC-08:00) Pacific Time 
(US & Canada).
  Where: Using Zoom Conf. Service - Meeting ID is 317491860


  When: Occurs every Thursday from 9:00 AM to 10:00 AM effective 3/23/2017. 
(UTC-08:00) Pacific Time (US & Canada)
  Where: Using Zoom Conf. Service - Meeting ID is 317491860

  *~*~*~*~*~*~*~*~*~*


  Hi there,



  nematollah.bidok...@huawei.com is 
inviting you to a scheduled Zoom meeting.



  Topic: Fault Genes WG



	Time: this is a recurring meeting. Meet anytime.



  Join from PC, Mac, Linux, iOS or Android: 
https://welink-meeting.zoom.us/j/317491860



  Or iPhone one-tap (US Toll):  +16465588656,317491860# or 
+14086380968,317491860#



  Or Telephone:


  Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)

  Meeting ID: 317 491 860


  International numbers available: 
https://welink-meeting.zoom.us/zoomconference?m=qqUZ1nX7Q2YCsoeZbbUf9Wf3EkBnmwWe





___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] novnc client authenticating on the wrong nova-consoleauth at random

2017-07-20 Thread Jean-Philippe Methot

Hi,

I'm running a multi-region openstack setup. I'm running into a strange 
issue where when I try to open the novnc console, it will try to 
authenticate on a nova-consoleauth service in a random region. So, if I 
try to access a novnc console of an instance in regiontwo, the client 
will sometimes try to authenticate on the nova-consoleauth in region 
one. This results in the token either getting rejected (if I don't have 
regiontwo tokens in memcached) or nova-consoleauth not finding the 
instance (if I also store regiontwo tokens in regionone memcached).


I should also mention that I am using the horizon instance running in regionone. 
Also interesting is the fact that the client accessing instance consoles 
in regionone will NEVER try to authenticate on nova-consoleauth in 
regiontwo. Only regiontwo instances will try to do so (and fail).


Any idea on what could be causing this?

--
Jean-Philippe Méthot
Cloud system administrator
PlanetHoster inc.
www.planethoster.net


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Zane Bitter

On 19/07/17 22:27, Monty Taylor wrote:
I propose we set aside time at the PTG to dig in to this. Between Zane 
and I and the Keystone core team I have confidence we can find a way out.


This may be a bad time to mention that regrettably I won't be attending 
the PTG, due to (happy!) family reasons.


It sounds like you and I are on the same page already in terms of the 
requirements though. I'm fairly relaxed about what the solution looks 
like, as long as we actually address those requirements.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO

2017-07-20 Thread Flavio Percoco

On 20/07/17 08:18 -0700, Emilien Macchi wrote:

On Thu, Jul 20, 2017 at 7:27 AM, Andy McCrae  wrote:
[...]

Hopefully that is useful, happy to discuss this more (or any other
collaboration points!) if that does sound interesting.
Andy

[1]
https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py
[2]
https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html


Yes, this is very useful and this is what I also wanted to investigate
more back in June:
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118417.html
.

Like Flavio said, it sounds like we might just re-use what you guys
did, since it looks flexible.
What Doug wrote [1] stays very useful: since we don't want to re-use
your templates, we would rather generate the list of options available
in OpenStack projects by using oslo.config directly. We could provide
a YAML file with key/values of the things we want to generate in an
inifile. Now we could ask ourselves, in that case, why not directly
make oslo.config read YAML instead of ini? Do we really need a
translator?

User input → YAML → OSA config template plugin → INI → read by oslo.config

we could have:

User input → YAML → read by oslo.config

I've discussed these options with some operators, but I want to
reiterate them here. Any thoughts?

[1] https://github.com/dhellmann/oslo-config-ansible



The plugin, as is, is capable of generating INI files as well as YAML files. I
don't really see the need to make oslo.config read YAML.

On one side the idea sounds appealing because, well, we can do more with YAML
than we can with INI files. On the other side, though, I don't think it's worth
the time right now.

A migration to YAML files requires way more work than we can account for. Not
only do we need to make oslo.config support it, we also have to maintain
compatibility for quite a few cycles, etc.

Don't get me wrong. If someone wants to work on this, I'm good. What I'm saying
is that, at this point, I don't think it's worth it. It may be in the future.
The reality is that we depend on INI files now and we will for the foreseeable
future, so the work has to be done anyway.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Intermittent Jenkins failures

2017-07-20 Thread Alex Schultz
(updated topic to include [tripleo])

On Thu, Jul 20, 2017 at 9:20 AM, Abhishek Kane
 wrote:
> Hi,
>
>
>
> Recently saw intermittent jenkins failures in different scenarios for patch
> https://review.openstack.org/#/c/475765/17.
>
>
>
> Current ones are-
>
> In overcloud deploy:
>
> http://logs.openstack.org/65/475765/17/check/gate-tripleo-ci-centos-7-scenario001-multinode-oooq/685b8bd/console.html
> http://logs.openstack.org/65/475765/17/check/gate-tripleo-ci-centos-7-scenario001-multinode-oooq-container/1dbde7d/console.html
>
>

https://bugs.launchpad.net/tripleo/+bug/1705481

>
> and undercloud install:
>
> http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82
>
>

http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82bd9ff/logs/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2017-07-19_10_59_21

mirror issues

>
> Anybody else facing this issue?
>
>
>
> Thanks,
>
> Abhishek
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Intermittent Jenkins failures

2017-07-20 Thread Abhishek Kane
Hi,

Recently saw intermittent jenkins failures in different scenarios for patch 
https://review.openstack.org/#/c/475765/17.

Current ones are-
In overcloud deploy:
http://logs.openstack.org/65/475765/17/check/gate-tripleo-ci-centos-7-scenario001-multinode-oooq/685b8bd/console.html
 
http://logs.openstack.org/65/475765/17/check/gate-tripleo-ci-centos-7-scenario001-multinode-oooq-container/1dbde7d/console.html

and undercloud install:
http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82

Anybody else facing this issue?

Thanks,
Abhishek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO

2017-07-20 Thread Emilien Macchi
On Thu, Jul 20, 2017 at 7:27 AM, Andy McCrae  wrote:
[...]
> Hopefully that is useful, happy to discuss this more (or any other
> collaboration points!) if that does sound interesting.
> Andy
>
> [1]
> https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py
> [2]
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html

Yes, this is very useful and this is what I also wanted to investigate
more back in June:
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118417.html
.

Like Flavio said, it sounds like we might just re-use what you guys
did, since it looks flexible.
What Doug wrote [1] stays very useful: since we don't want to re-use
your templates, we would rather generate the list of options available
in OpenStack projects by using oslo.config directly. We could provide
a YAML file with key/values of the things we want to generate in an
inifile. Now we could ask ourselves, in that case, why not directly
make oslo.config read YAML instead of ini? Do we really need a
translator?

User input → YAML → OSA config template plugin → INI → read by oslo.config

we could have:

User input → YAML → read by oslo.config
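
To make the "translator" step concrete, here is a minimal sketch
(illustrative only, not the PoC code) of rendering a nested YAML
mapping into an inifile that oslo.config can read:

    import configparser
    import io

    import yaml  # PyYAML

    USER_INPUT = """
    DEFAULT:
      debug: true
    keystone_authtoken:
      auth_url: http://keystone.example.com/v3
    """

    def yaml_to_ini(text):
        data = yaml.safe_load(text)
        parser = configparser.ConfigParser()
        parser.read_dict(data)  # read_dict str()-ifies the values
        buf = io.StringIO()
        parser.write(buf)
        return buf.getvalue()

    print(yaml_to_ini(USER_INPUT))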

I've discussed these options with some operators, but I want to
reiterate them here. Any thoughts?

[1] https://github.com/dhellmann/oslo-config-ansible
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Architecture support for either VM or Ironic instance as Containers' Host ?

2017-07-20 Thread Mark Goddard
Hi Greg,

You're correct - magnum supports running on top of either VMs or
baremetal. Currently baremetal is supported for kubernetes on Fedora core
only[1]. There is a cluster template parameter 'server_type', which should
be set to 'BM' for baremetal clusters.

In terms of how this works within magnum, each magnum driver advertises one
or more (OS, COE, server_type) tuples that it supports via its 'provides'
property. There is no 'container-host-driver API' - magnum drivers are
largely just a collection of heat templates and a little python glue.
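
As a rough illustration (modeled on the k8s_fedora_ironic_v1 driver;
details may differ between releases), a driver's 'provides' property
looks something like:

    from magnum.drivers.heat import driver

    class Driver(driver.HeatDriver):

        @property
        def provides(self):
            # Advertise the (server_type, os, coe) combinations this
            # driver can deploy.
            return [
                {'server_type': 'bm',
                 'os': 'fedora',
                 'coe': 'kubernetes'},
            ]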

Due to historic and current limitations with ironic (mostly around
networking[2] and block storage support), drivers typically support either
VM or BM. Ironic networking has improved over the last few releases, and it
is becoming feasible to support baremetal using the standard VM drivers. I
think there is a general desire within the project to only support one set
of drivers and remove the maintenance burden.

In terms of your use case, I think that your proprietary bare metal service
would likely not work with any existing drivers. If it could be integrated
with heat, then there is a chance that you could implement a magnum driver
and reuse some of the shared magnum code for configuring and running COEs.

[1]
https://github.com/openstack/magnum/tree/master/magnum/drivers/k8s_fedora_ironic_v1
[2] https://bugs.launchpad.net/magnum/+bug/1544195

On 17 July 2017 at 14:18, Waines, Greg  wrote:

> I believe the MAGNUM architecture supports using either a VM Instance or
> an Ironic Instance as the Host for the COE’s masters and minions.
>
>
>
> How is this done / abstracted within the MAGNUM Architecture ?
>
> i.e. is there a ‘container-host-driver API’ that is defined; and
> implemented for both VM and Ironic ?
>
> ( Feel free to just refer me to a URL that describes this. )
>
>
>
> The reason I ask is that I have a proprietary bare metal service that I
> would like to have MAGNUM run on top of.
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Looking for a good end-to-end demo of ironic integrated within openstack

2017-07-20 Thread Steven Dake
Greg,

This may not be exactly what you're looking for, but close.  (Ocata - NB no
inspector implementation)

Ocata video (no audio):
https://www.youtube.com/watch?v=rHCCUP2odd8&t=52s

Or check out this more extensive (with audio track) video of real-world
usage of ironic:
https://www.youtube.com/watch?v=V389ecbzjFs

Regards
-steve

On Thu, Jul 20, 2017 at 7:44 AM, Waines, Greg 
wrote:

> hey there,
>
>
>
> I’m an ironic newbie ...
>
> where can I find a good / relatively-current (e.g. PIKE) demo of Ironic
> integrated within OpenStack ?
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Looking for a good end-to-end demo of ironic integrated within openstack

2017-07-20 Thread Lucas Alvares Gomes
Hi Greg,

> I’m an ironic newbie ...
>

First off, welcome to the community (-:

> where can I find a good / relatively-current (e.g. PIKE) demo of Ironic
> integrated within OpenStack ?
>

I would recommend deploying it with DevStack on a VM and playing with it;
you can follow this document in order to do so:
https://docs.openstack.org/ironic/latest/contributor/dev-quickstart.html#deploying-ironic-with-devstack

Hope that helps,
Lucas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Looking for a good end-to-end demo of ironic integrated within openstack

2017-07-20 Thread Waines, Greg
hey there,

I’m an ironic newbie ...
where can I find a good / relatively-current (e.g. PIKE) demo of Ironic 
integrated within OpenStack ?

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO

2017-07-20 Thread Flavio Percoco

On 20/07/17 15:27 +0100, Andy McCrae wrote:

Hi all,




Some areas of collaboration:

* Kubernetes resources: Work on the same set of resources. In this case,
 resources means the existing templates in kolla-kubernetes. Find ways to
share
 the same resources rather than having 2 different sets of resources.

* Configuration management: Work on a common ansible role/module for
generating
 configuration files. There's a PoC already[1] but it's still being worked
on.
 The PoC will likely turn into an Ansible module rather than a role.
@flaper87
 is working on this.



On this point specifically, we have the config_template module[1] in
OpenStack-Ansible, which sounds like it
already does similar things to what you are after. Essentially you can
supply a YAML-formatted config and it will
generate a JSON, INI or YAML conf file for you. We have some docs around
using the module [2] - and it's already in use by the
ceph-ansible project.

We use it on top of templates, to allow the deployer to specify any options
that aren't templated, but you could
just as easily use it on a blank/empty start point and do away with
templates completely.

We tried to push it into Ansible core a few years ago, but there was
push back based on there being other ways to achieve that. I think
there has been a shift in Ansible's approach to accepting new
features/modules, though, so Kevin Carter (cloudnull) is going to have
another go at upstreaming it, since it seems generically useful for
Ansible projects.

Hopefully that is useful, happy to discuss this more (or any other
collaboration points!) if that does sound interesting.
Andy

[1]
https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py
[2]
https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html


I just learned this module exists. Thanks for reaching out!

By looking at the source code, it looks like this is exactly what we need and
what we were hoping to come up with. YAY! Open Source! YAY! OpenStack!

As mentioned on IRC, I'll add validation to this module (check the keys actually
exist, the types are valid, etc) based on the YAML schema that can be generated
with oslo-config-gen now.

I'll reach out again as soon as I have something to show on this front.
Thanks again for your work,
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Lance Bragstad


On 07/19/2017 09:27 PM, Monty Taylor wrote:
> On 07/19/2017 12:18 AM, Zane Bitter wrote:
>> On 18/07/17 10:55, Lance Bragstad wrote:

 Would Keystone folks be happy to allow persistent credentials once
 we have a way to hand out only the minimum required privileges?


 If I'm understanding correctly, this would make application
 credentials dependent on several cycles of policy work. Right?
>>>
>>> I think having the ability to communicate deprecations though
>>> oslo.policy would help here. We could use it to move towards better
>>> default roles, which requires being able to set minimum privileges.
>>>
>>> Using the current workflow requires operators to define the minimum
>>> privileges for whatever is using the application credential, and
>>> work that into their policy. Is that the intended workflow that we
>>> want to put on the users and operators of application credentials?
>>
>> The plan is to add an authorisation mechanism that is user-controlled
>> and independent of the (operator-controlled) policy. The beginnings
>> of this were included in earlier drafts of the spec, but were removed
>> in patch set 19 in favour of leaving them for a future spec:
>>
>> https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst
>
>
> Yes - that's right - and I expect to start work on that again as soon
> as this next keystoneauth release with version discovery is out the door.
>
> It turns out there are different POVs on this topic, and it's VERY
> important to be clear which one we're talking about at any given point
> in time. A bunch of the confusion just in getting as far as we've
> gotten so far came from folks saying words like "policy" or "trusts"
> or "ACLs" or "RBAC" - but not clarifying which group of cloud users
> they were discussing and from what context.
>
> The problem that Zane and I are are discussing and advocating for are
> for UNPRIVILEDGED users who neither deploy nor operate the cloud but
> who use the cloud to run applications.
>
> Unfortunately, neither the current policy system nor trusts are useful
> in any way shape or form for those humans. Policy and trusts are tools
> for cloud operators to take a certain set of actions.
>
> Similarly, the concern from the folks who are not in favor of
> project-lifecycled application credentials is the one that Zane
> outlined - that there will be $someone with access to those
> credentials after a User change event, and thus $security will be
> compromised.
>
> There is a balance that can and must be found. The use case Zane and I
> are talking about is ESSENTIAL, and literally ever single human who is
> a actually using OpenStack to run applications needs it. Needed it
> last year in fact, and they are, in fact doing things like writing
> ssh-agent like daemons in which they can store their corporate LDAP
> credentials so that their automation will work because we're not
> giving them a workable option.
>
> That said, the concerns about not letting a thing out the door that is
> insecure by design like PHP4's globally scoped URL variables is also
> super important.
>
> So we need to find a design that meets both goals.
>
> I have thoughts on the topic, but have been holding off until
> version-discovery is out the door. My hunch is that, like application
> credentials, we're not going to make significant headway without
> getting humans in the room - because the topic is WAY too fraught with
> peril.
>
> I propose we set aside time at the PTG to dig in to this. Between Zane
> and I and the Keystone core team I have confidence we can find a way out.

Done. I've added this thread to keystone's planning etherpad under
cross-project things we need to talk about [0]. Feel free to elaborate
and fill in context as you see fit. I'll make sure the content makes
it's way into a dedicated etherpad before we have that discussion
(usually as I go through each topic and plan the schedule).


[0] https://etherpad.openstack.org/p/keystone-queens-ptg

>
> Monty
>
> PS. It will not help to solve limited-scope before we solve this.
> Limited scope is an end-user opt-in action and having it does not
> remove the concerns that have been expressed.
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO

2017-07-20 Thread Andy McCrae
Hi all,

>
>
> Some areas of collaboration:
>
> * Kubernetes resources: Work on the same set of resources. In this case,
>  resources means the existing templates in kolla-kubernetes. Find ways to
> share
>  the same resources rather than having 2 different sets of resources.
>
> * Configuration management: Work on a common ansible role/module for
> generating
>  configuration files. There's a PoC already[1] but it's still being worked
> on.
>  The PoC will likely turn into an Ansible module rather than a role.
> @flaper87
>  is working on this.
>

On this point specifically, we have the config_template module[1] in
OpenStack-Ansible, which sounds like it
already does similar things to what you are after. Essentially you can
supply a YAML-formatted config and it will
generate a JSON, INI or YAML conf file for you. We have some docs around
using the module [2] - and it's already in use by the
ceph-ansible project.

We use it on top of templates, to allow the deployer to specify any options
that aren't templated, but you could
just as easily use it on a blank/empty start point and do away with
templates completely.

We tried to push it into Ansible core a few years ago, but there was
push back based on there being other ways to achieve that. I think
there has been a shift in Ansible's approach to accepting new
features/modules, though, so Kevin Carter (cloudnull) is going to have
another go at upstreaming it, since it seems generically useful for
Ansible projects.

Hopefully that is useful, happy to discuss this more (or any other
collaboration points!) if that does sound interesting.
Andy

[1]
https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py
[2]
https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Logging in containerized services

2017-07-20 Thread Lars Kellogg-Stedman
On Wed, Jul 19, 2017 at 4:53 AM, Mark Goddard  wrote:

> Kolla-ansible went through this process a few years ago, and ended up with
> a solution involving heka pulling logs from files in a shared docker volume
> (kolla_logs)


That's basically the same solution that we're currently using.  I'm
specifically recommending a solution that moves away from tailing log files
and towards a /dev/log based logging interface (and I'm suggesting we use
rsyslog for gathering logs and shipping them to a remote point because
the distribution packaging for that is very mature and there's a good
chance that it's already running, particularly in tripleo target
environments).
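
For illustration, a minimal sketch (not the proposed tripleo code) of
a service sending its logs to /dev/log, where rsyslog can pick them up
and forward them:

    import logging
    import logging.handlers

    LOG = logging.getLogger('nova')  # any service logger
    handler = logging.handlers.SysLogHandler(address='/dev/log')
    handler.setFormatter(
        logging.Formatter('nova: %(levelname)s %(message)s'))
    LOG.addHandler(handler)
    LOG.setLevel(logging.INFO)

    LOG.info('started')  # rsyslog routing rules take it from here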

-- 
Lars Kellogg-Stedman 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-20 Thread Sean Dague
On 07/19/2017 06:28 PM, Matt Riedemann wrote:
> On 7/19/2017 6:16 AM, Sean Dague wrote:
>> We hit a similar issue with placement, and added custom
>> paste middleware for that. Maybe we need to consider a similar thing
>> here, that would only emit if running under uwsgi/apache?
> 
> For example, this:
> 
> http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-placement-api.txt.gz#_Jul_19_03_41_21_429324
> 
> 
> If it's not optional for placement, why would we make it optional for
> the compute API? Would turning it on always make it log the request IDs
> twice or something?
> 
> Is this a problem for glance/cinder/neutron/keystone and whoever else is
> logging request IDs in the API?

Here is a starting patch that gets us close (no tests yet) -
https://review.openstack.org/#/c/485602/ - it's going to require a paste
change, which is less than ideal.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Access to keystone_authtoken config options (required for Sahara trust)

2017-07-20 Thread Gyorgy Szombathelyi
Hi,

> I naively tried (see https://review.openstack.org/#/c/485521/ ) to simply
> replace the old config key with the new ones, but this fails with:
>  oslo_config.cfg.NoSuchOptError: no such option project_name in group
> [keystone_authtoken]
> 
> I found this thread on this list, few months ago, and apparently those options
> can't be accessed directly:
> http://lists.openstack.org/pipermail/openstack-dev/2017-
> January/110060.html
> 
> but we were accessing their old version - or maybe it was just a combination
> of luck.
> So the question for Keystone people is: how to access those values? Through
> keystonemiddleware? Is there some existing code that can be used as
> reference?
> 
Well, using [keystone_authtoken] is usually a bad idea; that's why
projects introduce other sections, like [nova], [neutron],
[service_user], etc. However, it is very confusing for the user (why
the hell does one need to configure the same settings twice), but
[keystone_authtoken] should be considered private to
keystonemiddleware. The effects can be mitigated with a default value
of auth_section in the new section; I think it would be wise to use
this in the projects (create a new section, like [service_user], set
CFG.service_user.auth_section=keystone_authtoken by default, and then
use the CFG.service_user.xxx values in your code).
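
A rough sketch of that pattern using keystoneauth1's loading helpers
(the [service_user] name and the fallback default are just the
suggestion above, not existing code):

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    GROUP = 'service_user'

    # Registers auth_type/auth_section (and session options) under
    # [service_user].
    ks_loading.register_auth_conf_options(CONF, GROUP)
    ks_loading.register_session_conf_options(CONF, GROUP)

    # Fall back to [keystone_authtoken] until deployers fill in the
    # new section explicitly.
    CONF.set_default('auth_section', 'keystone_authtoken', group=GROUP)

    def get_session():
        auth = ks_loading.load_auth_from_conf_options(CONF, GROUP)
        return ks_loading.load_session_from_conf_options(
            CONF, GROUP, auth=auth)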

For an instant solution, you can use the following ugliness:

http://git.openstack.org/cgit/openstack/murano/tree/murano/common/auth_utils.py#n28


> Ciao
> --
> Luigi

Br,
György


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-20 Thread Dulko, Michal
On Thu, 2017-07-20 at 13:06 +0000, Jeremy Stanley wrote:

On 2017-07-20 07:49:08 +0000 (+0000), Dulko, Michal wrote:
[...]


Would it be possible to *add* #openstack-helm channel during those
changes?


[...]

Absolutely! I've left a comment on your change linking this ML
thread, but I expect we'll have the situation (temporarily) resolved
over the next few days.

Great, thank you!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-20 Thread Jeremy Stanley
On 2017-07-20 07:49:08 +0000 (+0000), Dulko, Michal wrote:
[...]
> Would it be possible to *add* #openstack-helm channel during those
> changes?
[...]

Absolutely! I've left a comment on your change linking this ML
thread, but I expect we'll have the situation (temporarily) resolved
over the next few days.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][keystone][oslo.config] Access to keystone_authtoken config options (required for Sahara trust)

2017-07-20 Thread Luigi Toscano
Hi,

I was trying to deploy Sahara/Pike using TripleO and the cluster creation does 
not work, while it works on Sahara's gates.
Cluster operations use trusts (see 
http://specs.openstack.org/openstack/sahara-specs/specs/liberty/cluster-creation-with-trust.html)

The difference between the two deployments is that, in the TripleO
deployment, the [keystone_authtoken] config section no longer contains
(after https://review.openstack.org/#/c/441223/) the old options
admin_{name,password,tenant_name}, but username, password and
project_name. Sahara's gates work because we set the old options in
devstack.

I naively tried (see https://review.openstack.org/#/c/485521/ ) to simply 
replace the old config key with the new ones, but this fails with:
 oslo_config.cfg.NoSuchOptError: no such option project_name in group 
[keystone_authtoken]

I found this thread on this list from a few months ago, and apparently those 
options can't be accessed directly:
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110060.html

but we were accessing their old versions - or maybe it just worked by
luck.
So the question for Keystone people is: how to access those values? Through 
keystonemiddleware? Is there some existing code that can be used as reference?

Ciao
-- 
Luigi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO

2017-07-20 Thread Flavio Percoco

Hello Team,

The TripleO team and the Kolla team met on IRC yday to explore areas where
collaboration is possible, now that TripleO is looking into jumping on the
Kubernetes wagon.

Below you can find a brief summary of the meeting and some of the action items
that came out from it. But, before that, I'd like to take the chance to thank
everyone who participated in the meeting as I believe it was a productive
conversation. There are still many more to have but it's a good example of what
is possible.

Bullet summary:

* The Kolla team went into details about how kolla-kubernetes uses Helm.
* kolla-kubernetes doesn't depend on Helm as much as it depends on gotpl. Helm
 is still being used to render the templates and run the services, though.
 Although it's not planned, it would be technically possible to change the
 latter with calls to kubectl and the former with calls to gotpl directly.
 Again, not planned, not even discussed. Just a thought.
* TripleO would rather not have another template language.
* TripleO is interested in a solution that is primarily based on Ansible.

Some areas of collaboration:

* Kubernetes resources: Work on the same set of resources. In this case,
 resources means the existing templates in kolla-kubernetes. Find ways to share
 the same resources rather than having 2 different sets of resources.

* Configuration management: Work on a common ansible role/module for generating
 configuration files. There's a PoC already[1] but it's still being worked on.
 The PoC will likely turn into an Ansible module rather than a role. @flaper87
 is working on this.

* Work on a common orchestration playbook: It would be possible to work on a set
 of playbooks that could be shared across kolla-kubernetes, TripleO and other
 projects to orchestrate an OpenStack deployment.

Moving Forward:

Configuration management is certainly one area that we can start working on
already. As mentioned above, I've started working on it based on a previous PoC
that Doug Hellmann did. I'm in the process of translating the role into an
ansible module 'cause I believe a python module would be better for this case.

The work on common orchestration depends, to some extent, on the work for using
the same set of kubernetes resources. I'm also looking into this topic. As
mentioned in the meeting, the TripleO team would rather not add a new templating
language to the stack so I'm looking into other ways we could make this happen.
For example, I added support for generating k8s YAML files to
ansible-kubernetes[2]. No idea whether that will land or whether it
makes sense, but I'm actively working on it.

Once we figure some of the above out, we can start working on a common playbook
for orchestration. I've not mentioned anything about repos, teams, etc because I
don't think this discussion is relevant right now. Let's get something going and
work the logistics out later on.

Finally, Emilien and Michal will sync to make sure the PTG sessions for Kolla
and TripleO don't overlap so we can have more chances for shared sessions.
Ideally, we'll get to the PTG with some prototypes done already and we'll use
that time for more granular planning.

Thoughts? Corrections? Did I miss something?
Flavio

[0] 
http://eavesdrop.openstack.org/meetings/kolla/2017/kolla.2017-07-19-16.00.log.html
[1] https://github.com/flaper87/oslo-config-ansible
[2] https://github.com/ansible/ansible-kubernetes-modules/pull/4


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-20 Thread Sean Dague
On 07/19/2017 06:28 PM, Matt Riedemann wrote:
> On 7/19/2017 6:16 AM, Sean Dague wrote:
>> We hit a similar issue with placement, and added custom
>> paste middleware for that. Maybe we need to consider a similar thing
>> here, that would only emit if running under uwsgi/apache?
> 
> For example, this:
> 
> http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-placement-api.txt.gz#_Jul_19_03_41_21_429324

Placement can't run under eventlet, so there was no reason to make it
optional (or to not emit if we're under eventlet). I'm fine with it
being mandatory, but for niceness not run when we're under eventlet server.

> If it's not optional for placement, why would we make it optional for
> the compute API? Would turning it on always make it log the request IDs
> twice or something?

That was my concern. Right now the path for logging the INFO request
lines comes from following:

* http server is started via oslo.service
* oslo.service is a wrapper around eventlet.wsgi
* eventlet.wsgi takes a log object in, and uses that for logging
* that log object is our log object, and it uses our .info method to emit

Which means it has the context, which includes things like global
request-id, request-id, project, user, domain, etc.

> Is this a problem for glance/cinder/neutron/keystone and whoever else is
> logging request IDs in the API?

It will be the same issue for anyone else going from oslo.service ->
wsgi. I had forgotten that bit of the problem when we did our cut over,
but it's a pretty big problem, and it actually makes most of the global
request id work somewhat pointless, because we threw away the REST call
tracing entirely if people run under the uwsgi/apache model.
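
For reference, the middleware in question is roughly this shape (a
sketch, not the actual patch):

    import logging
    import time

    LOG = logging.getLogger(__name__)

    class RequestLogMiddleware(object):
        """Emit one INFO access line per request so the context
        (request-id and friends) shows up even under uwsgi/apache."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            start = time.time()
            status = {}

            def _start_response(code, headers, exc_info=None):
                status['code'] = code
                return start_response(code, headers, exc_info)

            body = self.app(environ, _start_response)
            LOG.info('"%s %s" status: %s time: %.6f',
                     environ.get('REQUEST_METHOD'),
                     environ.get('PATH_INFO'),
                     status.get('code', '-').split()[0],
                     time.time() - start)
            return body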

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Mirantis] How to keep ntpd down

2017-07-20 Thread John Petrini
On all of the controllers? crm resource stop clone_p_ntp should do it.
Although I can't imagine why you would want to do this. Time is very
important in OpenStack (and Ceph, if you are running it), as it sounds
like you've already found out.

The whole purpose of NTP is to keep your time in sync - if it's not doing
that you should be looking for the root cause not disabling it. You might
want to start by looking at your upstream time servers that the controllers
are using. This is configured in Fuel and the configuration is stored in
/etc/ntp.conf on the controllers.

I'd highly recommend setting up monitoring of ntp so you know when the
clock starts to drift and can respond to it before it drifts too far and
becomes a problem.
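
Even a simple cron'd check is better than nothing - something along
these lines (a sketch; the ntpq column layout is assumed from standard
output and the threshold is arbitrary):

    import subprocess

    THRESHOLD_MS = 50.0  # arbitrary; tune for your environment

    out = subprocess.check_output(['ntpq', '-pn'],
                                  universal_newlines=True)
    for line in out.splitlines()[2:]:    # skip the two header lines
        fields = line.split()
        if len(fields) < 10:
            continue
        peer = fields[0]
        if peer[0] in '*#o+x.-':         # drop the tally code, if any
            peer = peer[1:]
        offset_ms = float(fields[8])     # 9th column: offset, in ms
        if abs(offset_ms) > THRESHOLD_MS:
            print('WARNING: peer %s offset %.1f ms' % (peer, offset_ms))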

___

John Petrini


On Thu, Jul 20, 2017 at 6:29 AM, Raja T Nair  wrote:

> Hello All,
>
> Mirantis 7.0
>
> I am trying to keep ntpd down and do a periodic ntpdate against a time
> server.
> This is because one of the controllers started to drift and services on
> that node started to go down.
>
> But it seems that the ntpd daemon comes up after 10 sec every time I stop
> it.
> Is there a monitor running somewhere which brings it back?
>
> Please guide me on this and also tell me if I am doing something wrong.
>
> Regards,
> Raja.
>
> --
> :^)
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-20 Thread Sean Dague
On 07/19/2017 09:46 PM, Matt Riedemann wrote:
> On 7/19/2017 6:16 AM, Sean Dague wrote:
>> I was just starting to look through some logs to see if I could line up
>> request ids (part of global request id efforts), when I realized that in
>> the process to uwsgi by default, we've entirely lost the INFO wsgi
>> request logs. :(
>>
>> Instead of the old format (which was coming out of oslo.service) we get
>> the following -
>> http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-n-api.txt.gz#_Jul_19_03_44_58_233532
>>
>>
>>
>> That definitely takes us a step backwards in understanding the world, as
>> we lose our request id on entry that was extremely useful to match up
>> everything. We hit a similar issue with placement, and added custom
>> paste middleware for that. Maybe we need to consider a similar thing
>> here, that would only emit if running under uwsgi/apache?
>>
>> Thoughts?
>>
>> -Sean
>>
> 
> I'm noticing some other weirdness here:
> 
> http://logs.openstack.org/65/483565/4/check/gate-tempest-dsvm-py35-ubuntu-xenial/9921636/logs/screen-n-sch.txt.gz#_Jul_19_20_17_18_801773
> 
> 
> The first part of the log message got cut off:
> 
> Jul 19 20:17:18.801773 ubuntu-xenial-infracloud-vanilla-9950433
> nova-scheduler[22773]:
> -01dc-4de3-9da7-8eb3de9e305e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active'),
> 'a4eba582-075a-4200-ae6f-9fc7797c95dd':

No, it's the log message exceeded buffer limits in systemd journal, and
was split across lines. It starts 2 more lines up.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Sean Dague
On 07/19/2017 10:00 PM, Adrian Turjak wrote:
> The problem is then entirely procedural within a team. Do they rotate
> all keys when one person leaves? Anything less is the same problem. All
> we can do is make rotation less of a pain, but it will still be painful
> no matter what, and depending on the situation the team makes the choice
> of how to handle rotation if at all.
> 
> The sole reason for project level ownership of these application
> credentials is so that a user leaving/being deleted isn't a scramble to
> replace keys, and a team has the option/time to do it if they care about
> the possibility of that person having known the keys (again, not our
> problem, not a security flaw in code). Anything else, pretty much makes
> this feature useless for teams. :(
> 
> Having both options (owned by project vs user) is useful, but the
> 'security issues' are kind of implied by using project owned app creds.
> It's a very useful feature with some 'use at your own risk' attached.

I think this is a pretty good summary.

In many situations the situation of removing people from projects
(termination) will also physically remove their path to said clouds (as
they are beyond the firewall). It's not true with public clouds, but
it's not making the situation any worse, because right now it's shared
passwords to accounts.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Stepping down from oslo-core

2017-07-20 Thread ChangBo Guo
jd,
Thanks for your contribution to oslo. You are not alone on tooz; I would
like to spend some time on it when I have time :-)

2017-07-20 18:24 GMT+08:00 Davanum Srinivas :

> Thanks for all your help @jd !! yes of course (on tooz)
>
> -- Dims
>
> On Thu, Jul 20, 2017 at 3:59 AM, Julien Danjou  wrote:
> > Hi folks,
> >
> > I've not been reviewing or contributing to oslo.* stuff for a while and
> > I don't intend to change that in the near future. It seems only fair to
> > step down.
> >
> > As I'm currently the only maintainer of tooz, I'd still suggest to leave
> > me on tooz-core though. :)
> >
> > Cheers,
> > --
> > Julien Danjou
> > # Free Software hacker
> > # https://julien.danjou.info
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [Mirantis] How to keep ntpd down

2017-07-20 Thread Raja T Nair
Hello All,

Mirantis 7.0

I am trying to keep ntpd down and do a periodic ntpdate against a time
server.
This is because one of the controllers started to drift and services on
that node started to go down.

But it seems that the ntpd daemon comes up after 10 sec every time I stop
it.
Is there a monitor running somewhere which brings it back?

Please guide me on this and also tell me if I am doing something wrong.

Regards,
Raja.

-- 
:^)
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [oslo] Stepping down from oslo-core

2017-07-20 Thread Davanum Srinivas
Thanks for all your help @jd !! yes of course (on tooz)

-- Dims

On Thu, Jul 20, 2017 at 3:59 AM, Julien Danjou  wrote:
> Hi folks,
>
> I've not been reviewing or contributing to oslo.* stuff for a while and
> I don't intend to change that in the near future. It seems only fair to
> step down.
>
> As I'm currently the only maintainer of tooz, I'd still suggest to leave
> me on tooz-core though. :)
>
> Cheers,
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][tempest] New Tempest stable interfaces coming soon

2017-07-20 Thread Andrea Frittoli
We have been working on making more Tempest modules stable, especially
for Tempest plugins.
Once this work is complete, plugins will benefit from backward
compatibility on an extended set of Tempest APIs.

This benefit comes at a small cost though, since we have to make a few
changes to the modules
before they can be declared as stable. In some cases the impact will
be zero; in other cases it should be limited to changing an import
line or adding an __init__ parameter. I wanted to give ample warning
to everyone that the changes are coming, so that people can prepare.
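
To make that concrete, the import-line case will typically look like
this (paths are illustrative until the patches land):

    # Before the move:
    from tempest.services.object_storage import object_client

    # After the move:
    from tempest.lib.services.object_storage import object_client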

Some more details about this work and the specific patches on the
Tempest side are available at [0].

Below is a list of the modules affected and the main changes that will
be coming in the near future.

The following modules will be marked as stable and moved under tempest.lib:
- tempest/services/object_storage: there may be changes to the interface
- tempest/common/dynamic_creds: extra __init__ parameters will be required
- tempest/common/preprov_creds: extra __init__ parameters will be required
- tempest/common/fixed_network

The following modules will be marked stable for plugins:
- tempest/test.py: No change planned
- tempest/clients.py: Client aliases will only be defined when the
corresponding service is marked
  as enabled in config
- tempest/common/credentials_factory: signature changes to a couple of
helpers

Andrea Frittoli (andreaf)

[0] https://etherpad.openstack.org/p/tempest-test-module-stable
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] TROVE: mongodb/cluster grow issue

2017-07-20 Thread magicb...@gmail.com

https://bugs.launchpad.net/trove/+bug/1705412

there you are :)
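
For what it's worth, the error itself reads like a list-vs-string
mix-up in the validation (pure speculation about trove internals, but
it would explain why 'replica' is rejected while listed as allowed):

    # Illustrative reading of the error above, not actual trove code:
    allowed = ['replica', 'query_router']
    value = [u'replica']        # the whole list gets validated...
    print(value in allowed)     # False -> HTTP 400
    print(value[0] in allowed)  # True  -> presumably what was intended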

On 19/07/17 16:15, MCCASLAND, TREVOR wrote:


That’s not right, can you make a bug report here? 
https://bugs.launchpad.net/trove/


*From:*magicb...@gmail.com [mailto:magicb...@gmail.com]
*Sent:* Wednesday, July 19, 2017 7:43 AM
*To:* openstack@lists.openstack.org
*Subject:* [Openstack] TROVE: mongodb/cluster grow issue

Hi

with a mongodb (3.2) cluster deployed with Trove/Ocata, when I try to 
grow the initial cluster, I get some weird error:


> trove  cluster-grow baca7b0f-a68f-4fdd-b1bf-79c3d11d70fb --instance 
"name=extra1,flavor=2x2048x25,volume=1,type=replica,related_to=rs1"
ERROR: b"The value [u'replica'] for key type is invalid. Allowed 
values are ['replica', 'query_router']. (HTTP 400)"


Any ideas?

Thanks,
J.



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-20 Thread Balazs Gibizer
On Wed, Jul 19, 2017 at 3:54 PM, Chris Dent wrote:

On Wed, 19 Jul 2017, Balazs Gibizer wrote:

I added more info to the bug report and the review, as it seems the
test is flaky.


(Reflecting some conversation gibi and I have had in IRC)

I've made a gabbi-based replication of the desired functionality. It
also flaps, with a >50% failure rate:
https://review.openstack.org/#/c/485209/

Sorry, I copy-pasted the wrong link; the correct one is
https://bugs.launchpad.net/nova/+bug/1705231


This has been updated (by gibi) to show that the generated SQL is
different between the failure and success cases.


Thanks Jay for proposing the fix 
https://review.openstack.org/#/c/485088/ . It works for me both in the 
functional env and in devstack.


cheers,
gibi





--
Chris Dent  ┬──┬◡ノ(° -°ノ)   
https://anticdent.org/

freenode: cdent tw: @anticdent



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Stepping down from oslo-core

2017-07-20 Thread Julien Danjou
Hi folks,

I've not been reviewing or contributing to oslo.* stuff for a while and
I don't intend to change that in the near future. It seems only fair to
step down.

As I'm currently the only maintainer of tooz, I'd suggest leaving me on
tooz-core, though. :)

Cheers,
-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-20 Thread Dulko, Michal
On Wed, 2017-07-19 at 19:24 +, Jeremy Stanley wrote:
> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
> 
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
> 
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb
> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr
> 
> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
> 
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.
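
A minimal sketch of the kind of rudimentary analysis described above,
assuming eavesdrop.openstack.org serves per-day plain-text logs at the URL
pattern below and that the bot nicks are known (both are assumptions made
for illustration):

    import re
    import requests

    BOTS = {'openstack', 'openstackgerrit', 'openstackstatus'}
    # Assumed log URL pattern; %23 is the URL-encoded '#' channel prefix.
    LOG_URL = ('http://eavesdrop.openstack.org/irclogs/'
               '%23{chan}/%23{chan}.{day}.log')

    def human_lines(chan, days):
        """Count lines said by anyone besides a bot."""
        count = 0
        for day in days:  # e.g. '2017-07-19'
            resp = requests.get(LOG_URL.format(chan=chan, day=day))
            if resp.status_code != 200:
                continue
            for line in resp.text.splitlines():
                # Log lines look roughly like:
                # '2017-07-19T12:34:56 <nick> message'
                match = re.search(r'<([^>]+)>', line)
                if match and match.group(1) not in BOTS:
                    count += 1
        return count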

Would it be possible to *add* the #openstack-helm channel during those
changes? I have a review doing that [1], which has been pending for some
time now, and #openstack-helm is currently logged only by a chatbot from
the k8s Slack. It's a pretty active channel, by the way.

Thanks,
Michal

[1] https://review.openstack.org/#/c/455742/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging][karbor]

2017-07-20 Thread Chandan kumar
Hello Chen,

On Mon, Jul 17, 2017 at 12:41 PM, Chen Ying  wrote:
> Hi Chandan,
>
> Thank you for your work on packaging abclient.
>

Both packages are now available in CBS, and
https://review.rdoproject.org/r/#/c/7711/ got merged in RDO.

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Tag stable/mitaka as eol for Monasca Ceilometer

2017-07-20 Thread Andreas Jaeger
On 2017-07-19 20:29, Ashwin Agate wrote:
> Hi,
> 
> Can you please delete the stable/mitaka branch for the
> openstack/monasca-ceilometer project and create a mitaka-eol tag?
> 
> Please see https://review.openstack.org/#/c/484494/6 for some
> discussion with Andreas Jaeger on this subject.

Roland, as PTL, can you confirm, please?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [Openstack] Looking for a case study

2017-07-20 Thread Irum Rauf
Hi,
Thank you, Rahil and Gary. These are good ideas and I will look into them. I
am actually looking for an already-implemented OpenStack system that I can
build a wrapper on, invoke, and study. I have searched for several case
studies on the OpenStack site, but I could not find one with a proper API
that I could invoke with curl and full specifications.

I was just wondering if the community has developed such a case study for
research purposes, or if someone is willing to provide their implemented
OpenStack system (with the features mentioned earlier) for research purposes.
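
For reference, the kind of Keystone-protected call being described maps
onto Keystone's v3 token API; a minimal sketch in Python (the endpoint and
credentials are placeholders, not from any particular deployment):

    import requests

    KEYSTONE = 'http://controller:5000/v3'  # placeholder endpoint

    auth_request = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': 'demo',
                        'domain': {'id': 'default'},
                        'password': 'secret',
                    },
                },
            },
            'scope': {
                'project': {'name': 'demo', 'domain': {'id': 'default'}},
            },
        },
    }

    resp = requests.post(KEYSTONE + '/auth/tokens', json=auth_request)
    token = resp.headers['X-Subject-Token']

    # The token then authenticates subsequent service API calls, e.g.:
    # requests.get(nova_url + '/servers', headers={'X-Auth-Token': token})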

Many Thanks,
Irum


> On 19 Jul 2017, at 8.12, Gary Kotton  wrote:
> 
> Hi,
> As part of trying to help people use OpenStack, I encouraged a friend of
> mine who was teaching a course at a university to have all of his students
> do their projects on OpenStack instances. Prior to each lab he would prep a
> snapshot of the instance, and the students would continue from it. It was
> a great success.
> Each student was a tenant and had their own isolated networks etc. It was fun 
> and in the end a very successful endeavor.
> Thanks
> Gary
>  
> From: Rahil Gandotra 
> Reply-To: "rahil.gando...@colorado.edu" 
> Date: Wednesday, July 19, 2017 at 4:47 AM
> To: Irum Rauf , "openstack@lists.openstack.org" 
> 
> Subject: Re: [Openstack] Looking for a case study
>  
> How about implementing a campus lab network for students?
> You could use multiple servers to spin up controller, compute, and network
> nodes, and then segregate a user space for each student, using Keystone for
> authorization. Additionally, you could maintain the state of the quotas
> allocated to each student.
>  
> - Rahil
>  
>  
>  
> On Tue, Jul 18, 2017 at 3:34 PM, Irum Rauf wrote:
>>  
>> Hello all,
>> I am a researcher and I am looking for a case study with OpenStack. I have
>> installed OpenStack with DevStack and have played with it already. Now I am
>> looking for a real-world case study with OpenStack to apply my approach to,
>> but I have not been able to find one. I am looking for:
>>  
>> - A case study developed with OpenStack that I can invoke via its API
>> through curl.
>>  
>> - The case study should involve Keystone for authentication and
>> authorisation of its resources.
>>  
>> - The case study should be stateful.
>>  
>> Any clues in this regard would be very helpful.
>>  
>> Many Thanks.
>>  
>> 
> 
> 
>  
> -- 
> Rahil Gandotra
> Graduate Student
> Interdisciplinary Telecom Program
> University of Colorado Boulder

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack