Re: [openstack-dev] [all][release] sphinx 1.6.1 behavior changes triggering job failures

2017-05-17 Thread Julien Danjou
On Tue, May 16 2017, Doug Hellmann wrote:

> https://bugs.launchpad.net/pbr/+bug/1691129 describes a traceback
> produced when building the developer documentation through pbr.
> 
> https://bugs.launchpad.net/reno/+bug/1691224 describes a change where
> Sphinx now treats log messages at WARNING or ERROR level as reasons to
> abort the build when strict mode is enabled.
>
> I have a patch up to the global requirements list to block 1.6.1 for
> builds following g-r and constraints:
> https://review.openstack.org/#/c/465135/
>
> Many of our doc builds do not use constraints, so if your doc build
> fails you will want to apply the same change locally.
>
> There's a patch in review for the reno issue. It would be great if
> someone had time to look into a fix for pbr to make it work with
> older and newer Sphinx.
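
(A rough sketch of the kind of local change meant above, assuming your doc
dependencies live in the project's test-requirements.txt -- the exact bounds
should follow whatever lands in global-requirements:

  # adjust the existing sphinx entry to something like:
  #   sphinx>=1.5.1,!=1.6.1   # blocked until pbr/reno are fixed
  tox -r -e docs              # rebuild the docs venv so the exclusion takes effect
)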

I'll look into it.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-17 Thread John Garbutt
On 16 May 2017 at 16:08, Doug Hellmann  wrote:
> Excerpts from Sean Dague's message of 2017-05-16 10:49:54 -0400:
>> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
>> > Folks,
>> >
>> > See $TITLE :)
>> >
>> > Thanks,
>> > Dims
>>
>> I'd rather avoid #openstack-tc and just use #openstack-dev.
>> #openstack-dev is pretty low used environment (compared to like
>> #openstack-infra or #openstack-nova). I've personally been trying to
>> make it my go to way to hit up members of other teams whenever instead
>> of diving into project specific channels, because typically it means we
>> can get a broader conversation around the item in question.
>>
>> Our fragmentation of shared understanding on many issues is definitely
>> exacerbated by many project channels, and the assumption that people
>> need to watch 20+ different channels, with different context, to stay up
>> on things.
>>
>> I would love us to have the problem that too many interesting topics are
>> being discussed in #openstack-dev that we feel the need to parallelize
>> them with a different channel. But I would say we should wait until
>> that's actually a problem.
>>
>> -Sean
>
> +1, let's start with just the -dev channel and see if volume becomes
> an issue.

+1 my preference is to just start with the -dev channel, and see how we go.

+1 all the comments about the history and the discussion needing to be
summarised via ML/gerrit anyway. We can link to the logs of the -dev
channel for the "raw" discussion.

Thanks,
johnthetubaguy



Re: [openstack-dev] [tripleo] Future of the tripleo-quickstart-utils project

2017-05-17 Thread Bogdan Dobrelya
On 17.05.2017 4:01, Emilien Macchi wrote:
> Hey Raoul,
> 
> Thanks for putting this up in the ML. Replying inline:
> 
> On Tue, May 16, 2017 at 4:59 PM, Raoul Scarazzini  wrote:
>> Hi everybody,
>> as discussed in today's TripleO meeting [1] here's a brief recap of the
>> tripleo-quickstart-utils topic.
>>
>> ### TL;DR ###
>>
>> We are trying to understand whether it is a good idea to put the contents
>> of [2] somewhere else for wider exposure.
>>
>> ### Long version ###
>>
>> The tripleo-quickstart-utils project started after the ha-validation stuff
>> was split out of the tripleo-quickstart-extras repo [3],

It is a little amusing, as it seems at odds with the
"Validations before upgrades and updates" effort. Shall we just move
tripleo-quickstart-utils back to extras, or into the validations repo, and
have both issues solved? :)

>> basically because the specificity of the topic was creating a shortage of
>> reviewers.
>> Today this repository has three roles:
>>
>> 1 - validate-ha: to run HA-specific tests depending on the version. This
>> role relies on a micro bash framework named ha-test-suite available in
>> the same repo, under the utils directory;
> 
> I've looked at 
> https://github.com/redhat-openstack/tripleo-quickstart-utils/blob/master/roles/validate-ha/tasks/main.yml
> and I see it's basically a set of tasks that validates that HA is
> working well on the overcloud.
> Aside from little things that might be adjusted (calling bash scripts
> from Ansible), I think this role would be a good fit with

A side note: this peculiar way of using Ansible is a deliberate move for
automatic documenting of the reproduction steps. Those jinja-templated
scripts can just as well be used outside of the Ansible playbooks. It looked
odd to me as well, but I tend to agree that it is an interesting solution
for automagic documentation builds.

> tripleo-validations projects, which is "a collection of Ansible
> playbooks to detect and report potential issues during TripleO
> deployments".
> 
>> 2 - stonith-config: to configure STONITH inside an HA env;
> 
> IMHO (and tell me if I'm wrong), this role is something you want to
> apply at Day 1 during your deployment, right?
> If that's the case, I think the playbooks could really live in THT
> where we already have automation to deploy & configure Pacemaker with
> Heat and Puppet.
> Some tasks might be useful for the upgrade operations but we also have
> upgrade_tasks that use Ansible, so possibly easily re-usable.
> 
> If it's more Day 2 operations, then we should investigate by creating
> a new repository for tripleo with some playbooks useful for Day 2, but
> AFAIK we've managed to avoid that until now.
> 
>> 3 - instance-ha: to configure high availability for instances on the
>> compute nodes;
> 
> Same as stonith. It sounds like some tasks done during initial
> deployment to enable instance HA and then during upgrade to disable /
> enable configurations. I think it could also be done by THT like
> stonith configuration.
> 
>> Despite the name, this is not just a tripleo-quickstart-related
>> project; it is also usable on every TripleO-deployed environment, and is
>> meant to support all the TripleO OpenStack versions from Kilo to Pike
>> for all the roles it provides;
> 
> Great, it means we could easily re-use the bits, modulo some technical
> adjustments.
> 
>> There's also documentation related to the Multi Virtual Undercloud project [4]
>> that explains how to have more than one virtual Undercloud on a physical
>> machine to manage more environments from the same place.
> 
> I would suggest moving it to tripleo-docs, so we have a single place for docs.
> 
>> That's basically the meaning of the word "utils" in the name of the repo.
>>
>> What I would like to understand is if you see this as something useful
>> that can be placed somewhere closer to the upstream TripleO project, to
>> reach a wider audience for further contribution/evolution.
> Versus: IIRC, everything in this repo could be moved to existing projects in
> TripleO that are already productized, so it would require little effort.
> 
>> ###
>>
>> [1]
>> http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-05-16-14.00.log.html
>> [2] https://github.com/redhat-openstack/tripleo-quickstart-utils
>> [3]
>> https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/validate-ha
>> [4]
>> https://github.com/redhat-openstack/tripleo-quickstart-utils/tree/master/docs/multi-virtual-undercloud
>>
>> ###
>>
>> Thanks for your time,
> 
> Thanks for bringing this up!
> 
>> --
>> Raoul Scarazzini
>> ra...@redhat.com
>>
> 
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando


Re: [openstack-dev] Suspected SPAM - Re: [vitrage] about  "is_admin" in ctx

2017-05-17 Thread dong.wenjuan
Hi Alexey,

Currently we use 'is_admin' as 'is_admin_project' [1], right?

I think 'is_admin_project' represents the tenant and is not related to the user.

So the issue is why 'is_admin_project' always returns 'True' for all of the
users and tenants.

What about a user which is not 'admin' but has the 'admin' role?

Should a user with the 'admin' role see all tenants?


[1] 
https://github.com/openstack/vitrage/blob/master/vitrage/api_handler/apis/base.py#L94

Thanks~

BR,

dwj

Original Mail



Sender:  <alexey.w...@nokia.com>
To:  <openstack-dev@lists.openstack.org>
Date: 2017/05/15 22:49
Subject: Re: [openstack-dev]Suspected SPAM - Re:  [vitrage] about  "is_admin" 
in ctx

Hi Wenjuan,

Sorry it took me so long to answer due to the Boston Summit.

After making some more checks in order to make sure, the results are the
following:

1.   The context that we use has 2 properties regarding admin (is_admin,
     is_admin_project).

2.   The is_admin property indicates whether the user that made the inquiry
     is the admin or not. So the only way it is True is if the user is admin.

3.   I thought is_admin_project would represent the tenant of the user, but
     for all of the users and tenants that I have used, it always returned
     True.

4.   Due to that, I have decided to use the is_admin property in the context
     to indicate whether the user can see all-tenants or not.

5.   This is not a perfect solution, because users such as nova/cinder (all
     project names) also seem to be able to see the admin tab. In our case,
     although in the UI we have the admin tab for those users, the data we
     will show in the vitrage tab is not that of all the tenants but only of
     this specific tenant.

Alexey

From: Weyl, Alexey (Nokia - IL/Kfar Sava) [mailto:alexey.w...@nokia.com] 
 Sent: Tuesday, April 25, 2017 3:10 PM
 To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
 Subject: Suspected SPAM - Re: [openstack-dev] [vitrage] about  "is_admin" in 
ctx

Hi Wenjuan,

This is a good question, I need to check it a bit more thoroughly.

It’s just that at the moment we are preparing for the Boston Openstack Summit
and thus it will take me a bit more time to answer that.

Sorry for the delay.

Alexey ☺



From: dong.wenj...@zte.com.cn [mailto:dong.wenj...@zte.com.cn] 
 Sent: Friday, April 21, 2017 11:08 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [vitrage] about  "is_admin" in ctx




 

Hi all,

I'm a little confused about the "is_admin" in ctx.

From the hook
(https://github.com/openstack/vitrage/blob/master/vitrage/api/hooks.py#L73),
"is_admin" means the admin user.

But we define the macro as "admin project"
(https://github.com/openstack/vitrage/blob/master/vitrage/api_handler/apis/base.py#L94).
But in my opinion, it should be the admin role. Correct me if I'm wrong :).

 

BR,

dwj


Re: [openstack-dev] [all][release] sphinx 1.6.1 behavior changes triggering job failures

2017-05-17 Thread Julien Danjou
On Wed, May 17 2017, Julien Danjou wrote:

>> There's a patch in review for the reno issue. It would be great if
>> someone had time to look into a fix for pbr to make it work with
>> older and newer Sphinx.
>
> I'll look into it.

Here's the pbr fix:

  https://review.openstack.org/#/c/465489/

If we can merge and release that ASAP that'd be great.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Thierry Carrez
Sean Dague wrote:
> On 05/16/2017 02:39 PM, Doug Hellmann wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
>>> One thing I struggle with is...well...how does *not having* built
>>> containers help with that? If your company has a full-time security
>>> team, they can check our containers prior to deployment. If your
>>> company doesn't, then building locally will be subject to the same risks
>>> as downloading from dockerhub. The difference is that dockerhub containers
>>> were tested in our CI to the extent that our CI allows. Whether
>>> or not you have your own security team, local CI, or staging env, that
>>> will be just a little bit of testing on top of what you get for
>>> free, and I think that's value enough for users to push for this.
>>
>> The benefit of not building images ourselves is that we are clearly
>> communicating that the responsibility for maintaining the images
>> falls on whoever *does* build them. There can then be no expectation in any
>> user's mind that the community somehow needs to maintain the content
>> of the images for them, just because we're publishing new images
>> at some regular cadence.
> 
> +1. It is really easy to think that saying "don't use this in
> production" prevents people from using it in production. See: User
> Survey 2017 and the number of folks reporting DevStack as their
> production deployment tool.
> 
> We need to not only manage artifacts, but expectations. And with all the
> confusion of projects in the openstack git namespace being officially
> blessed openstack projects over the past few years, I can't imagine
> people not thinking that openstack infra generated content in dockerhub
> is officially supported content.

I totally agree, although I think daily rebuilds / per-commit rebuilds,
together with a properly named repository, might limit expectations
enough to remove the "supported" part of your sentence.

As a parallel, we refresh per-commit a Nova master source code tarball
(nova-master.tar.gz). If a vulnerability is introduced in master but was
never "released" with a version number, we silently fix it in master (no
OSSA advisory published). People tracking master are supposed to be
continuously tracking master.

Back to container image world, if we refresh those images daily and they
are not versioned or archived (basically you can only use the latest and
can't really access past dailies), I think we'd be in a similar situation ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-17 Thread Bogdan Dobrelya
On 16.05.2017 17:19, Dnyaneshwar Pawar wrote:
> Hi Steve,
> 
> Thanks for your reply.
> 
> Out of interest, where did you find OS::TripleO::ControllerServer, do we
> have a mistake in our docs somewhere?
> 
> I referred below template. 
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/controller-role.yaml
> 
> resources:
> 
>   Controller:
> type: OS::TripleO::ControllerServer
> metadata:
>   os-collect-config:
> 
> 
> OS::Heat::SoftwareDeployment is referenced instead of
> OS::Heat::SoftwareDeployments in the following places.
> 
> 1. 
> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/11/pdf/partner_integration/Red_Hat_OpenStack_Platform-11-Partner_Integration-en-US.pdf
>Section 2.1.4. TripleO and TripleO Heat Templates (page #13 in pdf)
>Section 5.4. CUSTOMIZING CONFIGURATION BEFORE OVERCLOUD CONFIGURATION 
> (Page #32 in pdf)
> 2. http://hardysteven.blogspot.in/2015/05/heat-softwareconfig-resources.html
>Section: Heat SoftwareConfig resources
>Section: SoftwareDeployment HOT template definition
> 3. 
> http://hardysteven.blogspot.in/2015/05/tripleo-heat-templates-part-2-node.html
> Section: Initial deployment flow, step by step
> 
> 
> Thanks and Regards,
> Dnyaneshwar
> 
> 
> On 5/16/17, 4:40 PM, "Steven Hardy"  wrote:
> 
> On Tue, May 16, 2017 at 04:33:33AM +, Dnyaneshwar Pawar wrote:
> > Hi TripleO team,
> > 
> > I am trying to apply custom configuration to an existing overcloud
> > (using the openstack overcloud deploy command).
> > Though there is no error, the configuration is not applied to the
> > overcloud.
> > Am I missing anything here?
> > http://paste.openstack.org/show/609619/
> 
> In your paste you have the resource_registry like this:
> 
> OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml
> 
> The problem is OS::TripleO::ControllerServer isn't a resource type we use,
> e.g it's not a valid hook to enable additional node configuration.
> 
> Instead try something like this:
> 
> OS::TripleO::NodeExtraConfigPost: /home/stack/test/heat3_ocata.yaml
> 
> Which will run the script on all nodes, as documented here:
> 
> 
> https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html
> 
> Out of interest, where did you find OS::TripleO::ControllerServer, do we
> have a mistake in our docs somewhere?
> 
> Also in your template the type: OS::Heat::SoftwareDeployment should be
> either type: OS::Heat::SoftwareDeployments (as in the docs) or type:
> OS::Heat::SoftwareDeploymentGroup (the newer name for SoftwareDeployments,
> we should switch the docs to that..).
> 
> Hope that helps!

An interesting topic, a popular use case! Let's please fix the confusing
user experience [0]; feel free to update it with the SoftwareDeployment(s)
nuances as well.

[0] https://review.openstack.org/#/c/465498/
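
(For illustration, a minimal sketch of the working setup Steve describes --
the environment file name below is made up, and heat3_ocata.yaml is the
template from the paste:

  cat > /home/stack/test/custom-config-env.yaml <<'EOF'
  resource_registry:
    OS::TripleO::NodeExtraConfigPost: /home/stack/test/heat3_ocata.yaml
  EOF

  openstack overcloud deploy --templates \
      -e /home/stack/test/custom-config-env.yaml
)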

> 
> -- 
> Steve Hardy
> Red Hat Engineering, Cloud
> 
> 
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] Suspected SPAM - Re: [vitrage] about  "is_admin" in ctx

2017-05-17 Thread Weyl, Alexey (Nokia - IL/Kfar Sava)
Hi,

Yes, the 'is_admin_project' represents the tenant.

I have tried many different users which are part of the admin, alt_demo,
nova, vitrage and new_tenant projects that I have created, and they all
returned is_admin_project=True.

At the moment, as I said, only the admin user will be able to see
all-tenants, although this is not correct, as there are other users that
have the permissions to see all-tenants as well.

We have added a new tab in horizon under the admin tab, where those who have
permission to see that tab can see the Vitrage all-tenants data.

In our case only the admin will actually see all the entities at the moment.

Alexey

From: dong.wenj...@zte.com.cn [mailto:dong.wenj...@zte.com.cn]
Sent: Wednesday, May 17, 2017 12:24 PM
To: Weyl, Alexey (Nokia - IL/Kfar Sava) 
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev]Suspected SPAM - Re: [vitrage] about  "is_admin" in 
ctx


Hi Alexey,

Currently we use 'is_admin' as 'is_admin_project' [1], right?

I think 'is_admin_project' represents the tenant and is not related to the user.

So the issue is why 'is_admin_project' always returns 'True' for all of the
users and tenants.

What about a user which is not 'admin' but has the 'admin' role?

Should a user with the 'admin' role see all tenants?


[1] 
https://github.com/openstack/vitrage/blob/master/vitrage/api_handler/apis/base.py#L94



Thanks~



BR,

dwj




Original Mail
Sender:  <alexey.w...@nokia.com>;
To:  <openstack-dev@lists.openstack.org>;
Date: 2017/05/15 22:49
Subject: Re: [openstack-dev]Suspected SPAM - Re:  [vitrage] about  "is_admin" 
in ctx


Wenjuan,

Sorry it took me so long to answer due to the Boston Summit.

After making some more checks in order to make sure, the results are the
following:

1.  The context that we use has 2 properties regarding admin (is_admin,
    is_admin_project).

2.  The is_admin property indicates whether the user that made the inquiry
    is the admin or not. So the only way it is True is if the user is admin.

3.  I thought is_admin_project would represent the tenant of the user, but
    for all of the users and tenants that I have used, it always returned
    True.

4.  Due to that, I have decided to use the is_admin property in the context
    to indicate whether the user can see all-tenants or not.

5.  This is not a perfect solution, because users such as nova/cinder (all
    project names) also seem to be able to see the admin tab. In our case,
    although in the UI we have the admin tab for those users, the data we
    will show in the vitrage tab is not that of all the tenants but only of
    this specific tenant.

Alexey

From: Weyl, Alexey (Nokia - IL/Kfar Sava) [mailto:alexey.w...@nokia.com]
Sent: Tuesday, April 25, 2017 3:10 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Suspected SPAM - Re: [openstack-dev] [vitrage] about  "is_admin" in ctx

Hi Wenjuan,

This is a good question, I need to check it a bit more thoroughly.

It’s just that at the moment we are preparing for the Boston Openstack Summit 
and thus it will take me a bit more time to answer that.

Sorry for the delay.

Alexey ☺

From: dong.wenj...@zte.com.cn 
[mailto:dong.wenj...@zte.com.cn]
Sent: Friday, April 21, 2017 11:08 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [vitrage] about  "is_admin" in ctx


Hi all,

I'm a little confused about the "is_admin" in ctx.

From the hook
(https://github.com/openstack/vitrage/blob/master/vitrage/api/hooks.py#L73),
"is_admin" means the admin user.

But we define the macro as "admin project"
(https://github.com/openstack/vitrage/blob/master/vitrage/api_handler/apis/base.py#L94).
But in my opinion, it should be the admin role. Correct me if I'm wrong :).



BR,

dwj












Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Chris Dent

On Wed, 17 May 2017, Thierry Carrez wrote:


> Back to container image world, if we refresh those images daily and they
> are not versioned or archived (basically you can only use the latest and
> can't really access past dailies), I think we'd be in a similar situation ?


Yes, this.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-17 Thread Sean Dague
On 05/16/2017 07:34 PM, Adrian Turjak wrote:

> Anyway that aside, I'm sold on API keys as a concept in this case
> provided they are project owned rather than user owned, I just don't
> think we should make them too unique, and we shouldn't be giving them a
> unique policy system because that way madness lies.
> 
> Policy is already a complicated system, lets not have to maintain two
> systems. Any policy system we make for API keys ought to be built on top
> of the policy systems we end up with using roles. An explosion of roles
> will happen with dynamic policy anyway, and yes sadly it will be a DSL
> for some clouds, but no sensible cloud operator is going to allow a
> separate policy system in for API keys unless they can control it. I
> don't think we can solve the "all clouds have the same policy for API
> keys" problem and I'd suggest putting that in the "too hard for now
> box". Thus we do your step 1, and leave step 2 until later when we have
> a better idea of how to do it without pissing off a lot of operators,
> breaking standard policy, or maintaining an entirely separate policy system.

This is definitely steps. And I agree we do step 1 to get us at least
revokable keys. That's Pike (hopefully), and then figure out the path
through step 2.

The thing about the policy system today is that it's designed for operators.
Honestly, the only way you really know what policy is really doing is if
you read the source code of OpenStack as well. That is very, very far
from a declarative way for a user to further drop privileges. If we went
straight forward from here we're increasing the audience for this by a
factor of 1000+, with documentation, guarantees that policy points don't
ever change. No one has been thinking about microversioning on a policy
front, for instance. It now becomes part of a much stricter contract,
with a much wider audience.

I think the user experience of API use is going to be really bad if we
have to teach the world about our policy names. They are non mnemonic
for people familiar with the API. Even just in building up testing in
the Nova tree over the years mistakes have been made because it wasn't
super clear what routes the policies in question were modifying. Nova
did a giant replacement of all the policy names 2 cycles ago because of
that. It's better now, but still not what I'd want to thrust on people
that don't have at least 20% of the Nova source tree in their head.

We also need to realize there are going to be 2 levels of permissions
here. There is going to be what the operator allows (which is policy +
roles they have built up on there side), and then what the user allows
in their API Key. I would imagine that an API Key created by a user
inherits any roles that user has (the API Key is still owned by a
project). The user at any time can change the allowed routes on the key.
The admin at any time can change the role / policy structure. *both*
have to be checked on operations, and only if both succeed do we move
forward.

I think another question where we're clearly in a different space, is if
we think about granting an API Key user the ability to create a server.
In a classical role/policy move, that would require not just (compute,
"os_compute_api:servers:create"), but also (image, "get_image"), (image,
"download_image"), (network, "get_port"), (network, "create_port"), and
possibly much more. Missing one of these policies means a deep late
fail, which is not debuggable unless you have the source code in front of
you. And not only requires knowledge of the OpenStack API, but deep
knowledge of the particular deployment, because the permissions needed
around networking might be different on different clouds.

Clearly, that's not the right experience for someone that just wants to
write a cloud native application that works on multiple clouds.

So we definitely are already doing something a bit different, that is
going to need to not be evaluated everywhere that policy is currently
evaluated, but only *on the initial inbound request*. The user would
express this as (region1, compute, /servers, POST), which means that's
the API call they want this API Key to be able to make. Subsequent
requests wrapped in service tokens bypass checking API Key permissions.
The role system is still in play, keeping the API Key in the box the
operator wanted to put it in.
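
(Purely as an illustration of that user-facing shape -- nothing like this API
exists today, and every name below is made up -- the creation request for such
a key could look something like:

  curl -s -X POST https://keystone.example.com/v3/HYPOTHETICAL_API_KEYS \
      -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" \
      -d '{"api_key": {"name": "deploy-bot",
                       "allowed": [{"region": "region1",
                                    "service": "compute",
                                    "path": "/servers",
                                    "method": "POST"}]}}'
)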

Given that these systems are going to act differently, and at different
times, I don't actually see it being a path to madness. I actually see
it as less confusing to manage correctly in the code, because the two
things won't get confused and the wrong permissions checks won't get made. I
totally agree that policy today is far too complicated, and I fear
making it a new related, but very different task, way more than building
a different declarative approach that is easier for users to get right.

But... that being said, all of this part is Queens. And regardless of
how this part falls out, the Pike work to just provision these API

Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-17 Thread Davanum Srinivas
+1 to "my preference is to just start with the -dev channel"

Thanks,
Dims

On Wed, May 17, 2017 at 4:23 AM, John Garbutt  wrote:
> On 16 May 2017 at 16:08, Doug Hellmann  wrote:
>> Excerpts from Sean Dague's message of 2017-05-16 10:49:54 -0400:
>>> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
>>> > Folks,
>>> >
>>> > See $TITLE :)
>>> >
>>> > Thanks,
>>> > Dims
>>>
>>> I'd rather avoid #openstack-tc and just use #openstack-dev.
>>> #openstack-dev is pretty low used environment (compared to like
>>> #openstack-infra or #openstack-nova). I've personally been trying to
>>> make it my go to way to hit up members of other teams whenever instead
>>> of diving into project specific channels, because typically it means we
>>> can get a broader conversation around the item in question.
>>>
>>> Our fragmentation of shared understanding on many issues is definitely
>>> exacerbated by many project channels, and the assumption that people
>>> need to watch 20+ different channels, with different context, to stay up
>>> on things.
>>>
>>> I would love us to have the problem that too many interesting topics are
>>> being discussed in #openstack-dev that we feel the need to parallelize
>>> them with a different channel. But I would say we should wait until
>>> that's actually a problem.
>>>
>>> -Sean
>>
>> +1, let's start with just the -dev channel and see if volume becomes
>> an issue.
>
> +1 my preference is to just start with the -dev channel, and see how we go.
>
> +1 all the comments about the history and the discussion needing to be
> summarised via ML/gerrit anyway. We can link to the logs of the -dev
> channel for the "raw" discussion.
>
> Thanks,
> johnthetubaguy
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-17 Thread Sean Dague
On 05/16/2017 01:28 PM, John Dickinson wrote:

> I'm not sure the best place to respond (mailing list or gerrit), so
> I'll write this up and post it to both places.
> 
> I think the idea behind this proposal is great. It has the potential
> to bring a lot of benefit to users who are tracing a request across
> many different services, in part by making it easy to search in an
> indexing system like ELK.
> 
> The current proposal has some elements that won't work with the way
> Swift currently solves this problem. This is mostly due to the
> proposed uuid-ish check for validation. However, the Swift solution
> has a few aspects that I believe would be very helpful for the entire
> community.
> 
> NB: Swift returns both an |X-OpenStack-Request-ID| and an |X-Trans-ID|
> header in every response. The |X-Trans-ID| was implemented before the
> OpenStack request ID was proposed, and so we've kept the |X-Trans-ID| so
> as not to break existing clients. The value of |X-OpenStack-Request-ID|
> in any response from Swift is simply a mirror of the |X-Trans-ID| value.
> 
> The request id in Swift is made up of a few parts:
> 
> |X-Openstack-Request-Id: txbea0071df2b0465082501-00591b3077saio-extraextra |
> 
> In the code, this in generated from:
> 
> |'tx%s-%010x%s' % (uuid.uuid4().hex[:21], time.time(),
> quote(trans_id_suffix)) |
> 
> ...meaning that there are three parts to the request id. Let's take
> each in turn.
> 
> The first part always starts with 'tx' (originally from the
> "transaction id") and then is the first 21 hex characters of a uuid4.
> The truncation is to limit the overall length of the value.
> 
> The second part is the hex value of the current time, padded to 10
> characters.
> 
> Finally, the third part is the quoted suffix, and it defaults to the
> empty string. The suffix itself can be made of two parts. The first is
> configured in the Swift proxy server itself (ie the service that does
> the logging) via the |trans_id_suffix| config. This allows an operator
> to set a different suffix for each API endpoint or each region or each
> cluster in order to help distinguish them in logs. For example, if a
> deployment with multiple clusters uses centralized log aggregation, a
> different trans_id_suffix value for each cluster makes it very easy to
> distinguish between the clusters' logs.
> 
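
(As an aside, the middle field really is just the request time, so it can be
decoded by hand; taking the PUT example further down and assuming a stock
python on the host:

  python -c 'print(int("591b3110", 16))'   # 1494954256 -> Tue, 16 May 2017 17:04:16 GMT
)
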
> The last part of the suffix is settable via the end-user (ie the one
> calling the API). When the "X-Trans-ID-Extra" header is present in a request,
> its value is quoted and appended to the final transaction id value.
> 
> Here's a curl example that shows this all put together. You can see
> that I have my Swift-All-In-One dev environment configured to use the
> "saio" value for the |trans_id_suffix| value:
> 
>   $ curl -i -H "X-Auth-Token: AUTH_tk1bab51ce5e1d4e2bb6c54bccf59433ee" \
>       http://saio:8080/v1/AUTH_test/c/o --data-binary 1234 -XPUT \
>       -H "x-trans-id-extra: extraextra"
>   HTTP/1.1 201 Created
>   Last-Modified: Tue, 16 May 2017 17:04:17 GMT
>   Content-Length: 0
>   Etag: 81dc9bdb52d04dc20036dbd8313ed055
>   Content-Type: text/html; charset=UTF-8
>   X-Trans-Id: txf766cc02859c450eb4aef-00591b3110saio-extraextra
>   X-Openstack-Request-Id: txf766cc02859c450eb4aef-00591b3110saio-extraextra
>   Date: Tue, 16 May 2017 17:04:16 GMT
>
>   $ curl -i -H "X-Auth-Token: AUTH_tk1bab51ce5e1d4e2bb6c54bccf59433ee" \
>       http://saio:8080/v1/AUTH_test/c/o \
>       -H "x-trans-id-extra: moredifferentextra"
>   HTTP/1.1 200 OK
>   Content-Length: 4
>   Accept-Ranges: bytes
>   Last-Modified: Tue, 16 May 2017 17:04:17 GMT
>   Etag: 81dc9bdb52d04dc20036dbd8313ed055
>   X-Timestamp: 1494954256.25977
>   Content-Type: application/x-www-form-urlencoded
>   X-Trans-Id: tx4013173098b348b6b7952-00591b34d2saio-moredifferentextra
>   X-Openstack-Request-Id: tx4013173098b348b6b7952-00591b34d2saio-moredifferentextra
>   Date: Tue, 16 May 2017 17:20:18 GMT
>   1234
>
>   $ curl -i -H "X-Auth-Token: AUTH_tk1bab51ce5e1d4e2bb6c54bccf59433ee" \
>       http://saio:8080/v1/AUTH_test/c/o
>   HTTP/1.1 200 OK
>   Content-Length: 4
>   Accept-Ranges: bytes
>   Last-Modified: Tue, 16 May 2017 17:04:17 GMT
>   Etag: 81dc9bdb52d04dc20036dbd8313ed055
>   X-Timestamp: 1494954256.25977
>   Content-Type: application/x-www-form-urlencoded
>   X-Trans-Id: txf66856a06d7547c4ad79d-00591b3527saio
>   X-Openstack-Request-Id: txf66856a06d7547c4ad79d-00591b3527saio
>   Date: Tue, 16 May 2017 17:21:43 GMT
>   1234
> 
> The |X-Trans-ID-Extra| header, specifically, sounds very similar to
> what is being proposed to solve the cross-project request IDs. To
> quote from the commit in Swift that added this:
> 
> |The value of the X-Trans-Id-Extra header on the request (if any) will
> now be appended to the transaction ID. This lets users put their own
> information into transaction IDs. For example, Glance folks upload
> images as large objects, so they'd like to be able to tie together all
> the segment PUTs and the manifest PUT with some operation ID in the
> logs. This would let them pass in that operation ID as X-Trans-Id-Extra,
> and then when things went wrong, it'd be

Re: [openstack-dev] [horizon][api][docs] Feedback requested on proposed formatting change to API docs

2017-05-17 Thread Chris Dent

On Tue, 16 May 2017, Monty Taylor wrote:


> The questions:
>
> - Does this help, hurt, no difference?
> - servers[].name - servers is a list, containing objects with a name field.
>   Good or bad?
> - servers[].addresses.$network-name - addresses is an object and the keys of
>   the object are the name of the network in question.


I sympathize with the motivation, but for me these don't help: they
add noise (more symbols) and require me to understand yet more syntax.

This is probably because I tend to look at the representations of the
request or response, see a key name and wonder "what is this?" and
then look for it in the table, not the other way round. Thus I want
the key name to be visually greppable without extra goo.

I suspect, however, that I'm not representative of the important
audience and feedback from people who are "real users" should be
prioritized way higher than mine.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 06:25, KiYoun Sung  wrote:

> Hello,
> Magnum team.
>
> I Installed Openstack newton and magnum.
> I installed Magnum by source(master branch).
>
> I have two questions.
>
> 1.
> After installation,
> I created kubernetes cluster and it's CREATE_COMPLETE,
> and I want to create kubernetes pod.
>
> My create script is below.
> --
> apiVersion: v1
> kind: Pod
> metadata:
>   name: nginx
>   labels:
> app: nginx
> spec:
>   containers:
>   - name: nginx
> image: nginx
> ports:
> - containerPort: 80
> --
>
> I tried "kubectl create -f nginx.yaml"
> But, error has occured.
>
> Error message is below.
> error validating "pod-nginx-with-label.yaml": error validating data:
> unexpected type: object; if you choose to ignore these errors, turn
> validation off with --validate=false
>
> Why did this error occur?
>

This is not related to magnum, it is related to your client. From where do
you execute the kubectl create command? Your computer? Some VM with a
distributed file system?
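
(A quick way to narrow that down, assuming kubectl is on your PATH: compare
the client and server versions -- an old client talking to a newer API server
is the usual cause of that validation error.

  kubectl version
  kubectl get nodes
)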


>
> 2.
> I want to access this kubernetes cluster service (like nginx) on the
> OpenStack magnum environment from the outside world.
>
> I referred to this guide (https://docs.openstack.org/developer/magnum/dev/
> kubernetes-load-balancer.html#how-it-works), but it didn't work.
>
> Openstack: newton
> Magnum: 4.1.1 (master branch)
>
> What should I do?
> Must I install LBaaS v2?
>

You need LBaaS v2, preferably with Octavia. I'm not sure what the recommended
way to install it is.
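
(A quick check to see whether it is already there, from wherever your neutron
CLI is configured -- the LBaaS v2 extension shows up under the "lbaasv2"
alias:

  neutron ext-list | grep -i lbaas
)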


>
> Thank you.
> Best regards.
>

Cheers,
Spyros


>
>
>
>
>
>
>
>
>
>
>
>


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 13:58, Spyros Trigazis  wrote:

>
>
> On 17 May 2017 at 06:25, KiYoun Sung  wrote:
>
>> Hello,
>> Magnum team.
>>
>> I Installed Openstack newton and magnum.
>> I installed Magnum by source(master branch).
>>
>> I have two questions.
>>
>> 1.
>> After installation,
>> I created kubernetes cluster and it's CREATE_COMPLETE,
>> and I want to create kubernetes pod.
>>
>> My create script is below.
>> --
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: nginx
>>   labels:
>> app: nginx
>> spec:
>>   containers:
>>   - name: nginx
>> image: nginx
>> ports:
>> - containerPort: 80
>> --
>>
>> I tried "kubectl create -f nginx.yaml"
>> But, error has occured.
>>
>> Error message is below.
>> error validating "pod-nginx-with-label.yaml": error validating data:
>> unexpected type: object; if you choose to ignore these errors, turn
>> validation off with --validate=false
>>
>> Why did this error occur?
>>
>
> This is not related to magnum, it is related to your client. From where do
> you execute the
> kubectl create command? You computer? Some vm with a distributed file
> system?
>
>
>>
>> 2.
>> I want to access this kubernetes cluster service(like nginx) above the
>> Openstack magnum environment from outside world.
>>
>> I refer to this guide(https://docs.openstack.o
>> rg/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works), but
>> it didn't work.
>>
>> Openstack: newton
>> Magnum: 4.1.1 (master branch)
>>
>> How can I do?
>> Do I must install Lbaasv2?
>>
>
> You need lbaas V2 with octavia preferably. Not sure what is the
> recommended way to install.
>

Have a look here:
https://docs.openstack.org/draft/networking-guide/config-lbaas.html

Cheers,
Spyros


>
>
>>
>> Thank you.
>> Best regards.
>>
>
> Cheers,
> Spyros
>
>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>>
>>
>


Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-17 Thread Waines, Greg
Yes that’s correct.
VM Heartbeating / Health-check Monitoring would introduce intrusive / white-box 
type monitoring of VMs / Instances.

I realize this is somewhat in the gray-zone of what a cloud should be 
monitoring or not,
but I believe it provides an alternative for Applications deployed in VMs that 
do not have an external monitoring/management entity like a VNF Manager in the 
MANO architecture.
And even for VMs with VNF Managers, it provides a highly reliable alternate 
monitoring path that does not rely on Tenant Networking.

You’re correct, that VM HB/HC Monitoring would leverage
https://wiki.libvirt.org/page/Qemu_guest_agent
that would require the agent to be installed in the images for talking back to 
the compute host.
( there are other examples of similar approaches in openstack ... the 
murano-agent for installation, the swift-agent for object store management )
Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest Agent, 
the messaging path is internal thru a QEMU virtual serial device.  i.e. a very 
simple interface with very few dependencies ... it’s up and available very 
early in VM lifecycle and virtually always up.
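
(For reference, this is the same plumbing libvirt already exposes today: a
guest that has the qemu-guest-agent channel configured can be pinged from the
host -- the domain name below is illustrative:

  virsh qemu-agent-command instance-00000001 '{"execute": "guest-ping"}'
)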

Wrt failure modes / use-cases:

* a VM's response to a Heartbeat Challenge Request can be as simple as just
  ACK-ing; this alone allows for detection of:
    - a failed or hung QEMU/KVM instance, or
    - a failed or hung VM OS, or
    - a failure of the VM's OS to schedule the QEMU Guest Agent daemon, or
    - a failure of the VM to route basic IO via linux sockets.

* I have had feedback that this is similar to the virtual hardware watchdog
  of QEMU/KVM (https://libvirt.org/formatdomain.html#elementsWatchdog).

* However, VM Heartbeat / Health-check Monitoring
    - provides a higher-level (i.e. application-level) heartbeating,
      i.e. whether the Heartbeat requests are being answered by the
      Application running within the VM,
    - provides more than just heartbeating, as the Application can use it to
      trigger a variety of audits,
    - provides a mechanism for the Application within the VM to report a
      Health Status / Info back to the Host / Cloud,
    - provides notification of the Heartbeat / Health-check status to
      higher-level cloud entities thru Vitrage,
      e.g.  VM-Heartbeat-Monitor - to - Vitrage - (EventAlarm) - Aodh - ... - VNF-Manager
                                 - (StateChange) - Nova - ... - VNF Manager


Greg.


From: Adam Spiers 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Tuesday, May 16, 2017 at 7:29 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Waines, Greg <greg.wai...@windriver.com> wrote:
thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit into 
Masakari.

Currently your instance monitoring looks like it is strictly black-box type 
monitoring thru libvirt events.
Is that correct ?
i.e. you do not do any intrusive type monitoring of the instance thru the QEMU
Guest Agent facility ... correct?

That is correct:

https://github.com/openstack/masakari-monitors/blob/master/masakarimonitors/instancemonitor/instance.py

I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.

OK, so you are looking for something slightly different I guess, based
on this QEMU guest agent?

https://wiki.libvirt.org/page/Qemu_guest_agent

That would require the agent to be installed in the images, which is
extra work but I imagine quite easily justifiable in some scenarios.
What failure modes do you have in mind for covering with this
approach - things like the guest kernel freezing, for instance?




Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-17 Thread Sean Dague
On 05/16/2017 08:08 PM, Zane Bitter wrote:
> On 16/05/17 01:06, Colleen Murphy wrote:
>> Additionally, I think OAuth - either extending the existing OAuth1.0
>> plugin or implementing OAuth2.0 - should probably be on the table.
> 
> I believe that OAuth is not a good fit for long-lived things like an
> application needing to communicate with its own infrastructure. Tokens
> are (a) tied to a user, and (b) expire, neither of which we want. Any
> use case where you can't just drop the user into a web browser and ask
> for their password at any time seem to be, at a minimum, excruciatingly
> painful and often impossible with OAuth, because that is the use case it
> was designed for.

I think that's the key bit I was noodling over when OAuth was brought
up. OAuth is really about acting as the user that created it. But one of
the very common concerns is how "when marry in IT quits, and she did all
the automated systems, what happens?" It's actually a quite common issue
(related issue) in the small organization space that the Twitter, shared
gmail, etc was set up by one staff member, who then leaves. And then
scramble hoping that they left an email address around. Because these
entities we less concerned about long term maintenance than initial sign up.

These things have to live at a project level to handle people
disappearing, and their access being revoked. You might want to rotate
keys then anyway on these things, depending.

I think that's one of the things about why OpenStack is going to be
different here. The shared project construct and resources belonging to
projects not users. I honestly wish more internet services understood
that as a common pattern. If there is a way that naturally fits in OAuth
that I'm not aware of cool. But if not, I think it comes off the table
pretty fast.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-17 Thread Waines, Greg
Excellent.
Yeah I just watched your Boston Summit presentation and noticed, at least when 
you were talking about host-monitoring, you were looking at having alternative 
backends for reporting e.g. to masakari-api or to mistral or ... to Vitrage :)

Greg.

From: Adam Spiers 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Tuesday, May 16, 2017 at 7:42 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Waines, Greg <greg.wai...@windriver.com> wrote:
Sam,

Two other, higher-level points I wanted to discuss with you about Masakari.


First,
so I notice that you are doing both monitoring, auto-recovery and even host 
maintenance
type functionality as part of the Masakari architecture.

are you open to some configurability (enabling/disabling) of these capabilities 
?

I can't speak for Sampath or the Masakari developers, but the monitors
are standalone components.  Currently they can only send notifications
in a format which the masakari-api service can understand, but I guess
it wouldn't be hard to extend them to send notifications in other
formats if that made sense.

e.g. OPNFV guys would NOT want auto-recovery, they would prefer that fault 
events
  get reported to Vitrage ... and eventually filter up to 
Aodh Alarms that get
  received by VNFManagers which would be responsible for 
the recovery.

e.g. some deployers of openstack might want to disable parts or all of your 
monitoring,
 if using other mechanisms such as Zabbix or Nagios for the host 
monitoring (say)

Yes, exactly!  This kind of configurability and flexibility which
would allow each cloud architect to choose which monitoring / alerting
/ recovery components suit their requirements best in a "mix'n'match"
fashion, is exactly what we are aiming for with our modular approach
to the design of compute plane HA.  If the various monitoring
components adopt a driver-based approach to alerting and/or the
ability to alert via a lowest common denominator format such as simple
HTTP POST of JSON blobs, then it should be possible for each cloud
deployer to integrate the monitors with whichever reporting dashboards
/ recovery workflow controllers best satisfy their requirements.
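
(Illustration only -- the endpoint and payload below are made up, just to show
the kind of lowest-common-denominator notification meant here:

  curl -s -X POST http://alert-handler.example.com/v1/events \
      -H "Content-Type: application/json" \
      -d '{"type": "COMPUTE_HOST_DOWN", "hostname": "compute-0",
           "generated_time": "2017-05-17T08:30:00Z"}'
)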

Second, are you open to configurably having fault events reported to
Vitrage ?

Again I can't speak on behalf of the Masakari project, but this sounds
like a great idea to me :)




Re: [openstack-dev] [ironic] Ironic-UI review requirements - single core reviews

2017-05-17 Thread milanisko k
On Tue, 16 May 2017 at 18:03, Dmitry Tantsur wrote:

> On 05/15/2017 09:10 PM, Julia Kreger wrote:
> > All,
> >
> > In our new reality, in order to maximize velocity, I propose that we
> > loosen the review requirements for ironic-ui to allow faster
> > iteration. To this end, I suggest we move ironic-ui to using a single
> > core reviewer for code approval, along the same lines as Horizon[0].
>
> Ok, Horizon example makes me feel a bit better about this :)
>
>  +1 :D

> >
> > Our new reality is a fairly grim one, but there is always hope. We
> > have several distinct active core reviewers. The problem is available
> > time to review, and then getting any two reviewers to be on the same,
> > at the same time, with the same patch set. Reducing the requirements
> > will help us iterate faster and reduce the time a revision waits for
> > approval to land, which should ultimately help everyone contributing.
> >
> > If there are no objections from my fellow ironic folk, then I propose
> > we move to this for ironic-ui immediately.
>
> I'm fine with that. As I mentioned to you, it's clearly more important to
> be
> able to move forward than to make sure we never miss a sub-perfect patch.
> Especially for leaf projects like UI.
>

 +1 I'm sometimes wondering about inspector ;)


> >
> > Thanks,
> >
> > -Julia
> >
> > [0]:
> http://lists.openstack.org/pipermail/openstack-dev/2017-February/113029.html
> >
> >
> >
>
>
>


Re: [openstack-dev] [horizon][api][docs] Feedback requested on proposed formatting change to API docs

2017-05-17 Thread Sean Dague
On 05/17/2017 07:40 AM, Chris Dent wrote:
> On Tue, 16 May 2017, Monty Taylor wrote:
> 
>> The questions:
>>
>> - Does this help, hurt, no difference?
>> - servers[].name - servers is a list, containing objects with a name
>> field. Good or bad?
>> - servers[].addresses.$network-name - addresses is an object and the
>> keys of the object are the name of the network in question.
> 
> I sympathize with the motivation, but for me these don't help: they
> add noise (more symbols) and require me to understand yet more syntax.
> 
> This is probably because I tend to look at the representations of the
> request or response, see a key name and wonder "what is this?" and
> then look for it in the table, not the other way round. Thus I want
> the key name to be visually greppable without extra goo.
> 
> I suspect, however, that I'm not representative of the important
> audience and feedback from people who are "real users" should be
> prioritized way higher than mine.

The visually greppable part concerns me as well.

I wonder if a dedicated parent column would make sense? Like "Contained In"

| addresses | servers[] | ...
| addr  | servers[].addresses.$network-name | ...

Especially as these things get to
"servers[].addresses.$network-name[].OS-EXT-IPS-MAC:type" they are
wrapping in the name field, which makes them even harder to find.

Maybe figure out the right visual first, then work backwards on a
structure that can build it. But, like I said in IRC, I'm *so* steeped
in all of this, I definitely don't trust my judgement in what's good for
folks that don't have half the code in their head.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Security bug in diskimage-builder

2017-05-17 Thread George Shuklin
There is a bug in diskimage-builder that I reported on 2017-03-10 as 
'private security'. I think this bug is of medium severity.


So far there has been no reaction at all. I plan to change this bug to public 
security next Monday. If someone is interested in bumping up the CVE 
count for DIB, please look at 
https://bugs.launchpad.net/diskimage-builder/+bug/1671842 
(private-walled for the security group).





Re: [openstack-dev] [tc] [all] TC Report 20

2017-05-17 Thread Chris Dent

On Tue, 16 May 2017, Chris Dent wrote:


> Elsewhere I'll create a more complete write up of the Foundation board
> meeting that happened on the Sunday before summit, but some comments
> from that feel relevant to the purpose of these reports:


Here's the rough notes from the board meeting:

https://anticdent.org/openstack-pike-board-meeting-notes.html

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [tripleo][ci] Saving memory in OOOQ multinode jobs

2017-05-17 Thread Jiří Stránský

Hi all,

we can save some memory in our OOOQ multinode jobs by specifying custom 
role data -- getting rid of resources and YAQL crunching for roles that 
aren't used at all in the end. Shout out to shardy for suggesting we 
should look into doing this.


First observations, hopefully at least somewhat telling:

nonha-multinode-oooq job: comparing [1][2] the RSS memory usage by the 4 
heat-engine processes on undercloud drops from 803 MB to 690 MB.


containers-multinode-upgrades job: comparing [3][4] heat-engine memory 
usage drops from 1221 MB to 968 MB.


I expected some time savings as well but wasn't able to spot any, looks 
like concurrency works well in Heat :)
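
For anyone who wants to try the same trick in their own environments, the
shape of it is roughly the following (a minimal sketch only -- the role name
and service list are illustrative; see the actual patches below for the real
data):

    # custom_roles.yaml: keep only the roles (and services) the job actually uses
    - name: Controller
      CountDefault: 1
      ServicesDefault:
        - OS::TripleO::Services::Keystone
        - OS::TripleO::Services::Ntp
        # ... only what the scenario needs; unused roles are simply left out

    # and point the deployment at it, e.g.:
    #   openstack overcloud deploy --templates -r custom_roles.yaml ...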


Patches are here:
https://review.openstack.org/#/c/455730/
https://review.openstack.org/#/c/455719/


Have a good day,

Jirka


[1] 
http://logs.openstack.org/68/465468/1/check/gate-tripleo-ci-centos-7-nonha-multinode-oooq/9d354b5/logs/undercloud/var/log/host_info.txt.gz
[2] 
http://logs.openstack.org/30/455730/3/check/gate-tripleo-ci-centos-7-nonha-multinode-oooq/4e3bb4a/logs/undercloud/var/log/host_info.txt.gz
[3] 
http://logs.openstack.org/52/464652/5/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/a20f7dd/logs/undercloud/var/log/host_info.txt.gz
[4] 
http://logs.openstack.org/30/455730/3/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/f753cd9/logs/undercloud/var/log/host_info.txt.gz


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] pep8 failing

2017-05-17 Thread Bernard Cafarelli
Indeed, recreating the environment with "tox -r" should help here (to
get the new lib)
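
For context, N531 flags log messages that are not wrapped in a translation
marker; a minimal illustration of the two styles (the _i18n import below is
the usual per-project convention and is only an assumption here):

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    # Old style, which the N531 hacking check used to insist on:
    # from networking_sfc._i18n import _LI   # assumed per-project i18n module
    # LOG.info(_LI("Deleting a non-existing flow classifier."))

    # New style, now that log message translation has been dropped:
    LOG.info("Deleting a non-existing flow classifier.")

My understanding is that the check itself ships with neutron-lib, so a stale
tox virtualenv keeps enforcing the old rule -- hence the "tox -r" advice.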

On 16 May 2017 at 20:28, Ihar Hrachyshka  wrote:
> Make sure you have the latest neutron-lib in your tree: neutron-lib==1.6.0
>
> On Tue, May 16, 2017 at 3:05 AM, Vikash Kumar
>  wrote:
>> Hi Team,
>>
>>   pep8 is failing on master. Translation hint helpers have been removed from
>> LOG messages. Was this done on purpose? Let me know if it is not, and I will
>> change it.
>>
>> ./networking_sfc/db/flowclassifier_db.py:342:13: N531  Log messages require
>> translation hints!
>> LOG.info("Deleting a non-existing flow classifier.")
>> ^
>> ./networking_sfc/db/sfc_db.py:383:13: N531  Log messages require translation
>> hints!
>> LOG.info("Deleting a non-existing port chain.")
>> ^
>> ./networking_sfc/db/sfc_db.py:526:13: N531  Log messages require translation
>> hints!
>> LOG.info("Deleting a non-existing port pair.")
>> ^
>> ./networking_sfc/db/sfc_db.py:658:13: N531  Log messages require translation
>> hints!
>> LOG.info("Deleting a non-existing port pair group.")
>> ^
>> ./networking_sfc/services/flowclassifier/driver_manager.py:38:9: N531  Log
>> messages require translation hints!
>> LOG.info("Configured Flow Classifier drivers: %s", names)
>> ^
>> ./networking_sfc/services/flowclassifier/driver_manager.py:44:9: N531  Log
>> messages require translation hints!
>> LOG.info("Loaded Flow Classifier drivers: %s",
>> ^
>> ./networking_sfc/services/flowclassifier/driver_manager.py:80:9: N531  Log
>> messages require translation hints!
>> LOG.info("Registered Flow Classifier drivers: %s",
>> ^
>> ./networking_sfc/services/flowclassifier/driver_manager.py:87:13: N531  Log
>> messages require translation hints!
>> LOG.info("Initializing Flow Classifier driver '%s'",
>> ^
>> ./networking_sfc/services/flowclassifier/driver_manager.py:107:17: N531  Log
>> messages require translation hints!
>> LOG.error(
>> ^
>> ./networking_sfc/services/flowclassifier/plugin.py:63:17: N531  Log messages
>> require translation hints!
>> LOG.error("Create flow classifier failed, "
>> ^
>> ./networking_sfc/services/flowclassifier/plugin.py:87:17: N531  Log messages
>> require translation hints!
>> LOG.error("Update flow classifier failed, "
>> ^
>> ./networking_sfc/services/flowclassifier/plugin.py:102:17: N531  Log
>> messages require translation hints!
>> LOG.error("Delete flow classifier failed, "
>> ^
>> ./networking_sfc/services/sfc/driver_manager.py:38:9: N531  Log messages
>> require translation hints!
>> LOG.info("Configured SFC drivers: %s", names)
>> ^
>> ./networking_sfc/services/sfc/driver_manager.py:43:9: N531  Log messages
>> require translation hints!
>> LOG.info("Loaded SFC drivers: %s", self.names())
>> ^
>> ./networking_sfc/services/sfc/driver_manager.py:78:9: N531  Log messages
>> require translation hints!
>> LOG.info("Registered SFC drivers: %s",
>> ^
>> ./networking_sfc/services/sfc/driver_manager.py:85:13: N531  Log messages
>> require translation hints!
>> LOG.info("Initializing SFC driver '%s'", driver.name)
>> ^
>> ./networking_sfc/services/sfc/driver_manager.py:104:17: N531  Log messages
>> require translation hints!
>> LOG.error(
>> ^
>> ./networking_sfc/services/sfc/plugin.py:57:17: N531  Log messages require
>> translation hints!
>> LOG.error("Create port chain failed, "
>> ^
>> ./networking_sfc/services/sfc/plugin.py:82:17: N531  Log messages require
>> translation hints!
>> LOG.error("Update port chain failed, port_chain '%s'",
>> ^
>> ./networking_sfc/services/sfc/plugin.py:97:17: N531  Log messages require
>> translation hints!
>> LOG.error("Delete port chain failed, portchain '%s'",
>> ^
>> ./networking_sfc/services/sfc/plugin.py:122:17: N531  Log messages require
>> translation hints!
>> LOG.error("Create port pair failed, "
>> ^
>> ./networking_sfc/services/sfc/plugin.py:144:17: N531  Log messages require
>> translation hints!
>> LOG.error("Update port pair failed, port_pair '%s'",
>> ^
>> ./networking_sfc/services/sfc/plugin.py:159:17: N531  Log messages require
>> translation hints!
>> LOG.error("Delete port pair failed, port_pair '%s'",
>> ^
>> ./networking_sfc/services/sfc/plugin.py:185:17: N531  Log messages require
>> translation hints!
>> LOG.error("Create port pair group failed, "
>> ^
>> ./networking_sfc/services/sfc/plugin.py:213:17: N531  Log messages require
>> translation hints!
>>

[openstack-dev] [nova] Multiple default routes in instance

2017-05-17 Thread Mathieu Gagné
Hi,

When you attach multiple networks/interfaces to an instance and at
least 2 subnets have a default route, you end up with 2 default routes
in network_data.json.

If cloud-init or any other in-guest agent is used to configure the
network, you could end up with 2 default routes and, to an extent, with
non-working networking (as we found out recently).
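
To make it concrete, a rough sketch of what an in-guest agent sees (following
the usual network_data.json layout; the values and file path are made up) and
how it could at least detect the clash:

    import json

    # network_data.json as exposed on the config drive / metadata service
    with open("network_data.json") as f:
        network_data = json.load(f)

    # Collect every default (0.0.0.0/0) route advertised by any network
    default_routes = [
        (net.get("id"), route.get("gateway"))
        for net in network_data.get("networks", [])
        for route in net.get("routes", [])
        if route.get("network") == "0.0.0.0" and route.get("netmask") == "0.0.0.0"
    ]

    if len(default_routes) > 1:
        # This is the ambiguous case: which gateway should win?
        print("multiple default routes found: %s" % (default_routes,))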

How should we handle this situation where you could end up with 2
subnets, each having a default route?

* Should Nova decide which default route to expose to the instance?
* Should the in-guest agent pick one?

I'm looking for opinions and advice on how to fix this issue.

Thanks

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Multiple default routes in instance

2017-05-17 Thread Mathieu Gagné
On Wed, May 17, 2017 at 10:03 AM, Mathieu Gagné  wrote:
> Hi,
>
> When you attach multiple networks/interfaces to an instance and at
> least 2 subnets have a default route, you end up with 2 default routes
> in network_data.json.
>
> If cloud-init or any other in-guest agent is used to configure the
> network, you could end up with 2 default routes and, to an extent, with
> non-working networking (as we found out recently).
>
> How should we handle this situation where you could end up with 2
> subnets, each having a default route?
>
> * Should Nova decide which default route to expose to the instance?
> * Should the in-guest agent pick one?
>
> I'm looking for opinions and advice on how to fix this issue.
>

I would also like to draw attention to a spec I wrote which faces the same
questions and challenges related to multiple fixed-ips per port support:
https://review.openstack.org/#/c/312626/7

Quoting the commit message here:

> Each subnet can have its own routes and default route, which creates
> challenges:
> * Should all routes be merged in the same "routes" attribute?
> * What should be the default route if more than one subnet provides a
>   default route? Should we provide all of them?
> * What should be the implementation logic an in-guest agent uses to
>   determine the default route?
> * Are there use cases where one default route should be used over the
>   others? I'm thinking about NFV or other specialized use cases.

--
Mathieu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-17 Thread Adam Spiers

Yep :-)  That's pretty much exactly what I was suggesting elsewhere in
this thread:

http://lists.openstack.org/pipermail/openstack-dev/2017-May/116748.html

Waines, Greg  wrote:

Excellent.
Yeah I just watched your Boston Summit presentation and noticed, at least when 
you were talking about host-monitoring, you were looking at having alternative 
backends for reporting e.g. to masakari-api or to mistral or ... to Vitrage :)

Greg.

From: Adam Spiers 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Tuesday, May 16, 2017 at 7:42 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Waines, Greg mailto:greg.wai...@windriver.com>> 
wrote:
Sam,

Two other, more high-level points I wanted to discuss with you about Masakari.


First,
I notice that you are doing monitoring, auto-recovery and even
host-maintenance type functionality as part of the Masakari architecture.

Are you open to some configurability (enabling/disabling) of these
capabilities?

I can't speak for Sampath or the Masakari developers, but the monitors
are standalone components.  Currently they can only send notifications
in a format which the masakari-api service can understand, but I guess
it wouldn't be hard to extend them to send notifications in other
formats if that made sense.

e.g. OPNFV guys would NOT want auto-recovery, they would prefer that fault
events get reported to Vitrage ... and eventually filter up to Aodh Alarms
that get received by VNFManagers which would be responsible for the recovery.

e.g. some deployers of openstack might want to disable parts or all of your
monitoring, if using other mechanisms such as Zabbix or Nagios for the host
monitoring (say)

Yes, exactly!  This kind of configurability and flexibility which
would allow each cloud architect to choose which monitoring / alerting
/ recovery components suit their requirements best in a "mix'n'match"
fashion, is exactly what we are aiming for with our modular approach
to the design of compute plane HA.  If the various monitoring
components adopt a driver-based approach to alerting and/or the
ability to alert via a lowest common denominator format such as simple
HTTP POST of JSON blobs, then it should be possible for each cloud
deployer to integrate the monitors with whichever reporting dashboards
/ recovery workflow controllers best satisfy their requirements.

Second, are you open to configurably having fault events reported to
Vitrage ?

Again I can't speak on behalf of the Masakari project, but this sounds
like a great idea to me :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-17 Thread Adam Spiers

Thanks for the clarification Greg.  This sounds like it has the
potential to be a very useful capability.  May I suggest that you
propose a new user story for it, along similar lines to this existing
one?

http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html

Waines, Greg  wrote:

Yes that’s correct.
VM Heartbeating / Health-check Monitoring would introduce intrusive / white-box 
type monitoring of VMs / Instances.

I realize this is somewhat in the gray-zone of what a cloud should be 
monitoring or not,
but I believe it provides an alternative for Applications deployed in VMs that 
do not have an external monitoring/management entity like a VNF Manager in the 
MANO architecture.
And even for VMs with VNF Managers, it provides a highly reliable alternate 
monitoring path that does not rely on Tenant Networking.

You’re correct, that VM HB/HC Monitoring would leverage
https://wiki.libvirt.org/page/Qemu_guest_agent
that would require the agent to be installed in the images for talking back to 
the compute host.
( there are other examples of similar approaches in openstack ... the 
murano-agent for installation, the swift-agent for object store management )
Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest Agent, 
the messaging path is internal thru a QEMU virtual serial device.  i.e. a very 
simple interface with very few dependencies ... it’s up and available very 
early in VM lifecycle and virtually always up.
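
As a rough illustration of that channel, the agent can already be poked from
the compute host with libvirt's existing tooling (assuming qemu-guest-agent is
installed and running inside the image):

    # on the compute host; <domain> is the libvirt name of the instance
    virsh qemu-agent-command <domain> '{"execute":"guest-ping"}'
    # returns {"return":{}} when the agent inside the guest is alive and responding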

Wrt failure modes / use-cases:

* a VM's response to a Heartbeat Challenge Request can be as simple as
  just ACK-ing; this alone allows for detection of:
  - a failed or hung QEMU/KVM instance, or
  - a failed or hung VM's OS, or
  - a failure of the VM's OS to schedule the QEMU Guest Agent daemon, or
  - a failure of the VM to route basic IO via linux sockets.

* I have had feedback that this is similar to the virtual hardware
  watchdog of QEMU/KVM ( https://libvirt.org/formatdomain.html#elementsWatchdog )

* However, the VM Heartbeat / Health-check Monitoring
  - provides a higher-level (i.e. application-level) heartbeating,
    i.e. if the Heartbeat requests are being answered by the Application
    running within the VM,
  - provides more than just heartbeating, as the Application can use it to
    trigger a variety of audits,
  - provides a mechanism for the Application within the VM to report a Health
    Status / Info back to the Host / Cloud,
  - provides notification of the Heartbeat / Health-check status to
    higher-level cloud entities thru Vitrage,
    e.g. VM-Heartbeat-Monitor -> Vitrage -> (EventAlarm)  -> Aodh -> ... -> VNF-Manager
                                         -> (StateChange) -> Nova -> ... -> VNF Manager


Greg.


From: Adam Spiers 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Tuesday, May 16, 2017 at 7:29 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Waines, Greg mailto:greg.wai...@windriver.com>> 
wrote:
thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit into 
Masakari.

Currently your instance monitoring looks like it is strictly black-box type 
monitoring thru libvirt events.
Is that correct ?
i.e. you do not do any intrusive type monitoring of the instance thru the QEMU 
Guest Agent facility
  correct ?

That is correct:

https://github.com/openstack/masakari-monitors/blob/master/masakarimonitors/instancemonitor/instance.py

I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.

OK, so you are looking for something slightly different I guess, based
on this QEMU guest agent?

   https://wiki.libvirt.org/page/Qemu_guest_agent

That would require the agent to be installed in the images, which is
extra work but I imagine quite easily justifiable in some scenarios.
What failure modes do you have in mind for covering with this
approach - things like the guest kernel freezing, for instance?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/list

[openstack-dev] [forum] Future of Stackalytics

2017-05-17 Thread Thierry Carrez
Hi!

Following the forum, the session moderators should summarize how their
session went and detail next steps, to give an opportunity to those who
couldn't be around in person to catch up if interested. Here is my quick
summary of the "Should we kill Stackalytics" session. You can find the
etherpad at [1].

After a quick intro on the history of Stackalytics, we spent some time
discussing the issues with the current situation. Beyond limited
maintenance resources, some accuracy issues and partial infra
transition, it became clear in the session that the major driver for a
change is that Stackalytics incentivizes the wrong behavior(s),
especially in new contributors and organizations in their first step of
involvement. As the only "official" and visible way to measure
contribution, it encourages dumping useless patches and reviews, and
does not value strategic contributions over more tactical contributions.
Beyond wasted resources and being annoying to core reviewers, it fails
to drive those new contributors to what could be extremely useful
contributions, and prevents them from stepping up in the community.

At the same time, several people in the room raised that they would
rather not support solutions that would just "kill Stackalytics". Having
access to raw metrics on project contribution is useful, for various
profiles. It's when you start adding apples to oranges, or deriving
rankings from those compound metrics that things start to go very wrong.
If Stackalytics was just removed, no doubt it would soon be reborn in
other forms elsewhere. Also removing it while not providing anything to
replace it is totally useless.

We need to first provide a clear incentive to work on desirable items
and under-staffed critical teams. The TC is exploring how we could
produce such a "help wanted" list (action driven by myself), together
with how to give proper recognition to the individuals working on that
and/or the organizations funding their work. Once that is done, we'll
likely explore how to remove most of the misleading rankings and graphs
from Stackalytics to focus it on raw metric information. Before we can
do that, we need to complete transition of Stackalytics to OpenStack
infrastructure. Once that work is completed it will be time to
reconsider our options and have a follow-up discussion session.

In summary, the following follow-on work items were identified:

1. Set up the "help wanted" list at TC level and associated hall of fame
(action: ttx and TC members)
2. Complete Stackalytics migration to infra (action: infra team, mrmartin)
3. Explore which are the most misleading information/graphs from
stackalytics that we might want to remove
4. Reconsider the issue once that work is completed

If interested, please jump in! In particular we need help with
completing the migration to infra. If you are, you can reach out to
fungi (Infra team PTL) or mrmartin (who currently helps with the
transition work).

[1] https://etherpad.openstack.org/p/BOS-forum-should-we-kill-stackalytics

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 04:14, Chris Dent  wrote:
> On Wed, 17 May 2017, Thierry Carrez wrote:
>
>> Back to container image world, if we refresh those images daily and they
>> are not versioned or archived (basically you can only use the latest and
>> can't really access past dailies), I think we'd be in a similar situation
>> ?
>
>
> Yes, this.

I think it's not a bad idea to message "you are responsible for
archiving your containers". Do that, combine it with a good toolset that
helps users determine versions of packages and other metadata, and
we'll end up with something that would itself be greatly appreciated.

Few potential user stories.

I have an OpenStack of <100 nodes and need every single one of them, hence
no CI. At the same time I want to have fresh packages to avoid CVEs. I
deploy Kolla with the tip of the stable branch and set up a cronjob that will
upgrade it every week. Because my scenario is quite typical and the
containers already ran through gates that test my scenario, I'm good.

Another one:

I have a 300+ node cloud, heavy CI and a security team examining every
container. While I could build containers locally, downloading them is
just simpler and effectively the same (after all, it's the containers
being tested, not the build process). On every download our security team
scrutinizes the containers and uses the toolset Kolla provides to help them.
An additional benefit is that on top of our CI these images went through
Kolla CI, which is nice; more testing is always good.

And another one

We are the Kolla community. We want to provide testing for full release
upgrades every day in the gates, to make sure OpenStack and Kolla are
upgradable and to improve the general user experience of upgrades. Because
infra is resource constrained, we cannot afford building 2 sets of
containers (stable and master) and doing deploy->test->upgrade->test.
However, because we have these cached containers, which are fresh and
passed CI for deploy, we can just use them! Now effectively we're not
only testing the correctness of Kolla's upgrade procedure but also all the
other project teams' upgrades! Oh, it seems Nova merged something that
negatively affects upgrades, let's make sure they are aware!

And last one, which cannot be underestimated

I am the CTO of some company and I've heard OpenStack is no longer hard to
deploy, so I'll just download kolla-ansible and try it. I'll follow this
guide that deploys a simple OpenStack with 2 commands and a few small
configs, and it's done! Super simple! We're moving to OpenStack and
will start contributing tomorrow!

Please, let's solve the messaging problems, put the burden of archiving on
users, whatever it takes to protect our community from wrong
expectations, but let's not kill this effort. There are very real and
immediate benefits to OpenStack as a whole if we do this.

Cheers,
Michal

> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-17 Thread Waines, Greg
Sure.  I can propose a new user story.

And then are you thinking of including this user story in the scope of what 
masakari would be looking at ?

Greg.


From: Adam Spiers 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Wednesday, May 17, 2017 at 10:08 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Thanks for the clarification Greg.  This sounds like it has the
potential to be a very useful capability.  May I suggest that you
propose a new user story for it, along similar lines to this existing
one?

http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html

Waines, Greg mailto:greg.wai...@windriver.com>> 
wrote:
Yes that’s correct.
VM Heartbeating / Health-check Monitoring would introduce intrusive / white-box 
type monitoring of VMs / Instances.

I realize this is somewhat in the gray-zone of what a cloud should be 
monitoring or not,
but I believe it provides an alternative for Applications deployed in VMs that 
do not have an external monitoring/management entity like a VNF Manager in the 
MANO architecture.
And even for VMs with VNF Managers, it provides a highly reliable alternate 
monitoring path that does not rely on Tenant Networking.

You’re correct, that VM HB/HC Monitoring would leverage
https://wiki.libvirt.org/page/Qemu_guest_agent
that would require the agent to be installed in the images for talking back to 
the compute host.
( there are other examples of similar approaches in openstack ... the 
murano-agent for installation, the swift-agent for object store management )
Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest Agent, 
the messaging path is internal thru a QEMU virtual serial device.  i.e. a very 
simple interface with very few dependencies ... it’s up and available very 
early in VM lifecycle and virtually always up.

Wrt failure modes / use-cases:

* a VM's response to a Heartbeat Challenge Request can be as simple as
  just ACK-ing; this alone allows for detection of:
  - a failed or hung QEMU/KVM instance, or
  - a failed or hung VM's OS, or
  - a failure of the VM's OS to schedule the QEMU Guest Agent daemon, or
  - a failure of the VM to route basic IO via linux sockets.

* I have had feedback that this is similar to the virtual hardware
  watchdog of QEMU/KVM ( https://libvirt.org/formatdomain.html#elementsWatchdog )

* However, the VM Heartbeat / Health-check Monitoring
  - provides a higher-level (i.e. application-level) heartbeating,
    i.e. if the Heartbeat requests are being answered by the Application
    running within the VM,
  - provides more than just heartbeating, as the Application can use it to
    trigger a variety of audits,
  - provides a mechanism for the Application within the VM to report a Health
    Status / Info back to the Host / Cloud,
  - provides notification of the Heartbeat / Health-check status to
    higher-level cloud entities thru Vitrage,
    e.g. VM-Heartbeat-Monitor -> Vitrage -> (EventAlarm)  -> Aodh -> ... -> VNF-Manager
                                         -> (StateChange) -> Nova -> ... -> VNF Manager


Greg.


From: Adam Spiers mailto:aspi...@suse.com>>
Reply-To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, May 16, 2017 at 7:29 PM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Waines, Greg 
mailto:greg.wai...@windriver.com>>
 wrote:
thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit into 
Masakari.

Currently your instance monitoring looks like it is strictly black-box type 
monitoring thru libvirt events.
Is that correct ?
i.e. you do not do any intrusive type monitoring of the instance thru the QEMU 
Guest Agent facility
   correct ?

That is correct:

https://github.com/openstack/masakari-monitors/blob/master/masakarimonitors/instancemonitor/instance.py

I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.

OK, so you are looking for something slightly different I guess, based
on this QEMU guest agent?

https://wiki.libvirt.org/page/Qemu_guest_agent

That would require the agent to be installed in the images, which is
extra work but I imagine quite easily justifiable in some scenarios.
What failure modes do you have in mind for covering with this
approach - things like the guest kernel freezing, for instance?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ

Re: [openstack-dev] [horizon][api][docs] Feedback requested on proposed formatting change to API docs

2017-05-17 Thread Joe Topjian
On Tue, May 16, 2017 at 4:13 PM, Monty Taylor  wrote:

> Hey all!
>
> I read the API docs A LOT. (thank you to all of you who have worked on
> writing them)
>
> As I do, a gotcha I hit up against a non-zero amount is mapping the
> descriptions of the response parameters to the form of the response itself.
> Most of the time there is a top level parameter under which either an
> object or a list resides, but the description lists list the top level and
> the sub-parameters as siblings.
>
> So I wrote a patch to os-api-ref taking a stab at providing a way to show
> things a little differently:
>
> https://review.openstack.org/#/c/464255/
>
> You can see the output here:
>
> http://docs-draft.openstack.org/55/464255/5/check/gate-nova-
> api-ref-src/f02b170//api-ref/build/html/
>
> If you go expand either the GET / or the GET /servers/details sections and
> go to look at their Response sections, you can see it in action.
>
> We'd like some feedback on impact from humans who read the API docs
> decently regularly...
>
> The questions:
>
> - Does this help, hurt, no difference?
>

It helps. It seems noisy at first glance, but the information being denoted
is important. It's one of those situations where once you start reading
deeper into the information, this kind of markup makes the API more
understandable more quickly.


> - servers[].name - servers is a list, containing objects with a name
> field. Good or bad?
> - servers[].addresses.$network-name - addresses is an object and the keys
> of the object are the name of the network in question.
>

Again, these seem noisy at first, but having parsed complex paths,
especially the above address info, by dumping variables too many times, I
really appreciate the above syntax.

Going even further:

servers[].addresses.$network-name[].OS-EXT-IPS-MAC:mac_addr

looks a mess, but I can see exactly how to navigate to the leaf as well as
understand what types make up the path. Being able to succinctly
(relatively/subjectively speaking) describe something like the above is
very helpful. This definitely gets my support.

Thanks,
Joe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Steven Dake (stdake)
Michael,

There has been a lot of cost and risk analysis in this thread.

What hasn’t really been discussed at great detail is the “benefit analysis” 
which you have started.  I think we are all clear on the risks and the costs.

If we as a technical community are going to draw a line in the sand and state 
“thou shalt not ship containers to dockerhub” because of the risks inherent in 
such behavior, we are not integrating properly with the emerging container 
ecosystem.  Expecting operators to build their own images is a viable path 
forward.  Unfortunately, the lack of automation introduces significant 
cognitive load for many (based upon the Q&A in the #openstack-kolla channel on 
a daily basis).  This cognitive load could be (incorrectly) perceived by many 
to be “OpenStack doesn’t care about integrating with adjacent communities.”

On balance, the benefits to OpenStack of your proposal outweigh the costs.

Regards
-steve


-Original Message-
From: Michał Jastrzębski 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, May 17, 2017 at 7:47 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 17 May 2017 at 04:14, Chris Dent  wrote:
> On Wed, 17 May 2017, Thierry Carrez wrote:
>
>> Back to container image world, if we refresh those images daily and they
>> are not versioned or archived (basically you can only use the latest and
>> can't really access past dailies), I think we'd be in a similar situation
>> ?
>
>
> Yes, this.

I think it's not a bad idea to message "you are responsible for
archving your containers". Do that, combine it with good toolset that
helps users determine versions of packages and other metadata and
we'll end up with something that itself would be greatly appreciated.

Few potential user stories.

I have OpenStack <100 nodes and need every single one of them, hence
no CI. At the same time I want to have fresh packages to avoid CVEs. I
deploy kolla with tip-of-the-stable-branch and setup cronjob that will
upgrade it every week. Because my scenerio is quite typical and
containers already ran through gates that tests my scenerio, I'm good.

Another one:

I have 300+ node cloud, heavy CI and security team examining every
container. While I could build containers locally, downloading them is
just simpler and effectively the same (after all, it's containers
being tested not build process). Every download our security team
scrutinize contaniers and uses toolset Kolla provides to help them.
Additional benefit is that on top of our CI these images went through
Kolla CI which is nice, more testing is always good.

And another one

We are Kolla community. We want to provide testing for full release
upgrades every day in gates, to make sure OpenStack and Kolla is
upgradable and improve general user experience of upgrades. Because
infra is resource constrained, we cannot afford building 2 sets of
containers (stable and master) and doing deploy->test->upgrade->test.
However because we have these cached containers, that are fresh and
passed CI for deploy, we can just use them! Now effectively we're not
only testing Kolla's correctness of upgrade procedure but also all the
other project team upgrades! Oh, it seems Nova merged something that
negatively affects upgrades, let's make sure they are aware!

And last one, which cannot be underestimated

I am CTO of some company and I've heard OpenStack is no longer hard to
deploy, I'll just download kolla-ansible and try. I'll follow this
guide that deploys simple OpenStack with 2 commands and few small
configs, and it's done! Super simple! We're moving to OpenStack and
start contributing tomorrow!

Please, let's solve messaging problems, put burden of archiving on
users, whatever it takes to protect our community from wrong
expectations, but not kill this effort. There are very real and
immediate benefits to OpenStack as a whole if we do this.

Cheers,
Michal

> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
U

Re: [openstack-dev] [MassivelyDistributed] Fog / Edge / Massively Distributed Cloud (FEMDC)

2017-05-17 Thread lebre . adrien

Dear all, Dear Edgar, 

Unless I am mistaken, I haven't seen any comments regarding my email. 
I would like to know whether there should be some follow-up actions/messages, or 
whether the FEMDC Working Group is now well established in the OpenStack landscape. 

Thanks, 
Ad_rien_

- Mail original -
> De: "lebre adrien" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: "Shilla Saebi" , "OpenStack Operators" 
> ,
> openst...@lists.openstack.org
> Envoyé: Lundi 8 Mai 2017 20:16:57
> Objet: Re: [openstack-dev] [MassivelyDistributed] Fog / Edge / Massively 
> Distributed Cloud Sessions during the summit
> 
> Dear Edgar,
> 
> As indicated into the WG chairs' session pad [1], the WG was
> previously entitled ``Massively Distributed Cloud''.
> The description appears on the WG wiki page [2] (and I sent an email
> to the user ML a few months ago to ask for the official creation
> [3]).
> 
> After exchanging with the OpenStack foundation folks recently, they
> suggested us to rename the Massively Distributed Clouds WG as the
> Fog/Edge/Massively Distributed Clouds WG.
> The old wiki page [4] refers the new one [5].
> 
> Please let me know whether I miss something (I can update [2] to
> reflect the new name)
> ad_rien_
> 
> [1]
> https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews
> [2]
> https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups_and_Teams
> [3]
> http://lists.openstack.org/pipermail/user-committee/2016-September/001232.html
> [4] https://wiki.openstack.org/wiki/Massively_Distributed_Clouds
> [5]
> https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
> 
> 
> - Mail original -
> > De: "Edgar Magana" 
> > À: "OpenStack Development Mailing List (not for usage questions)"
> > , "OpenStack
> > Operators" ,
> > openst...@lists.openstack.org
> > Cc: "Shilla Saebi" 
> > Envoyé: Lundi 8 Mai 2017 18:25:12
> > Objet: Re: [openstack-dev] [MassivelyDistributed] Fog / Edge /
> > Massively Distributed Cloud Sessions during the summit
> > 
> > Hello,
> > 
> > In behalf of the User Community I would like to understand if this
> > is
> > considering officially to request to create the Working Group. I
> > could have missed another email request the inclusion but if not, I
> > would like to discuss the goals and objective from the WG. The User
> > Committee will be glad to helping you out if anything needed.
> > 
> > I am cc the rest of the UC members.
> > 
> > Thanks,
> > 
> > Edgar
> > 
> > On 5/5/17, 1:16 PM, "lebre.adr...@free.fr" 
> > wrote:
> > 
> > Dear all,
> > 
> > A brief email to inform you about our schedule next week in
> > Boston.
> > 
> > In addition to interesting presentations that will deal with
> > Fog/Edge/Massively Distributed Clouds challenges [1], I would
> > like to highlight two important sessions:
> > 
> > * A new Birds of a Feather session ``OpenStack on the Edge'' is
> > now scheduled on Tuesday afternoon [2].
> > This will be the primary call to action covered by Jonathan
> > Bryce
> > during Monday's keynote about Edge Computing.
> > After introducing the goal of the WG, I will give the floor to
> > participants to share their use-case (3/4 min for each
> > presentation)
> > The Foundation has personally invited four large users that are
> > planning for fog/edge computing.
> > This will guide the WG for the future and hopefully get more
> > contributors involved.
> > Moreover, many of the Foundation staff already planned to
> > attend
> > and talk about the in-planning-phase OpenDev event and get
> > input.
> > The etherpad for this session is available at [3].
> > 
> > * Our regular face-to-face meeting for current and new members
> > to
> > discuss next cycle plans is still scheduled on Wednesday
> > afternoon [5]
> > The etherpad for this session is available at [5]
> > 
> > I encourage all of you to attend both sessions.
> > See you in Boston and have a safe trip
> > ad_rien_
> > 
> > [1]
> > 
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.openstack.org_summit_boston-2D2017_summit-2Dschedule_global-2Dsearch-3Ft-3Dedge&d=DwIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=OHBOWu_IZimaHf_g66AuXvYldFF594SpFk35-sbkZ6g&s=qez9zHbrE7PfeS8vjgS2nql6xeYGg5heiAnBNMCy5os&e=
> > [2]
> > 
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.openstack.org_summit_boston-2D2017_summit-2Dschedule_events_18988_openstack-2Don-2Dthe-2Dedge-2Dfogedgemassively-2Ddistributed-2Dclouds-2Dbirds-2Dof-2Da-2Dfeather&d=DwIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=OHBOWu_IZimaHf_g66AuXvYldFF594SpFk35-sbkZ6g&s=fnH2yZ09stEM5L-sOWuq79x_fySLwPOZu308ue7TmCU&e=
> > [3]
> > 
> > https://urldefense.proofpoint.com/v2/url?u=htt

[openstack-dev] [release] Proposal to change the timing of Feature Freeze

2017-05-17 Thread Chris Jones
Hey folks

I have a fairly simple proposal to make - I'd like to suggest that Feature
Freeze move to being much earlier in the release cycle (no earlier than M.1
and no later than M.2 would be my preference).

In the current arrangement (looking specifically at Pike), FF is scheduled
to happen 5 weeks before the upstream release. This means that of a 26 week
release cycle, 21 weeks are available for making large changes, and only 5
weeks are available for stabilising the release after the feature work has
landed (possibly less if FF exceptions are granted).

In my experience, significant issues are generally still being found after
the upstream release, by which point fixing them is much harder - the
patches need to land twice (master and stable/foo) and master may already
have diverged.

If the current model were inverted, and ~6 weeks of each release were
available for landing features, there would be ~20 weeks available for
upstream and downstream folk to do their testing/stabilising work. The
upstream release ought to have a higher quality, and downstream releases
would be more likely to be able to happen at the same time.

Obviously not all developers would be working on the stabilisation work for
those ~20 weeks, many would move on to working on features for the
following release, which would then be ready to land in the much shorter
period.

This might slow the feature velocity of projects, and maybe ~6 weeks is too
aggressive, but I feel that the balance right now is weighted strongly
against timely, stable releasing of OpenStack, particularly for downstream
consumers :)

Rather than getting hung up on the specific numbers of weeks, perhaps it
would be helpful to start with opinions on whether or not there is enough
stabilisation time in the current release schedules.

-- 
Cheers,

Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security bug in diskimage-builder

2017-05-17 Thread Jeremy Stanley
On 2017-05-17 15:57:16 +0300 (+0300), George Shuklin wrote:
> There is a bug in diskimage-builder I reported it at 2017-03-10 as 'private
> security'. I think this bug is a medium severity.
> 
> So far there was no reaction at all. I plan to change this bug to public
> security on next Monday. If someone is interested in bumping up CVE count
> for DIB, please look at
> https://bugs.launchpad.net/diskimage-builder/+bug/1671842 (private-walled
> for security group).

Thanks for the heads up! One thing we missed in the migration of DIB
from TripleO to Infra team governance is that the bug tracker for it
was still under TripleO team control (I just now leveraged my
OpenStack Administrator membership on LP to fix that), so the bug
was only visible to https://launchpad.net/~tripleo until moments
ago.

That said, a "private" bug report visible to the 86 people who are
members of that LP team doesn't really qualify as private in my book
so there's probably no additional harm in just switching it to
public security while I work on triaging it with the DIB devs.
Going forward, private security bugs filed for DIB are only visible
to the 18 people who make up the diskimage-builder-core and
openstack-ci-core teams on LP, which is still more than it probably
should be but it's a start at least.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-17 12:14:40 +0100:
> On Wed, 17 May 2017, Thierry Carrez wrote:
> 
> > Back to container image world, if we refresh those images daily and they
> > are not versioned or archived (basically you can only use the latest and
> > can't really access past dailies), I think we'd be in a similar situation ?
> 
> Yes, this.
> 

Is that how container publishing works? Can we overwrite an existing
archive, so that there is only ever 1 version of a published container
at any given time?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-17 12:19:22 +0200:
> Sean Dague wrote:
> > On 05/16/2017 02:39 PM, Doug Hellmann wrote:
> >> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
> >>> One thing I struggle with is...well...how does *not having* built
> >>> containers help with that? If your company have full time security
> >>> team, they can check our containers prior to deployment. If your
> >>> company doesn't, then building locally will be subject to same risks
> >>> as downloading from dockerhub. Difference is, dockerhub containers
> >>> were tested in our CI to extend that our CI allows. No matter whether
> >>> or not you have your own security team, local CI, staging env, that
> >>> will be just a little bit of testing on top of that which you get for
> >>> free, and I think that's value enough for users to push for this.
> >>
> >> The benefit of not building images ourselves is that we are clearly
> >> communicating that the responsibility for maintaining the images
> >> falls on whoever *does* build them. There can be no question in any
> >> user's mind that the community somehow needs to maintain the content
> >> of the images for them, just because we're publishing new images
> >> at some regular cadence.
> > 
> > +1. It is really easy to think that saying "don't use this in
> > production" prevents people from using it in production. See: User
> > Survey 2017 and the number of folks reporting DevStack as their
> > production deployment tool.
> > 
> > We need to not only manage artifacts, but expectations. And with all the
> > confusion of projects in the openstack git namespace being officially
> > blessed openstack projects over the past few years, I can't imagine
> > people not thinking that openstack infra generated content in dockerhub
> > is officially supported content.
> 
> I totally agree, although I think daily rebuilds / per-commit rebuilds,
> together with a properly named repository, might limit expectations
> enough to remove the "supported" part of your sentence.
> 
> As a parallel, we refresh per-commit a Nova master source code tarball
> (nova-master.tar.gz). If a vulnerability is introduced in master but was
> never "released" with a version number, we silently fix it in master (no
> OSSA advisory published). People tracking master are supposed to be
> continuously tracking master.
> 
> Back to container image world, if we refresh those images daily and they
> are not versioned or archived (basically you can only use the latest and
> can't really access past dailies), I think we'd be in a similar situation ?
> 

The source tarballs are not production deployment tools and only
contain code for one project at a time and it is all our code, so
we don't have to track issues in any other components. The same
differences apply to the artifacts we publish to PyPI and NPM. So
it's similar, but different.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 08:55, Doug Hellmann  wrote:
> Excerpts from Chris Dent's message of 2017-05-17 12:14:40 +0100:
>> On Wed, 17 May 2017, Thierry Carrez wrote:
>>
>> > Back to container image world, if we refresh those images daily and they
>> > are not versioned or archived (basically you can only use the latest and
>> > can't really access past dailies), I think we'd be in a similar situation ?
>>
>> Yes, this.
>>
>
> Is that how container publishing works? Can we overwrite an existing
> archive, so that there is only ever 1 version of a published container
> at any given time?

We can do it either way, but that's how we want it, top of stable
branch daily + top of master.

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Steven Dake (stdake)
Doug,

When a docker image is pushed to dockerhub (or quay.io, etc) a tag is 
specified.  If the tag already exists, it is overwritten.
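
In concrete terms (the image name below is just an example):

    docker build -t kolla/centos-binary-nova-api:4.0.0 .
    docker push  kolla/centos-binary-nova-api:4.0.0     # first publish of the tag
    # ... rebuild later ...
    docker push  kolla/centos-binary-nova-api:4.0.0     # same tag: the old image is replaced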

Regards
-steve

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, May 17, 2017 at 8:55 AM
To: openstack-dev 
Subject: Re: [openstack-dev]
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
do we want to be publishing binary container images?

Excerpts from Chris Dent's message of 2017-05-17 12:14:40 +0100:
> On Wed, 17 May 2017, Thierry Carrez wrote:
> 
> > Back to container image world, if we refresh those images daily and they
> > are not versioned or archived (basically you can only use the latest and
> > can't really access past dailies), I think we'd be in a similar 
situation ?
> 
> Yes, this.
> 

Is that how container publishing works? Can we overwrite an existing
archive, so that there is only ever 1 version of a published container
at any given time?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Fox, Kevin M
What kolla's been discussing is having something like:
4.0.0-1, 4.0.0-2, 4.0.0-3, etc.
only keeping the most recent two. and then aliases for:
4.0.0 pointing to the newest.

This allows helm upgrade to atomically roll forward/back properly. If you drop 
releases, k8s can't properly do atomic upgrades. You will get inconsistent 
rollouts and will not know which containers are old and have the security 
issues. Knowing there is a newer -revision number also notifies you that you 
are running something old and need to update.
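
As a sketch of what that could look like on the registry side (tag names
purely illustrative):

    kolla/centos-binary-nova-api:4.0.0-41   # previous build, kept so k8s can roll back
    kolla/centos-binary-nova-api:4.0.0-42   # newest build
    kolla/centos-binary-nova-api:4.0.0      # alias, re-pointed at the newest (-42)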

Thanks,
Kevin

From: Chris Dent [cdent...@anticdent.org]
Sent: Wednesday, May 17, 2017 4:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On Wed, 17 May 2017, Thierry Carrez wrote:

> Back to container image world, if we refresh those images daily and they
> are not versioned or archived (basically you can only use the latest and
> can't really access past dailies), I think we'd be in a similar situation ?

Yes, this.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Proposal to change the timing of Feature Freeze

2017-05-17 Thread Dmitry Tantsur

On 05/17/2017 05:34 PM, Chris Jones wrote:

Hey folks

I have a fairly simple proposal to make - I'd like to suggest that Feature 
Freeze move to being much earlier in the release cycle (no earlier than M.1 and 
no later than M.2 would be my preference).


I welcome projects to experiment with it, but a strong -1 to making it any kind of 
policy. Actually, we mostly got rid of the feature freeze in Ironic because of 
how it did not work for us.




In the current arrangement (looking specifically at Pike), FF is scheduled to 
happen 5 weeks before the upstream release. This means that of a 26 week release 
cycle, 21 weeks are available for making large changes, and only 5 weeks are 
available for stabilising the release after the feature work has landed 
(possibly less if FF exceptions are granted).


In my experience, significant issues are generally still being found after the 
upstream release, by which point fixing them is much harder - the patches need 
to land twice (master and stable/foo) and master may already have diverged.


If the current model were inverted, and ~6 weeks of each release were available 
for landing features, there would be ~20 weeks available for upstream and 
downstream folk to do their testing/stabilising work. The upstream release ought 
to have a higher quality, and downstream releases would be more likely to be 
able to happen at the same time.


6 weeks of each release is not enough to land anything serious IMO. Most big 
features I remember took months of review to land.


Also what this is going to end up with is folks working on features during the 
freeze with -2 applied. And then in these 6 weeks they'll put *enormous* 
pressure on the core team to merge their changes, knowing that otherwise they'll 
have to wait 5 months more.


Even our regular feature freeze had this effect to some extent. People don't 
plan that early, not all code patches are of equal quality initially, etc. While 
you can blame them for it (and you'll probably be right), this will still end up 
in the core team being pressurized.


This will also make prioritization much more difficult, as a feature that was 
not deemed a priority is unlikely to land at all. We already have complaints 
from some contributors about uneven prioritization; your proposal has a chance of 
making it much worse.




Obviously not all developers would be working on the stabilisation work for 
those ~20 weeks, many would move on to working on features for the following 
release, which would then be ready to land in the much shorter period.


From our experience, most of the non-core developers just go away during the 
feature freeze, and come back when it's lifted. I don't remember any increase in 
the number of bugs fixed during that time frame. Most of the folks not deploying 
from master still start serious testing after the GA.


And this proposal may make it harder for people to justify full-time work on 
certain projects.




This might slow the feature velocity of projects, and maybe ~6 weeks is too 
aggressive, but I feel that the balance right now is weighted strongly against 
timely, stable releasing of OpenStack, particularly for downstream consumers :)


Rather than getting hung up on the specific numbers of weeks, perhaps it would 
be helpful to start with opinions on whether or not there is enough 
stabilisation time in the current release schedules.


It may differ per project. We have enough time, we don't have enough people 
testing and fixing bugs before the GA.


As an alternative, I'd prefer people to work with stable branches more actively, 
and to deploy a kind of CI/CD system that will allow them to test code at any point 
in time, not only close to the release.




--
Cheers,

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Fox, Kevin M
You can do that, but it doesn't play well with orchestration systems such as k8s, 
as it removes their ability to know when upgraded containers appear.
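
To make that concrete: a k8s Deployment only rolls its pods when the image
reference in its spec changes, so a tag that is silently rebuilt upstream gives
it nothing to react to. A rough sketch of the kind of manual bump that forces,
using the kubernetes Python client (the image and deployment names here are
made up for illustration, not anything Kolla actually publishes):

    # rough sketch, assuming the kubernetes Python client (pip install kubernetes)
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # bumping the image reference (new tag or digest) is what tells k8s that an
    # upgraded container exists; re-pushing the same tag gives it nothing to see
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "nova-api",
         "image": "example.org/kolla/centos-binary-nova-api:5.0.1"}]}}}}

    apps.patch_namespaced_deployment(name="nova-api", namespace="openstack",
                                     body=patch)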

Thanks,
Kevin

* As always, sorry for top posting, but my organization does not allow me the 
choice of mail software.

From: Doug Hellmann [d...@doughellmann.com]
Sent: Wednesday, May 17, 2017 8:55 AM
To: openstack-dev
Subject: Re: [openstack-dev]
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
do we want to be publishing binary container images?

Excerpts from Chris Dent's message of 2017-05-17 12:14:40 +0100:
> On Wed, 17 May 2017, Thierry Carrez wrote:
>
> > Back to container image world, if we refresh those images daily and they
> > are not versioned or archived (basically you can only use the latest and
> > can't really access past dailies), I think we'd be in a similar situation ?
>
> Yes, this.
>

Is that how container publishing works? Can we overwrite an existing
archive, so that there is only ever 1 version of a published container
at any given time?
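
For reference, a registry tag is just a mutable pointer: pushing a new build
under the same tag replaces what consumers pull by that tag, and only the
content digest still identifies a specific build. A rough sketch of that flow
with the Docker SDK for Python, using a made-up repository name:

    # rough sketch, assuming the Docker SDK for Python (pip install docker)
    import docker

    client = docker.from_env()

    # rebuild and push under the same tag as yesterday's image; the registry
    # tag now points at the new image, and the old build is no longer reachable
    # by that tag (only by its sha256 digest, if anyone recorded it)
    image, _ = client.images.build(path="docker/nova-api",
                                   tag="example.org/kolla/nova-api:latest")
    client.images.push("example.org/kolla/nova-api", tag="latest")

    print(image.id)   # image IDs/digests are the only stable handles to a build

In other words, daily republishing under a fixed tag leaves no archived
versions unless consumers pin and mirror digests themselves.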

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] IRC Meeting cancelled today May 17, 2017 and reconvene on May 24

2017-05-17 Thread HU, BIN
Hello folks,



Many core members and participants of our project are on vacation now, and 
won't be able to participate in the IRC meeting today.



Let us cancel it for today and reconvene next week on May 24.



Thanks



Bin



[1] https://wiki.openstack.org/wiki/Gluon

[2] https://wiki.openstack.org/wiki/Meetings/Gluon



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] PTG involvement

2017-05-17 Thread Ben Swartzlander
As I've mentioned in past meetings, the Manila community needs to decide 
whether the OpenStack PTG is a venue we want to take advantage of in the 
future. In particular, there's a deadline to declare our plans for the 
Denver PTG in September by the end of the week.


Personally I'm conflicted because there are pros and cons either way, 
but having attended the Summit in Boston last week, I think I have all 
the information needed to form my own opinion and I'd really like to 
hear from the rest of you at the weekly meeting tomorrow.


I believe our choice breaks down into 2 broad categories:

1) Continue to be involved with PTG. In this case we would need to lobby 
the foundation organizers to reduce the kinds of scheduling conflicts 
that made the Atlanta PTG so problematic for our team.


2) Drop out of PTG and plan a virtual PTG just for Manila a few weeks 
before or after the official PTG. In this case we would encourage team 
members to get together at the summits for face to face discussions.



Pros for (1):
* This is clearly what the foundation wants
* For US-based developers it would save money compared to (2)
* It ensures that timezone issues don't prevent participation in discussions

Cons for (1):
* We'd have to fight with cross project sessions and other project 
sessions (notably Cinder) for time slots to meet. Very likely it will be 
impossible to participate in all 3 tracks, which some of us currently 
try to do.
* Some developers won't get budget for travel because it's not the kind 
of conference where there are customers and salespeople (and thus lots 
of spare money).


Pros for (2):
* Virtual meetups have worked out well for us in the past, and they save 
money.

* It allows us to easily avoid any scheduling conflicts with other tracks.
* It avoids exhaustion at the PTG itself where trying to participate in 
3 tracks would probably mean no downtime.
* It's pretty easy to get budget to travel to summits because there are 
customers and salespeople, so face to face time could be preserved by 
hanging out in hacking rooms and using forum sessions.


Cons for (2):
* Virtual meetups always cause problems for about 1/3 of the world's 
timezones. In the past this has meant west coast USA, and Asia/Pacific 
have been greatly inconvenienced because most participants were east 
coast USA and Europe based.
* Less chance for cross pollination to occur at PTG where people from 
other projects drop in.



Based on the pros/cons I personally lean towards (2), but I look forward 
to hearing from the community.


There is one more complication affecting this decision, which is that 
the very next summit is planned for Sydney, which is uniquely far away 
and expensive to travel to (for most of the core team). For that summit 
only, I expect the argument that it's easier to travel to summits than 
PTGs to be less true, because Sydney might be simply too expensive or 
time consuming for some of us. In the long run though I expect Sydney to 
be an outlier and most summits will be relatively cheap/easy to travel to.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-17 Thread Major Hayden

Hey there,

After my talk[0] at the OpenStack Summit in Boston about the 
openstack-ansible-security role, I received plenty of feedback.  Thanks to 
everyone who attended the talk and suffered through my Texas accent. ;)

One of the pieces of feedback is around the name of the role. A common 
misconception is that the role only works with the OpenStack-Ansible (OSA) 
project itself since 'openstack-ansible-' is in the name. OSA is definitely not 
required and it's possible to use the role with any physical or virtual host 
that may or may not be running OpenStack. I've done my best to make that clear 
whenever it comes up in conversation, but the name still causes some confusion.

The role ended up with its name because it was originally designed to work in 
tandem with OSA and it grew up in the OSA community. Almost all of the 
OSA-related roles follow the 'openstack-ansible-<role name>' syntax and the 
security role was no exception.

With all of that said, I'm curious to know if it's worth the effort to rename 
it, and if so, what the suggested names might be. This will obviously affect 
downstream projects that rely on the openstack-ansible-security role and that's 
the reason I'm bringing this up on the list.

So my questions are:

  1) Should the openstack-ansible-security role be
 renamed to alleviate confusion?

  2) If it should be renamed, what's your suggestion?

Thanks!

--
Major Hayden

[0] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/17616/securing-openstack-clouds-and-beyond-with-ansible

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-17 Thread Adam Spiers

I don't see any reason why masakari couldn't handle that, but you'd
have to ask Sampath and the masakari team whether they would consider
that in scope for their roadmap.

Waines, Greg  wrote:

Sure.  I can propose a new user story.

And then are you thinking of including this user story in the scope of what 
masakari would be looking at ?

Greg.


From: Adam Spiers 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Wednesday, May 17, 2017 at 10:08 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Thanks for the clarification Greg.  This sounds like it has the
potential to be a very useful capability.  May I suggest that you
propose a new user story for it, along similar lines to this existing
one?

http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html

Waines, Greg <greg.wai...@windriver.com> wrote:
Yes that’s correct.
VM Heartbeating / Health-check Monitoring would introduce intrusive / white-box 
type monitoring of VMs / Instances.

I realize this is somewhat in the gray-zone of what a cloud should be 
monitoring or not,
but I believe it provides an alternative for Applications deployed in VMs that 
do not have an external monitoring/management entity like a VNF Manager in the 
MANO architecture.
And even for VMs with VNF Managers, it provides a highly reliable alternate 
monitoring path that does not rely on Tenant Networking.

You’re correct that VM HB/HC Monitoring would leverage
https://wiki.libvirt.org/page/Qemu_guest_agent
which would require the agent to be installed in the images to talk back to 
the compute host.
( there are other examples of similar approaches in openstack ... the 
murano-agent for installation, the swift-agent for object store management )
Although here, in the case of VM HB/HC Monitoring via the QEMU Guest Agent, 
the messaging path is internal, thru a QEMU virtual serial device, i.e. a very 
simple interface with very few dependencies ... it’s up and available very 
early in the VM lifecycle and virtually always up.
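
As a rough sketch of what that path looks like from the compute host side
(assuming libvirt-python and the qemu-guest-agent daemon already running inside
the guest; the domain name and timeout below are made up):

    # rough sketch: ask the guest agent, over its virtio-serial channel, if it is alive
    import json
    import libvirt
    import libvirt_qemu

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # hypothetical libvirt domain name

    # guest-ping is answered by the qemu-guest-agent daemon inside the VM; a reply
    # within the timeout is the heartbeat, no reply suggests a failed or hung guest
    reply = libvirt_qemu.qemuAgentCommand(dom,
                                          json.dumps({'execute': 'guest-ping'}),
                                          5,   # timeout in seconds
                                          0)
    print(reply)   # '{"return": {}}' when the guest answers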

Wrt failure modes / use-cases:

* a VM’s response to a Heartbeat Challenge Request can be as simple as just
  ACK-ing; this alone allows for detection of:
    o a failed or hung QEMU/KVM instance, or
    o a failed or hung VM’s OS, or
    o a failure of the VM’s OS to schedule the QEMU Guest Agent daemon, or
    o a failure of the VM to route basic IO via linux sockets.

* I have had feedback that this is similar to the virtual hardware watchdog
  of QEMU/KVM ( https://libvirt.org/formatdomain.html#elementsWatchdog )

* However, the VM Heartbeat / Health-check Monitoring
    o provides a higher-level (i.e. application-level) heartbeating,
      i.e. if the Heartbeat requests are being answered by the Application
      running within the VM
    o provides more than just heartbeating, as the Application can use it to
      trigger a variety of audits,
    o provides a mechanism for the Application within the VM to report a
      Health Status / Info back to the Host / Cloud,
    o provides notification of the Heartbeat / Health-check status to
      higher-level cloud entities thru Vitrage,
      e.g. VM-Heartbeat-Monitor - to - Vitrage - (EventAlarm) - Aodh - ... - VNF-Manager
                                               - (StateChange) - Nova - ... - VNF Manager


Greg.


From: Adam Spiers <aspi...@suse.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, May 16, 2017 at 7:29 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck 
Monitoring

Waines, Greg <greg.wai...@windriver.com> wrote:
thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit into 
Masakari.

Currently your instance monitoring looks like it is strictly black-box type 
monitoring thru libvirt events.
Is that correct ?
i.e. you do not do any intrusive type monitoring of the instance thru the QEMU 
Guest Agent facility
  correct ?

That is correct:

https://github.com/openstack/masakari-monitors/blob/master/masakarimonitors/instancemonitor/instance.py
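
For anyone not familiar with that style, a minimal sketch of the black-box
approach, assuming libvirt-python; this is only an illustration of watching
libvirt lifecycle events from the host, not the actual masakari-monitors code:

    # rough sketch: watch libvirt lifecycle events without looking inside the guest
    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        # event is e.g. started/stopped/crashed/suspended; a monitor turns these
        # into notifications purely from the host's point of view
        print('domain %s: lifecycle event %d (detail %d)'
              % (dom.name(), event, detail))

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)

    while True:
        libvirt.virEventRunDefaultImpl()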

I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.

OK, so you are looking for something slightly different I guess, based
on this QEMU guest agent?

   https://wiki.libvirt.org/page/Qemu_guest_agent

That would require the agent to be installed in the images, which is
extra work but I imagine quite easily justifiable in some scenarios.
What failure modes do you have in mind for covering with this
approach - things like the guest kernel fre

Re: [openstack-dev] [manila] PTG involvement

2017-05-17 Thread Kendall Nelson
Hello Ben,

We started discussing ways of minimizing collisions between projects like
Cinder and Manila for the upcoming PTG and think we are close to a
solution. More details to come, but that doesn't need to be a con as we are
working to address those issues this PTG :)

-Kendall (diablo_rojo)

On Wed, May 17, 2017 at 12:20 PM Ben Swartzlander 
wrote:

> As I've mentioned in past meetings, the Manila community needs to decide
> whether the OpenStack PTG is a venue we want to take advantage of in the
> future. In particular, there's a deadline to declare our plans for the
> Denver PTG in September by the end of the week.
>
> Personally I'm conflicted because there are pros and cons either way,
> but having attended the Summit in Boston last week, I think I have all
> the information needed to form my own opinion and I'd really like to
> hear from the rest of you at the weekly meeting tomorrow.
>
> I believe our choice breaks down into 2 broad categories:
>
> 1) Continue to be involved with PTG. In this case we would need to lobby
> the foundation organizers to reduce the kinds of scheduling conflicts
> that made the Atlanta PTG so problematic for our team.
>
> 2) Drop out of PTG and plan a virtual PTG just for Manila a few weeks
> before or after the official PTG. In this case we would encourage team
> members to get together at the summits for face to face discussions.
>
>
> Pros for (1):
> * This is clearly what the foundation wants
> * For US-based developers it would save money compared to (2)
> * It ensures that timezone issues don't prevent participation in
> discussions
>
> Cons for (1):
> * We'd have to fight with cross project sessions and other project
> sessions (notably Cinder) for time slots to meet. Very likely it will be
> impossible to participate in all 3 tracks, which some of us currently
> try to do.
> * Some developers won't get budget for travel because it's not the kind
> of conference where there are customers and salespeople (and thus lots
> of spare money).
>
> Pros for (2):
> * Virtual meetups have worked out well for us in the past, and they save
> money.
> * It allows us to easily avoid any scheduling conflicts with other tracks.
> * It avoids exhaustion at the PTG itself where trying to participate in
> 3 tracks would probably mean no downtime.
> * It's pretty easy to get budget to travel to summits because there are
> customers and salespeople, so face to face time could be preserved by
> hanging out in hacking rooms and using forum sessions.
>
> Cons for (2):
> * Virtual meetups always cause problems for about 1/3 of the world's
> timezones. In the past this has meant west coast USA, and Asia/Pacific
> have been greatly inconvenienced because most participants where east
> coast USA and Europe based.
> * Less chance for cross pollination to occur at PTG where people from
> other projects drop in.
>
>
> Based on the pros/cons I personally lean towards (2), but I look forward
> to hearing from the community.
>
> There is one more complication affecting this decision, which is that
> the very next summit is planned for Sydney, which is uniquely far away
> and expensive to travel to (for most of the core team). For that summit
> only, I expect the argument that it's easier to travel to summits than
> PTGs to be less true, because Sydney might be simply too expensive or
> time consuming for some of us. In the long run though I expect Sydney to
> be an outlier and most summits will be relatively cheap/easy to travel to.
>
> -Ben Swartzlander
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-17 Thread Jesse Pretorius
On 5/17/17, 6:25 PM, "Major Hayden"  wrote:

  1) Should the openstack-ansible-security role be
 renamed to alleviate confusion?

Sure, I’ve long thought that the name is misleading.

  2) If it should be renamed, what's your suggestion?

My uninspired suggestion is to make it something like ‘ansible-host-security’. 
A quick google search for that only shows the openstack-ansible-security role 
anyway. (



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] PTG involvement

2017-05-17 Thread Rodrigo Barbieri
Hello Ben,

That was a great summary of our choices. I believe it is worth mentioning
that the Summit in Sydney will be only 3 days long, so it will most likely be a
very tight schedule for some of the participants if we plan to do face to
face discussions there, as there will be a lot going on during the Summit.

This strengthens your argument that Sydney will be an outlier. So maybe we
could consider #1 one more time, and #2 for Vancouver.


Regards,

On Wed, May 17, 2017 at 2:20 PM, Ben Swartzlander 
wrote:

> As I've mentioned in past meetings, the Manila community needs to decide
> whether the OpenStack PTG is a venue we want to take advantage of in the
> future. In particular, there's a deadline to declare our plans for the
> Denver PTG in September by the end of the week.
>
> Personally I'm conflicted because there are pros and cons either way, but
> having attended the Summit in Boston last week, I think I have all the
> information needed to form my own opinion and I'd really like to hear from
> the rest of you at the weekly meeting tomorrow.
>
> I believe our choice breaks down into 2 broad categories:
>
> 1) Continue to be involved with PTG. In this case we would need to lobby
> the foundation organizers to reduce the kinds of scheduling conflicts that
> made the Atlanta PTG so problematic for our team.
>
> 2) Drop out of PTG and plan a virtual PTG just for Manila a few weeks
> before or after the official PTG. In this case we would encourage team
> members to get together at the summits for face to face discussions.
>
>
> Pros for (1):
> * This is clearly what the foundation wants
> * For US-based developers it would save money compared to (2)
> * It ensures that timezone issues don't prevent participation in
> discussions
>
> Cons for (1):
> * We'd have to fight with cross project sessions and other project
> sessions (notably Cinder) for time slots to meet. Very likely it will be
> impossible to participate in all 3 tracks, which some of us currently try
> to do.
> * Some developers won't get budget for travel because it's not the kind of
> conference where there are customers and salespeople (and thus lots of
> spare money).
>
> Pros for (2):
> * Virtual meetups have worked out well for us in the past, and they save
> money.
> * It allows us to easily avoid any scheduling conflicts with other tracks.
> * It avoids exhaustion at the PTG itself where trying to participate in 3
> tracks would probably mean no downtime.
> * It's pretty easy to get budget to travel to summits because there are
> customers and salespeople, so face to face time could be preserved by
> hanging out in hacking rooms and using forum sessions.
>
> Cons for (2):
> * Virtual meetups always cause problems for about 1/3 of the world's
> timezones. In the past this has meant west coast USA, and Asia/Pacific have
> been greatly inconvenienced because most participants where east coast USA
> and Europe based.
> * Less chance for cross pollination to occur at PTG where people from
> other projects drop in.
>
>
> Based on the pros/cons I personally lean towards (2), but I look forward
> to hearing from the community.
>
> There is one more complication affecting this decision, which is that the
> very next summit is planned for Sydney, which is uniquely far away and
> expensive to travel to (for most of the core team). For that summit only, I
> expect the argument that it's easier to travel to summits than PTGs to be
> less true, because Sydney might be simply too expensive or time consuming
> for some of us. In the long run though I expect Sydney to be an outlier and
> most summits will be relatively cheap/easy to travel to.
>
> -Ben Swartzlander
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rodrigo Barbieri
MSc Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-17 Thread Logan V.
On Wed, May 17, 2017 at 12:39 PM, Jesse Pretorius
 wrote:
> On 5/17/17, 6:25 PM, "Major Hayden"  wrote:
>
>   1) Should the openstack-ansible-security role be
>  renamed to alleviate confusion?
>
> Sure, I’ve long thought that the name is misleading.

+1 definitely in favor of a rename

>
>   2) If it should be renamed, what's your suggestion?
>
> My uninspired suggestion is to make it something like 
> ‘ansible-host-security’. A quick google search for that only shows the 
> openstack-ansible-security role anyway. (

No amazing suggestions from me... ansible-host-security is fine, maybe
ansible-security or ansible-hardening. Nothing you haven't already
thought of I'm sure.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-17 Thread Dave McCowan (dmccowan)

>
>So my questions are:
>
>  1) Should the openstack-ansible-security role be
> renamed to alleviate confusion?

+1 on the rename.

>
>  2) If it should be renamed, what's your suggestion?

How about linux-ansible-security?

>
>Thanks!
>
>- --
>Major Hayden
>
>[0] 
>https://www.openstack.org/summit/boston-2017/summit-schedule/events/17616/
>securing-openstack-clouds-and-beyond-with-ansible



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] OVB deployments for developers using tripleo-quickstart devmode

2017-05-17 Thread Ronelle Landy
Hello,

tripleo-quickstart devmode [1] is used by TripleO developers to test unmerged 
changes in review.openstack.org locally and reproduce the upstream TripleO CI.
Functionality has been added to tripleo-quickstart devmode to allow users to 
run TripleO deployments on an OVB host cloud.

See the documentation [2] for instructions on how to do a basic TripleO 
deployment with devmode.sh,
as well as more in-depth instructions on options like:
 - Removing old stacks and key pairs from the host cloud before deploying
 - Testing patches
 - Using alternative configurations/network isolation types

[1] - https://docs.openstack.org/developer/tripleo-quickstart/devmode.html
[2] - https://docs.openstack.org/developer/tripleo-quickstart/devmode-ovb.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-17 07:47:31 -0700:
> On 17 May 2017 at 04:14, Chris Dent  wrote:
> > On Wed, 17 May 2017, Thierry Carrez wrote:
> >
> >> Back to container image world, if we refresh those images daily and they
> >> are not versioned or archived (basically you can only use the latest and
> >> can't really access past dailies), I think we'd be in a similar situation
> >> ?
> >
> >
> > Yes, this.
> 
> I think it's not a bad idea to message "you are responsible for
> archving your containers". Do that, combine it with good toolset that
> helps users determine versions of packages and other metadata and
> we'll end up with something that itself would be greatly appreciated.
> 
> Few potential user stories.
> 
> I have OpenStack <100 nodes and need every single one of them, hence
> no CI. At the same time I want to have fresh packages to avoid CVEs. I
> deploy kolla with tip-of-the-stable-branch and setup cronjob that will
> upgrade it every week. Because my scenerio is quite typical and
> containers already ran through gates that tests my scenerio, I'm good.
> 
> Another one:
> 
> I have 300+ node cloud, heavy CI and security team examining every
> container. While I could build containers locally, downloading them is
> just simpler and effectively the same (after all, it's containers
> being tested not build process). Every download our security team
> scrutinize contaniers and uses toolset Kolla provides to help them.
> Additional benefit is that on top of our CI these images went through
> Kolla CI which is nice, more testing is always good.
> 
> And another one
> 
> We are Kolla community. We want to provide testing for full release
> upgrades every day in gates, to make sure OpenStack and Kolla is
> upgradable and improve general user experience of upgrades. Because
> infra is resource constrained, we cannot afford building 2 sets of
> containers (stable and master) and doing deploy->test->upgrade->test.
> However because we have these cached containers, that are fresh and
> passed CI for deploy, we can just use them! Now effectively we're not
> only testing Kolla's correctness of upgrade procedure but also all the
> other project team upgrades! Oh, it seems Nova merged something that
> negatively affects upgrades, let's make sure they are aware!
> 
> And last one, which cannot be underestimated
> 
> I am CTO of some company and I've heard OpenStack is no longer hard to
> deploy, I'll just download kolla-ansible and try. I'll follow this
> guide that deploys simple OpenStack with 2 commands and few small
> configs, and it's done! Super simple! We're moving to OpenStack and
> start contributing tomorrow!
> 
> Please, let's solve messaging problems, put burden of archiving on
> users, whatever it takes to protect our community from wrong
> expectations, but not kill this effort. There are very real and
> immediate benefits to OpenStack as a whole if we do this.
> 
> Cheers,
> Michal

You've presented some positive scenarios. Here's a worst case
situation that I'm worried about.

Suppose in a few months the top several companies contributing to
kolla decide to pull out of or reduce their contributions to
OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
redirect their efforts to other projects.  Maybe they start
contributing directly to kubernetes. The kolla team is hit badly,
and all of the people from that team who know how the container
publishing jobs work are gone.

The day after everyone says goodbye, the build breaks. Maybe a bad
patch lands, or maybe some upstream assumption changes. The issue
isn't with the infra jobs themselves. The break means no new container
images are being published. Since there's not much of a kolla team
any more, it looks like it will be a while before anyone has time
to figure out how to fix the problem.

Later that same day, a new zero-day exploit is announced in a
component included in all or most of those images. Something that
isn't developed in the community, such as OpenSSL or glibc. The
exploit allows a complete breach of any app running with it. All
existing published containers include the bad bits and need to be
updated.

We now have an unknown number of clouds running containers built
by the community with major security holes. The team responsible
for maintaining those images is a shambles, but even if they weren't
the automation isn't working, so no new images can be published.
The consumers of the existing containers haven't bothered to set
up build pipelines of their own, because why bother? Even though
we've clearly said the images "we" publish are for our own testing,
they have found it irresistibly convenient to use them and move on
with their lives.

When the exploit is announced, they start clamoring for new container
images, and become understandably irate when we say we didn't think
they would be using them in production and they *shouldn't have*
and their problems are not our problems because we tol

[openstack-dev] [tc][swg] Updates on the TC Vision for 2019

2017-05-17 Thread Colette Alexander
Hi everyone!

Just wanted to send the community some updates on the vision front and also
get a discussion with the members of the technical committee going here for
next steps on what we're up to with the TC Vision for 2019 [0].

A couple things:

1. We finished our feedback phase by closing the survey we had open, and
posting all collected feedback in a document for TC (and community) review
[1] and having a session at the Boston Forum where we presented the vision,
and also took feedback from those present [2]. There is also some feedback
in Gerrit for the review up of the first draft [0], so that's a few places
we've covered for feedback.

2. We're now entering the phase of work where we incorporate feedback into
the next draft of the vision and finalize it for posterity/the governance
repository/ourselves.

So! What should feedback incorporation look like? I think we need to have
mbmers of the TC pipe up a bit here to discuss timelines for this round and
also who all will be involved. Some rough estimates we came up with were
something over the course of the 2-3 weeks after the Forum that could then
be +2d into governance by mid-June or so. I'm not sure if that's possible
based on everyone's travel/work/vacation schedules for the summer, but it
would be great. Thoughts on that?

Also - I promised to attempt to come up with some general themes in
feedback we've uncovered for suggestions for edits:

 - There is (understandably) a lot of feedback around what the nature of
the vision is itself. This might best be cleared up with a quick prologue
explaining why the vision reads the way it does, and what the intention was
behind writing it this way.
 - The writing about constellations, generally, got quite a bit of feedback
(some folks wanted more explanation, others wanted it to be more succinct,
so I think wading through the data to read everyone's input on this is
valuable here)
 - Ease of installation & upgrades is language that comes up multiple times
 - Many people asked for a bullet-points version, and/or that it be edited
& shortened a bit
 - A few people said 2 years was not enough time and this read more like a
5 year vision
 - A few people mentioned issues of OpenStack at scale and wondering
whether that might be addressed in the vision
 - There was some question of whether it was appropriate to vision for "x
number of users with yz types of deployments" as a target number to hit in
this vision, or whether it should be left for an OpenStack-wide community
vision.

Some favorites:
 - Lots of people loved the idea of constellations
 - A lot of likes for better mentoring & the ladders program
 - Some likes for diversity across the TC of the future

Okay - I think that's a pretty good summary. If anyone else has any
feedback they'd like to make sure gets into the conversation, please feel
free to reply in this thread.

Thanks!

-colette/gothicmindfood



[0] https://review.openstack.org/#/c/453262/
[1]
https://docs.google.com/spreadsheets/d/1YzHPP2EQh2DZWGTj_VbhwhtsDQebAgqldyi1MHm6QpE/edit?usp=sharing
[2]
https://www.openstack.org/videos/boston-2017/the-openstack-technical-committee-vision-for-2019-updates-stories-and-q-and-a
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 11:04, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-17 07:47:31 -0700:
>> On 17 May 2017 at 04:14, Chris Dent  wrote:
>> > On Wed, 17 May 2017, Thierry Carrez wrote:
>> >
>> >> Back to container image world, if we refresh those images daily and they
>> >> are not versioned or archived (basically you can only use the latest and
>> >> can't really access past dailies), I think we'd be in a similar situation
>> >> ?
>> >
>> >
>> > Yes, this.
>>
>> I think it's not a bad idea to message "you are responsible for
>> archving your containers". Do that, combine it with good toolset that
>> helps users determine versions of packages and other metadata and
>> we'll end up with something that itself would be greatly appreciated.
>>
>> Few potential user stories.
>>
>> I have OpenStack <100 nodes and need every single one of them, hence
>> no CI. At the same time I want to have fresh packages to avoid CVEs. I
>> deploy kolla with tip-of-the-stable-branch and setup cronjob that will
>> upgrade it every week. Because my scenerio is quite typical and
>> containers already ran through gates that tests my scenerio, I'm good.
>>
>> Another one:
>>
>> I have 300+ node cloud, heavy CI and security team examining every
>> container. While I could build containers locally, downloading them is
>> just simpler and effectively the same (after all, it's containers
>> being tested not build process). Every download our security team
>> scrutinize contaniers and uses toolset Kolla provides to help them.
>> Additional benefit is that on top of our CI these images went through
>> Kolla CI which is nice, more testing is always good.
>>
>> And another one
>>
>> We are Kolla community. We want to provide testing for full release
>> upgrades every day in gates, to make sure OpenStack and Kolla is
>> upgradable and improve general user experience of upgrades. Because
>> infra is resource constrained, we cannot afford building 2 sets of
>> containers (stable and master) and doing deploy->test->upgrade->test.
>> However because we have these cached containers, that are fresh and
>> passed CI for deploy, we can just use them! Now effectively we're not
>> only testing Kolla's correctness of upgrade procedure but also all the
>> other project team upgrades! Oh, it seems Nova merged something that
>> negatively affects upgrades, let's make sure they are aware!
>>
>> And last one, which cannot be underestimated
>>
>> I am CTO of some company and I've heard OpenStack is no longer hard to
>> deploy, I'll just download kolla-ansible and try. I'll follow this
>> guide that deploys simple OpenStack with 2 commands and few small
>> configs, and it's done! Super simple! We're moving to OpenStack and
>> start contributing tomorrow!
>>
>> Please, let's solve messaging problems, put burden of archiving on
>> users, whatever it takes to protect our community from wrong
>> expectations, but not kill this effort. There are very real and
>> immediate benefits to OpenStack as a whole if we do this.
>>
>> Cheers,
>> Michal
>
> You've presented some positive scenarios. Here's a worst case
> situation that I'm worried about.
>
> Suppose in a few months the top several companies contributing to
> kolla decide to pull out of or reduce their contributions to
> OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
> redirect their efforts to other projects.  Maybe they start
> contributing directly to kubernetes. The kolla team is hit badly,
> and all of the people from that team who know how the container
> publishing jobs work are gone.

There are only 2 ways to defend against that: diverse community, which
we have. If Intel, Red Hat, Oracle, Cisco and IBM back out of
OpenStack, we'd still have almost 50% of contributors. I think we're
much more likely to survive than most other Big Tent projects. In
fact, I'd think with our current diversity, that we'll survive for as
long as OpenStack survives.

Also, all the more reason why *we shouldn't build images personally*;
we should have an autonomous process to do it for us.

> The day after everyone says goodbye, the build breaks. Maybe a bad
> patch lands, or maybe some upstream assumption changes. The issue
> isn't with the infra jobs themselves. The break means no new container
> images are being published. Since there's not much of a kolla team
> any more, it looks like it will be a while before anyone has time
> to figure out how to fix the problem.

> Later that same day, a new zero-day exploit is announced in a
> component included in all or most of those images. Something that
> isn't developed in the community, such as OpenSSL or glibc. The
> exploit allows a complete breach of any app running with it. All
> existing published containers include the bad bits and need to be
> updated.

I guess this is a problem of all the software ever written. If the community
dies around it, people who use it are in lots of trouble. One way to
make sure it won't happen is

Re: [openstack-dev] [tc] revised Postgresql support status patch for governance

2017-05-17 Thread Sean Dague
On 05/15/2017 07:16 AM, Sean Dague wrote:
> We had a forum session in Boston on Postgresql and out of that agreed to
> the following steps forward:
> 
> 1. explicitly warn in operator facing documentation that Postgresql is
> less supported than MySQL. This was deemed better than just removing
> documentation, because when people see Postgresql files in tree they'll
> make assumptions (at least one set of operators did).
> 
> 2. Suse is in process of investigating migration from PG to Gallera for
> future versions of their OpenStack product. They'll make their findings
> and tooling open to help determine how burdensome this kind of
> transition would be for folks.
> 
> After those findings, we can come back with any next steps (or just
> leave it as good enough there).
> 
> The TC governance patch is updated here -
> https://review.openstack.org/#/c/427880/ - or if there are other
> discussion questions feel free to respond to this thread.

In the interest of building summaries of progress, as there has been a
bunch of lively discussion on #openstack-dev today, there is a new
revision out there - https://review.openstack.org/#/c/427880/.

Some of the concerns/feedback has been "please describe things that are
harder by this being an abstraction", so examples are provided.

A statement around support was also put in there, because support only
meant QA jobs, or only developers for some folks. I think it's important
to ensure we paint the whole picture with how people get support in an
Open Source project.

There seems to be general agreement that we need to be more honest with
users, and that we've effectively been lying to them.

I feel like the current sticking points come down to whether:

* it's important that the operator community largely is already in one
camp or not
* future items listed that are harder are important enough to justify a
strict trade off here
* it's ok to have the proposal have a firm lean in tone, even though
it's set of concrete actions are pretty reversible and don't commit to
future removal of postgresql


Also, as I stated on IRC, if some set of individuals came through and
solved all the future problems on the list for us as a community, my
care on how many DBs supported would drastically decrease. Because its
the fact that it's costing us solving real problems that we want to
solve (by making them too complex for anyone to take on), is my key
concern. For folks asking the question about what they could do to make
pg a first class citizen, that's a pretty good starting point.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 11:36, Michał Jastrzębski  wrote:
> On 17 May 2017 at 11:04, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-05-17 07:47:31 -0700:
>>> On 17 May 2017 at 04:14, Chris Dent  wrote:
>>> > On Wed, 17 May 2017, Thierry Carrez wrote:
>>> >
>>> >> Back to container image world, if we refresh those images daily and they
>>> >> are not versioned or archived (basically you can only use the latest and
>>> >> can't really access past dailies), I think we'd be in a similar situation
>>> >> ?
>>> >
>>> >
>>> > Yes, this.
>>>
>>> I think it's not a bad idea to message "you are responsible for
>>> archving your containers". Do that, combine it with good toolset that
>>> helps users determine versions of packages and other metadata and
>>> we'll end up with something that itself would be greatly appreciated.
>>>
>>> Few potential user stories.
>>>
>>> I have OpenStack <100 nodes and need every single one of them, hence
>>> no CI. At the same time I want to have fresh packages to avoid CVEs. I
>>> deploy kolla with tip-of-the-stable-branch and setup cronjob that will
>>> upgrade it every week. Because my scenerio is quite typical and
>>> containers already ran through gates that tests my scenerio, I'm good.
>>>
>>> Another one:
>>>
>>> I have 300+ node cloud, heavy CI and security team examining every
>>> container. While I could build containers locally, downloading them is
>>> just simpler and effectively the same (after all, it's containers
>>> being tested not build process). Every download our security team
>>> scrutinize contaniers and uses toolset Kolla provides to help them.
>>> Additional benefit is that on top of our CI these images went through
>>> Kolla CI which is nice, more testing is always good.
>>>
>>> And another one
>>>
>>> We are Kolla community. We want to provide testing for full release
>>> upgrades every day in gates, to make sure OpenStack and Kolla is
>>> upgradable and improve general user experience of upgrades. Because
>>> infra is resource constrained, we cannot afford building 2 sets of
>>> containers (stable and master) and doing deploy->test->upgrade->test.
>>> However because we have these cached containers, that are fresh and
>>> passed CI for deploy, we can just use them! Now effectively we're not
>>> only testing Kolla's correctness of upgrade procedure but also all the
>>> other project team upgrades! Oh, it seems Nova merged something that
>>> negatively affects upgrades, let's make sure they are aware!
>>>
>>> And last one, which cannot be underestimated
>>>
>>> I am CTO of some company and I've heard OpenStack is no longer hard to
>>> deploy, I'll just download kolla-ansible and try. I'll follow this
>>> guide that deploys simple OpenStack with 2 commands and few small
>>> configs, and it's done! Super simple! We're moving to OpenStack and
>>> start contributing tomorrow!
>>>
>>> Please, let's solve messaging problems, put burden of archiving on
>>> users, whatever it takes to protect our community from wrong
>>> expectations, but not kill this effort. There are very real and
>>> immediate benefits to OpenStack as a whole if we do this.
>>>
>>> Cheers,
>>> Michal
>>
>> You've presented some positive scenarios. Here's a worst case
>> situation that I'm worried about.
>>
>> Suppose in a few months the top several companies contributing to
>> kolla decide to pull out of or reduce their contributions to
>> OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
>> redirect their efforts to other projects.  Maybe they start
>> contributing directly to kubernetes. The kolla team is hit badly,
>> and all of the people from that team who know how the container
>> publishing jobs work are gone.
>
> There are only 2 ways to defend against that: diverse community, which
> we have. If Intel, Red Hat, Oracle, Cisco and IBM back out of
> OpenStack, we'd still have almost 50% of contributors. I think we'll
> much more likely to survive than most of other Big Tent projects. In
> fact, I'd think with our current diversity, that we'll survive for as
> long as OpenStack survives.

Diverse community and off-by-one errors;) I was meaning to say diverse
community and involvement.

>
> Also all the more reasons why *we shouldn't build images personally*,
> we should have autonomous process to do it for us.
>
>> The day after everyone says goodbye, the build breaks. Maybe a bad
>> patch lands, or maybe some upstream assumption changes. The issue
>> isn't with the infra jobs themselves. The break means no new container
>> images are being published. Since there's not much of a kolla team
>> any more, it looks like it will be a while before anyone has time
>> to figure out how to fix the problem.
>
>> Later that same day, a new zero-day exploit is announced in a
>> component included in all or most of those images. Something that
>> isn't developed in the community, such as OpenSSL or glibc. The
>> exploit allows a complete breach of any app running with

Re: [openstack-dev] [tc][swg] Updates on the TC Vision for 2019

2017-05-17 Thread Doug Hellmann
Excerpts from Colette Alexander's message of 2017-05-17 14:29:07 -0400:
> Hi everyone!
> 
> Just wanted to send the community some updates on the vision front and also
> get a discussion with the members of the technical committee going here for
> next steps on what we're up to with the TC Vision for 2019 [0].
> 
> A couple things:
> 
> 1. We finished our feedback phase by closing the survey we had open, and
> posting all collected feedback in a document for TC (and community) review
> [1] and having a session at the Boston Forum where we presented the vision,
> and also took feedback from those present [2]. There is also some feedback
> in Gerrit for the review up of the first draft [0], so that's a few places
> we've covered for feedback.
> 
> 2. We're now entering the phase of work where we incorporate feedback into
> the next draft of the vision and finalize it for posterity/the governance
> repository/ourselves and ourselves.
> 
> So! What should feedback incorporation look like? I think we need to have
> mbmers of the TC pipe up a bit here to discuss timelines for this round and
> also who all will be involved. Some rough estimates we came up with were
> something over the course of the 2-3 weeks after the Forum that could then
> be +2d into governance by mid-June or so. I'm not sure if that's possible
> based on everyone's travel/work/vacation schedules for the summer, but it
> would be great. Thoughts on that?

The timeline depends on who signed up to do the next revision. Did
we get someone to do that, yet, or are we still looking for a
volunteer?  (Note that I am not volunteering here, just asking for
status.)

Doug

> 
> Also - I promised to attempt to come up with some general themes in
> feedback we've uncovered for suggestions for edits:
> 
>  - There is (understandably) a lot of feedback around what the nature of
> the vision is itself. This might best be cleared up with a quick prologue
> explaining why the vision reads the way it does, and what the intention was
> behind writing it this way.
>  - The writing about constellations, generally, got quite a bit of feedback
> (some folks wanted more explanation, others wanted it to be more succinct,
> so I think wading through the data to read everyone's input on this is
> valuable here)
>  - Ease of installation & upgrades are language that comes up multiple times
>  - Many people asked for a bullet-points version, and/or that it be edited
> & shortened a bit
>  - A few people said 2 years was not enough time and this read more like a
> 5 year vision
>  - A few people mentioned issues of OpenStack at scale and wondering
> whether that might be addressed in the vision
>  - There was some question of whether it was appropriate to vision for "x
> number of users with yz types of deployments" as a target number to hit in
> this vision, or whether it should be left for an OpenStack-wide community
> vision.
> 
> Some favorites:
>  - Lots of people loved the idea of constellations
>  - A lot of likes for better mentoring & the ladders program
>  - Some likes for diversity across the TC of the future
> 
> Okay - I think that's a pretty good summary. If anyone else has any
> feedback they'd like to make sure gets into the conversation, please feel
> free to reply in this thread.
> 
> Thanks!
> 
> -colette/gothicmindfood
> 
> 
> 
> [0] https://review.openstack.org/#/c/453262/
> [1]
> https://docs.google.com/spreadsheets/d/1YzHPP2EQh2DZWGTj_VbhwhtsDQebAgqldyi1MHm6QpE/edit?usp=sharing
> [2]
> https://www.openstack.org/videos/boston-2017/the-openstack-technical-committee-vision-for-2019-updates-stories-and-q-and-a

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-17 Thread Ken'ichi Ohmichi
+1, thanks Zhu for your work.

2017-05-16 4:22 GMT-04:00 Andrea Frittoli :
> Hello team,
>
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
>
> Over the past two cycle Fanglei has been steadily contributing to Tempest
> and its community.
> She's done a great deal of work in making Tempest code cleaner, easier to
> read, maintain and
> debug, fixing bugs and removing cruft. Both her code as well as her reviews
> demonstrate a
> very good understanding of Tempest internals and of the project future
> direction.
> I believe Fanglei will make an excellent addition to the team.
>
> As per the usual, if the current Tempest core team members would please vote
> +1
> or -1(veto) to the nomination when you get a chance. We'll keep the polls
> open
> for 5 days or until everyone has voted.
>
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl
>
> Thank you,
>
> Andrea (andreaf)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-05-17 Thread Jeremy Stanley
On 2017-05-17 16:16:30 +0200 (+0200), Thierry Carrez wrote:
[...]
> we need help with completing the migration to infra. If interested
> you can reach out to fungi (Infra team PTL) or mrmartin (who
> currently helps with the transition work).
[...]

The main blocker for us right now is addressed by an Infra spec
(Stackalytics is an unofficial project and it's unclear to us where
design discussions for it happen):

https://review.openstack.org/434951

In particular, getting the current Stackalytics developers on-board
with things like this is where we've been failing to make progress
mainly (I think) because we don't have a clear venue for discussions
and they're stretched pretty thin with other work. If we can get
some additional core reviewers for that project (and maybe even talk
about turning it into an official team or joining them up as a
deliverable for an existing team) that might help.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][swg] Updates on the TC Vision for 2019

2017-05-17 Thread Dean Troyer
On Wed, May 17, 2017 at 1:47 PM, Doug Hellmann  wrote:
> The timeline depends on who signed up to do the next revision. Did
> we get someone to do that, yet, or are we still looking for a
> volunteer?  (Note that I am not volunteering here, just asking for
> status.)

I believe John (johnthetubaguy), Chris (cdent) and I (dtroyer) are the
ones identified to drive the next steps.  Timing-wise, having this
wrapped up by 2nd week of June suits me great as I am planning some
time off about then.  I see that as having a solid 'final' proposal by
then, not necessarily having it approved.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-17 11:36:40 -0700:
> On 17 May 2017 at 11:04, Doug Hellmann  wrote:
>
> > You've presented some positive scenarios. Here's a worst case
> > situation that I'm worried about.
> >
> > Suppose in a few months the top several companies contributing to
> > kolla decide to pull out of or reduce their contributions to
> > OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
> > redirect their efforts to other projects.  Maybe they start
> > contributing directly to kubernetes. The kolla team is hit badly,
> > and all of the people from that team who know how the container
> > publishing jobs work are gone.
> 
> There are only 2 ways to defend against that: diverse community, which
> we have. If Intel, Red Hat, Oracle, Cisco and IBM back out of
> OpenStack, we'd still have almost 50% of contributors. I think we'll
> much more likely to survive than most of other Big Tent projects. In
> fact, I'd think with our current diversity, that we'll survive for as
> long as OpenStack survives.
> 
> Also all the more reasons why *we shouldn't build images personally*,
> we should have autonomous process to do it for us.
> 
> > The day after everyone says goodbye, the build breaks. Maybe a bad
> > patch lands, or maybe some upstream assumption changes. The issue
> > isn't with the infra jobs themselves. The break means no new container
> > images are being published. Since there's not much of a kolla team
> > any more, it looks like it will be a while before anyone has time
> > to figure out how to fix the problem.
> 
> > Later that same day, a new zero-day exploit is announced in a
> > component included in all or most of those images. Something that
> > isn't developed in the community, such as OpenSSL or glibc. The
> > exploit allows a complete breach of any app running with it. All
> > existing published containers include the bad bits and need to be
> > updated.
> 
> I guess this is problem of all the software ever written. If community
> dies around it, people who uses it are in lots of trouble. One way to
> make sure it won't happen is to get involved yourself to make sure you
> can fix what is broken for you. This is how open source works. In
> Kolla most of our contributors are actually operators who run these
> very containers in their own infrastructure. This is where our
> diversity comes from. We aren't distro and that makes us, and our
> users, more protected from this scenario.
> 
> If nova loses all of it's community, and someone finds critical bug in
> nova that allows hackers to gain access to vm data, there will be
> nobody to fix it, that's bad right? Same argument can be made. We
> aren't discussing deleting Nova tho right?

I think there's a difference there, because of the way nova and the
other components currently have an intermediary doing the distribution.

> > Contrast that with a scenario in which consumers either take
> > responsibility for their systems by building their own images, by
> > collaborating directly with other consumers to share the resources
> > needed to build those images, or by paying a third-party a sustainable
> > amount of money to build images for them. In any of those cases,
> > there is an incentive for the responsible party to be ready and
> > able to produce new images in a timely manner. Consumers of the
> > images know exactly where to go for support when they have problems.
> > Issues in those images don't reflect on the community in any way,
> > because we were not involved in producing them.
> 
> Unless as you said build system breaks, then they are equally screwed
> locally. Unless someone fix it, and they can fix it for openstack
> infra too. Difference is, for OpenStack infra it's whole community
> that can fix it where local it's just you. That's the strength of open
> source.

The difference is that it's definitely not the community's problem in
that case. I'm looking at this from a community perspective, and not the
deployer or operator.

> > As I said at the start of this thread, we've long avoided building
> > and supporting simple operating system style packages of the
> > components we produce. I am still struggling to understand how
> > building more complex artifacts, including bits over which we have
> > little or no control, is somehow more sustainable than those simple
> > packages.
> 
> Binaries are built as standalone projects. Nova-api has no
> dependencies build into .rpm. If issue you just described would happen
> in any of projects openstack uses as dependency, in any of these [1],
> same argument applies. We pin specific versions in upper constraints.
> I'm willing to bet money on it that if today one of these libs would
> release a CVE, there is a good chance we won't find out.

I don't understand what you're saying here. How do constraints we use in
our test gate enter into this?

> Bottom line, yes, we do *package* today with PIP. Exactly same issues
> apply to pip packages if ve

Re: [openstack-dev] [Heat] Heat template example repository

2017-05-17 Thread Zane Bitter

On 16/05/17 10:32, Lance Haig wrote:

What if instead of a directory per release, we just had a 'deprecated'
directory where we move stuff that is going away (e.g. anything
relying on OS::Glance::Image), and then deleted them when it
disappeared from any supported release (e.g. LBaaSv1 must be close if
it isn't gone already).


I agree in general this would be good. How would we deal with users who
are running older versions of openstack?
Most of the customers I support have Liberty and newer so I would
perhaps like to have these available as tested.
The challenge for us is that the newer the OpenStack version the more
features are available, e.g. conditionals etc.
To support that in a backwards compatible fashion is going to be tough I
think. Unless I am missing something.


'stable' branches could achieve that, and it's the most feasible way to 
actually CI test them against older releases anyway.



As we've proven, maintaining these templates has been a challenge
given the
available resources, so I guess I'm still in favor of not duplicating
a bunch
of templates, e.g perhaps we could focus on a target of CI testing
templates on the current stable release as a first step?


I'd rather do CI against Heat master, I think, but yeah that sounds
like the first step. Note that if we're doing CI on old stuff then
we'd need to do heat-templates stable branches rather than
directory-per-release.

With my suggestion above, we could just not check anything in the
'deprecated' directory maybe?

I agree in part.
If we are using the heat examples to test the functionality of the
master branch then that would be a good idea.
If we want to provide useable templates for users to reference and use
then I would suggest we test against stable.


The downside of that is you can't add a template that uses a new feature 
in Heat until after the next release (which includes that feature).


I think the answer here is to have stable heat-templates test against 
stable heat and master against master.



I am sure we could find a way to do both.
I would suggest that we first get reliable CICD running on the current
templates and fix what we can in there.
Then we can look at what would be a good way forward.

I am just brain dumping so any other ideas would also be good.



As you guys mentioned in our discussions the Networking example I
quoted is
not something you guys can deal with as the source project affects
this.

Unless we can use this exercise to test these and fix them then I am
happier.

My vision would be to have a set of templates and examples that are
tested
regularly against a running OS deployment so that we can make sure the
combinations still run. I am sure we can agree on a way to do this
with CICD
so that we test the feature set.


Agreed, getting the approach to testing agreed seems like the first
step -
FYI we do already have automated scenario tests in the main heat tree
that
consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario


So, in theory, getting a similar test running on heat_templates
should be
fairly simple, but getting all the existing templates working is
likely to
be a bigger challenge.


Even if we just ran the 'template validate' command on them to check
that all of the resource types & properties still exist, that would be
pretty helpful. It'd catch a lot of the times when we break backwards
compatibility so we can decide to either fix it or deprecate/remove
the obsolete template. (Note that you still need all of the services
installed, or at least endpoints in the catalog, for the validation to
work.)


So apparently Thomas already implemented this (nice!). The limitation is 
that we're ignoring errors where the template contains a resource where 
the endpoint is not in the service catalog. That's likely to mean a lot 
of these templates aren't _really_ getting tested.


In theory all we have to do to fix that is add endpoints for all of them 
in the catalog (afaik we shouldn't need to actually run any of the 
services).
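
For what it's worth, a rough sketch of the kind of check being discussed --
purely illustrative, assuming a heat-templates checkout at ./heat-templates
and python-openstackclient (with the heat plugin) plus OS_* credentials in the
environment -- could be as simple as walking the tree and validating each
template:

import os
import subprocess

failures = []
for root, _dirs, files in os.walk('heat-templates'):  # assumed checkout path
    for name in files:
        if not name.endswith(('.yaml', '.yml')):
            continue
        path = os.path.join(root, name)
        # 'openstack orchestration template validate' calls the Heat
        # validate API, so auth environment variables need to be set.
        result = subprocess.run(
            ['openstack', 'orchestration', 'template', 'validate',
             '--template', path],
            stdout=subprocess.DEVNULL, stderr=subprocess.PIPE)
        if result.returncode != 0:
            failures.append((path, result.stderr.decode().strip()))

for path, err in failures:
    print('%s: %s' % (path, err))
print('%d template(s) failed validation' % len(failures))

It would still hit the missing-endpoint caveat above, but it is cheap to run
against master.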


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-17 Thread Mike Carden
I think that a rename is a good idea. Ansible-host-security,
Ansible-host-hardening, Ansible-server-security, etc... all good options.

-- 
MC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Log coloring under systemd

2017-05-17 Thread Eric Fried
Folks-

As of [1], devstack will include color escapes in the default log
formats under systemd.  Production deployments can emulate as they see fit.

Note that journalctl will strip those color escapes by default, which
is why we thought we lost log coloring with systemd.  Turns out that you
can get the escapes to come through by passing the -a flag to
journalctl.  The doc at [2] has been updated accordingly.  If there are
any other go-to documents that could benefit from similar content,
please let me know (or propose the changes).

Thanks,
Eric (efried)

[1] https://review.openstack.org/#/c/465147/
[2] https://docs.openstack.org/developer/devstack/systemd.html
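
For anyone curious what "color escapes" means concretely, here is a minimal
stand-alone sketch (plain Python logging, not the actual devstack/oslo.log
configuration) that wraps the level name in ANSI escapes; journald stores
these verbatim, and journalctl only renders them when passed -a:

import logging

COLORS = {'DEBUG': '\033[36m', 'INFO': '\033[32m',
          'WARNING': '\033[33m', 'ERROR': '\033[31m'}
RESET = '\033[0m'

class ColorFormatter(logging.Formatter):
    def format(self, record):
        # Wrap the level name in an ANSI color escape (mutates the record,
        # which is fine for a single-handler demo).
        color = COLORS.get(record.levelname, '')
        record.levelname = '%s%s%s' % (color, record.levelname, RESET)
        return super(ColorFormatter, self).format(record)

handler = logging.StreamHandler()
handler.setFormatter(ColorFormatter('%(levelname)s %(name)s: %(message)s'))
logging.getLogger().addHandler(handler)
logging.getLogger('demo').error('this line carries colour escapes')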

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Log coloring under systemd

2017-05-17 Thread Sean Dague
On 05/17/2017 04:50 PM, Eric Fried wrote:
> Folks-
> 
>   As of [1], devstack will include color escapes in the default log
> formats under systemd.  Production deployments can emulate as they see fit.
> 
>   Note that journalctl will strip those color escapes by default, which
> is why we thought we lost log coloring with systemd.  Turns out that you
> can get the escapes to come through by passing the -a flag to
> journalctl.  The doc at [2] has been updated accordingly.  If there are
> any other go-to documents that could benefit from similar content,
> please let me know (or propose the changes).
> 
>   Thanks,
>   Eric (efried)
> 
> [1] https://review.openstack.org/#/c/465147/
> [2] https://docs.openstack.org/developer/devstack/systemd.html

Thanks for diving into this Eric, one more thing off the todo list!

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] LDAP user_id_attribute does not affect groups

2017-05-17 Thread Boris Kudryavtsev
Hello OpenStack-dev,

I am running Keystone in a virtual environment with LDAP backend.
When user_id_attribute is set to sn (and the LDAP directory is
configured accordingly),
`openstack user list --domain default --group test-group` results in
`Group member `userid` for group `f44a7fbb9e174ba5823474c759d43643`
not found in the directory.
The user should be removed from the group. The user will be ignored.`
for a groupOfNames that has userid as a member.

However, `openstack user list` works OK and lists all user names and ids.

Outputs: http://paste.openstack.org/show/609820/

It seems that the problem is here:
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/common.py#L1280

cn is used as the id attribute regardless of configuration in
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/core.py#L126.
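
For illustration only (this is NOT keystone code), the difference being
described boils down to whether the group-member lookup honours the configured
attribute or always reads cn:

# Toy sketch, not keystone's implementation -- it only illustrates the
# difference between hardcoding 'cn' and honouring user_id_attribute.
def member_to_user_id(ldap_entry, user_id_attribute='cn'):
    """Map an LDAP entry (attribute -> list of values) to a user ID."""
    return ldap_entry[user_id_attribute][0]

# Fake directory entry shaped like the pasted layout, where sn holds the ID.
entry = {'cn': ['test user'], 'sn': ['userid']}

print(member_to_user_id(entry))         # 'test user' -- the hardcoded path
print(member_to_user_id(entry, 'sn'))   # 'userid' -- what user_id_attribute = sn asks for

i.e. the group-membership path seems to take the first branch even when
keystone.conf asks for the second.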

keystone.conf: http://paste.openstack.org/show/609845/
LDAP directory: http://paste.openstack.org/show/609846/

Any ideas? This smells of a bug.

Boris Kudryavtsev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari] Intrusive Instance Monitoring

2017-05-17 Thread Waines, Greg
( I have been having a discussion with Adam Spiers on 
[openstack-dev][vitrage][nova] on this topic ... thought I would switchover to 
[masakari] )

I am interested in contributing an implementation of Intrusive Instance 
Monitoring,
initially specifically VM Heartbeat / Health-check Monitoring thru the QEMU 
Guest Agent (https://wiki.libvirt.org/page/Qemu_guest_agent).

I’d like to know whether Masakari project leaders would consider a blueprint on 
“VM Heartbeat / Health-check Monitoring”.
See below for some more details,
Greg.

-


VM Heartbeating / Health-check Monitoring would introduce intrusive / white-box 
type monitoring of VMs / Instances to Masakari.

Briefly, “VM Heartbeat / Health-check Monitoring”
· is optionally enabled thru a Nova flavor extra-spec,
· is a service that runs on an OpenStack Compute Node,
· it sends periodic Heartbeat / Health-check Challenge Requests to a VM
over a virtio-serial-device setup between the Compute Node and the VM thru QEMU,
( https://wiki.libvirt.org/page/Qemu_guest_agent )
· on loss of heartbeat or a failed health-check status, a fault event against
the VM is reported to Masakari and any other registered reporting backends
like Mistral or Vitrage.

I realize this is somewhat in the gray-zone of what a cloud should be 
monitoring or not,
but I believe it provides an alternative for Applications deployed in VMs that 
do not have an external monitoring/management entity like a VNF Manager in the 
MANO architecture.
And even for VMs with VNF Managers, it provides a highly reliable alternate 
monitoring path that does not rely on Tenant Networking.

VM HB/HC Monitoring would leverage  
https://wiki.libvirt.org/page/Qemu_guest_agent
that would require the agent to be installed in the images for talking back to 
the compute host.
( there are other examples of similar approaches in openstack ... the 
murano-agent for installation, the swift-agent for object store management )
Although here, in the case of VM HB/HC Monitoring, via the QEMU Guest Agent, 
the messaging path is internal thru a QEMU virtual serial device.  i.e. a very 
simple interface with very few dependencies ... it’s up and available very 
early in VM lifecycle and virtually always up.
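
As a rough illustration of the mechanism (a sketch only, assuming
libvirt-python, a running qemu-guest-agent inside the guest, and an
org.qemu.guest_agent.0 channel defined for the domain), the lowest-level form
of this heartbeat is just a guest-ping over that channel:

import libvirt
import libvirt_qemu

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # hypothetical domain name

try:
    # 5 second timeout; failure here means the agent -- and so, roughly,
    # the in-guest health path -- is not responding.
    libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-ping"}', 5, 0)
    print('guest agent responded: heartbeat OK')
except libvirt.libvirtError as exc:
    print('heartbeat failed: %s' % exc)
finally:
    conn.close()

The actual monitor would of course layer periodic scheduling, health-check
payloads and the Masakari/Vitrage reporting on top of something like this.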

Wrt failure modes / use-cases
· a VM’s response to a Heartbeat Challenge Request can be as simple as 
just ACK-ing,
this alone allows for detection of:
o   a failed or hung QEMU/KVM instance, or
o   a failed or hung VM's OS, or
o   a failure of the VM's OS to schedule the QEMU Guest Agent daemon, or
o   a failure of the VM to route basic IO via linux sockets.
· I have had feedback that this is similar to the virtual hardware 
watchdog of QEMU/KVM (https://libvirt.org/formatdomain.html#elementsWatchdog )
· However, the VM Heartbeat / Health-check Monitoring
o   provides a higher-level (i.e. application-level) heartbeating
•  i.e. if the Heartbeat requests are being answered by the Application running 
within the VM
o   provides more than just heartbeating, as the Application can use it to 
trigger a variety of audits,
o   provides a mechanism for the Application within the VM to report a Health 
Status / Info back to the Host / Cloud,
o   provides notification of the Heartbeat / Health-check status to 
higher-level cloud entities thru Masakari, Mistral and/or Vitrage
•  e.g.   VM-Heartbeat-Monitor - to - Vitrage - (EventAlarm) - Aodh - ... - 
VNF-Manager

- (StateChange) - Nova - ... - VNF Manager

NOTE: perhaps the reporting to Vitrage would be a separate blueprint within 
Masakari.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Proposal to change the timing of Feature Freeze

2017-05-17 Thread Matt Riedemann

On 5/17/2017 11:20 AM, Dmitry Tantsur wrote:

On 05/17/2017 05:34 PM, Chris Jones wrote:

Hey folks

I have a fairly simple proposal to make - I'd like to suggest that
Feature Freeze move to being much earlier in the release cycle (no
earlier than M.1 and no later than M.2 would be my preference).


I welcome projects to experiment with it, but strong -1 to make it any
kinds of policy. Actually, we mostly got rid of the feature freeze in
Ironic because of how it did not work for us.



In the current arrangement (looking specifically at Pike), FF is
scheduled to happen 5 weeks before the upstream release. This means
that of a 26 week release cycle, 21 weeks are available for making
large changes, and only 5 weeks are available for stabilising the
release after the feature work has landed (possibly less if FF
exceptions are granted).

In my experience, significant issues are generally still being found
after the upstream release, by which point fixing them is much harder
- the patches need to land twice (master and stable/foo) and master
may already have diverged.

If the current model were inverted, and ~6 weeks of each release were
available for landing features, there would be ~20 weeks available for
upstream and downstream folk to do their testing/stabilising work. The
upstream release ought to have a higher quality, and downstream
releases would be more likely to be able to happen at the same time.


6 weeks of each release is not enough to land anything serious IMO. Most
of the big features I remember took months of review to land.

Also what this is going to end up with is folks working on features
during the freeze with -2 applied. And then in these 6 weeks they'll put
*enormous* pressure on the core team to merge their changes, knowing
that otherwise they'll have to wait 5 months more.

Even our regular feature freeze had this effect to some extent. People
don't plan that early, not all code patches are of equal quality
initially, etc. While you can blame them for it (and you'll probably be
right), this will still end up with the core team pressurized.

This will also make prioritization much more difficult, as a feature
that was not deemed a priority is unlikely to land at all. We already
have complaints from some contributors about uneven prioritization, your
proposal has chances of making it much worse.



Obviously not all developers would be working on the stabilisation
work for those ~20 weeks, many would move on to working on features
for the following release, which would then be ready to land in the
much shorter period.


From our experience, most of the non-core developers just go away during
the feature freeze, and come back when it's lifted. I don't remember any
increase in the number of bugs fixed during that time frame. Most of the
folks not deploying from master still start serious testing after the GA.

And this proposal may make it harder for people to justify full-time
work on a certain projects.



This might slow the feature velocity of projects, and maybe ~6 weeks
is too aggressive, but I feel that the balance right now is weighted
strongly against timely, stable releasing of OpenStack, particularly
for downstream consumers :)

Rather than getting hung up on the specific numbers of weeks, perhaps
it would be helpful to start with opinions on whether or not there is
enough stabilisation time in the current release schedules.


It may differ per project. We have enough time, we don't have enough
people testing and fixing bugs before the GA.

As an alternative, I'd prefer people to work with stable branches more
actively. And deploy a kind of CI/CD system that will allow them to test
code at any point in time, not only close to the release.



--
Cheers,

Chris


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Projects can impose their own freeze dates. For a long time Nova had a 
non-priority feature freeze after the 2nd milestone and then only worked 
on priority features during the last milestone and before the global 
feature freeze. The priorities were determined at the design summit.


Moving the feature freeze date up just puts a huge crunch on the core 
team to review everything, and it was coming shortly after the spec 
freeze (typically the first milestone for nova).


So people with non-priority changes were mad because they didn't get 
their stuff reviewed or merged before the non-priority FF, and people 
working on features felt that we didn't put enough focus on them 
throu

[openstack-dev] [Swift] 404 on geo-replicated clusters with write affinity

2017-05-17 Thread Bruno L
Hi everyone,

It was great to meet you at the Boston summit and agree on the best way to
fix the 404 error code bug on geo-replicated clusters with write affinity.

As you know, this is a topic that is dear to us at Catalyst, and we would
like to provide a patch to it. Since we are relatively new to the Swift
community, I'd like to ask for some guidance from the more experienced
developers, to ensure we are going in the right direction.

I see multiple bugs in launchpad that are related to this issue. Could we
agree which one we should use to progress with this?

Our discussion at the summit led us to three ideas on how this could be
fixed. We decided we would combine two of them, to make sure all exceptions
and corner cases are covered. Could one of you (maybe Clay or John) write a
brief summary of the changes proposed, so we can use it as a specification
for our code changes?

Cheers,
Bruno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] 404 on geo-replicated clusters with write affinity

2017-05-17 Thread Clay Gerrard
Hi Bruno,

On Wed, May 17, 2017 at 3:47 PM, Bruno L  wrote:

>
> I see multiple bugs in launchpad that are related to this issue.
>

AFAIK, only one bug for this issue is still open, and has the most recent
thoughts added from the Forum

https://bugs.launchpad.net/swift/+bug/1503161

write a brief summary of the changes proposed,
>

I think the skinny of what's in the bug report is "make more backend DELETE
requests to handoffs".

Personally I was coming around to the idea that an explicit configurable
(i.e. similar to "request_node_count" for GET) would be easy to reason
about and give us a lot of flexibility (it would pair well with the per-policy
configuration WIP https://review.openstack.org/#/c/448240/).  It's possible
this could be implicit using some heuristic over the sort order of
primaries in the ring - but I think it'd be whole 'nother thing, and could
be added later as an "auto" sort of value for the option (i.e. workers =
[N|auto], or "replicas + 2" sort of syntax).

Additionally, it's been pointed out various times that collecting
X-Backend-Timestamp from the responses would allow for further reasoning
over the collected responses in addition to just the status codes (similar
to WIP for GET https://review.openstack.org/#/c/371150/ ) - but I'm
starting to think that'd be an enhancement to the extra handoff DELETE
requests rather than an complete alternative solution.

I don't think anyone really likes the idea of blindly translating a
majority 404 response on DELETE to 204 and calling it a win - so
unfortunately the fix is non-trival.  Glad to see you're interested in
getting your teeth into this one - let me know if there's anything I can do
to help!

Good Luck,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Discussion on how to do claims

2017-05-17 Thread Edward Leafe
I missed the summit, so I also missed out on some of the decisions that were 
made. I don’t feel that some of them were ideal, and in talking to the people 
involved I’ve gotten various degrees of certainty about how settled things 
were. So I’ve not only pushed a series of patches as POC code [0] for my 
approach, but I’ve written a summary of my concerns [1].

Speaking with a few people today, since some people are not around, and I’m 
leaving for PyCon tomorrow, we felt it would be worthwhile to have everyone 
read this, and review the current proposed code by Sylvain [2], my code, and 
the blog post summary. Next week when we are all back in town we can set up a 
time to discuss, either in IRC or perhaps a hangout.

[0] https://review.openstack.org/#/c/464086/
[1] https://blog.leafe.com/claims-in-the-scheduler/
[2] https://review.openstack.org/#/c/460177/8 


-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread KiYoun Sung
Hello, Spyros,
Thank you for your reply.

I executed "kubectl create" command in my openstack controller node.
I downloaded the kubectl binary; its version is 2.5.1.

Below are my steps.
1) install openstack newton by fuel 10.0
2) install magnum by source (master branch) in controller node
3) install magnum-client by source in controller node
4) execute magnum cluster-template-create for kubernetes
5) execute magnum cluster-create with kubernetes-cluster-template
6) download kubectl and connect my kubernetes cluster
7) execute kubectl get nodes, get pods => is normal
8) finally, execute kubectl create -f nginx.yaml

But this last step (kubectl create) failed.

Best regards.

2017-05-17 20:58 GMT+09:00 Spyros Trigazis :

>
>
> On 17 May 2017 at 06:25, KiYoun Sung  wrote:
>
>> Hello,
>> Magnum team.
>>
>> I Installed Openstack newton and magnum.
>> I installed Magnum by source(master branch).
>>
>> I have two questions.
>>
>> 1.
>> After installation,
>> I created kubernetes cluster and it's CREATE_COMPLETE,
>> and I want to create kubernetes pod.
>>
>> My create script is below.
>> --
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: nginx
>>   labels:
>> app: nginx
>> spec:
>>   containers:
>>   - name: nginx
>> image: nginx
>> ports:
>> - containerPort: 80
>> --
>>
>> I tried "kubectl create -f nginx.yaml"
>> But, error has occured.
>>
>> Error message is below.
>> error validating "pod-nginx-with-label.yaml": error validating data:
>> unexpected type: object; if you choose to ignore these errors, turn
>> validation off with --validate=false
>>
>> Why did this error occur?
>>
>
> This is not related to magnum, it is related to your client. From where do
> you execute the
> kubectl create command? Your computer? Some VM with a distributed file
> system?
>
>
>>
>> 2.
>> I want to access this kubernetes cluster service(like nginx) above the
>> Openstack magnum environment from outside world.
>>
>> I refer to this guide(https://docs.openstack.o
>> rg/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works), but
>> it didn't work.
>>
>> Openstack: newton
>> Magnum: 4.1.1 (master branch)
>>
>> How can I do?
>> Do I must install Lbaasv2?
>>
>
> You need lbaas V2 with octavia preferably. Not sure what is the
> recommended way to install.
>
>
>>
>> Thank you.
>> Best regards.
>>
>
> Cheers,
> Spyros
>
>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Dan Prince
TripleO currently uses the default "loopback" docker storage device.
This is not recommended for production (see 'docker info').

We've been poking around with docker storage backends in TripleO for
almost 2 months now here:

 https://review.openstack.org/#/c/451916/

For TripleO there are a couple of considerations:

 - we intend to support in place upgrades from baremetal to containers

 - when doing in place upgrades re-partitioning disks is hard, if not
impossible. This makes using devicemapper hard.

 - we'd like to to use a docker storage backend that is production
ready.

 - our target OS is latest Centos/RHEL 7

As we approach pike 2 I'm keen to move towards a more production docker
storage backend. Is there consensus that 'overlay2' is a reasonable
approach to this? Or is it too early to use that with the combinations
above?

Looking around at what is recommended in other projects it seems to be
a mix as well from devicemapper to btrfs.

[1] https://docs.openshift.com/container-platform/3.3/install_config/in
stall/host_preparation.html#configuring-docker-storage
[2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
at.sh#n30
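
As a quick way to see what a given node is doing today, a small sketch like
this (not part of TripleO, it just shells out to 'docker info') flags the
loopback devicemapper setup we are trying to move away from:

import subprocess

info = subprocess.check_output(['docker', 'info'], universal_newlines=True)

driver = ''
loopback = False
for line in info.splitlines():
    line = line.strip()
    if line.startswith('Storage Driver:'):
        driver = line.split(':', 1)[1].strip()
    # devicemapper on loopback reports its data file as a loop device
    if line.startswith('Data file:') and '/dev/loop' in line:
        loopback = True

print('storage driver: %s' % driver)
if driver == 'devicemapper' and loopback:
    print('WARNING: loopback devicemapper is not recommended for production')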


Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Michał Jastrzębski
Be careful with overlay, I've seen it acting in ways you don't want
it to. That was some time ago, but memories persist. In my
experience the best option is btrfs. If you don't want to repartition the
disk, btrfs on loopback isn't horrible either. devicemapper on loopback is
horrible, but that's different.

On 17 May 2017 at 17:24, Dan Prince  wrote:
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
>
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
>
>  https://review.openstack.org/#/c/451916/
>
> For TripleO there are a couple of considerations:
>
>  - we intend to support in place upgrades from baremetal to containers
>
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
>
>  - we'd like to to use a docker storage backend that is production
> ready.
>
>  - our target OS is latest Centos/RHEL 7
>
> As we approach pike 2 I'm keen to move towards a more production docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the combinations
> above?
>
> Looking around at what is recommended in other projects it seems to be
> a mix as well from devicemapper to btrfs.
>
> [1] https://docs.openshift.com/container-platform/3.3/install_config/in
> stall/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
> at.sh#n30
>
>
> Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Fox, Kevin M
I've only used btrfs and devicemapper on el7. btrfs has worked well. 
devicemapper ate my data on multiple occasions. Is redhat supporting overlay 
in the el7 kernels now?

Thanks,
Kevin

From: Dan Prince [dpri...@redhat.com]
Sent: Wednesday, May 17, 2017 5:24 PM
To: openstack-dev
Subject: [openstack-dev] [TripleO][Kolla] default docker storage backend for
TripleO

TripleO currently uses the default "loopback" docker storage device.
This is not recommended for production (see 'docker info').

We've been poking around with docker storage backends in TripleO for
almost 2 months now here:

 https://review.openstack.org/#/c/451916/

For TripleO there are a couple of considerations:

 - we intend to support in place upgrades from baremetal to containers

 - when doing in place upgrades re-partitioning disks is hard, if not
impossible. This makes using devicemapper hard.

 - we'd like to to use a docker storage backend that is production
ready.

 - our target OS is latest Centos/RHEL 7

As we approach pike 2 I'm keen to move towards a more production docker
storage backend. Is there consensus that 'overlay2' is a reasonable
approach to this? Or is it too early to use that with the combinations
above?

Looking around at what is recommended in other projects it seems to be
a mix as well from devicemapper to btrfs.

[1] https://docs.openshift.com/container-platform/3.3/install_config/in
stall/host_preparation.html#configuring-docker-storage
[2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
at.sh#n30


Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-17 Thread Adrian Turjak


On 17/05/17 23:20, Sean Dague wrote:
> On 05/16/2017 07:34 PM, Adrian Turjak wrote:
> 
>> Anyway that aside, I'm sold on API keys as a concept in this case
>> provided they are project owned rather than user owned, I just don't
>> think we should make them too unique, and we shouldn't be giving them a
>> unique policy system because that way madness lies.
>>
>> Policy is already a complicated system, lets not have to maintain two
>> systems. Any policy system we make for API keys ought to be built on top
>> of the policy systems we end up with using roles. An explosion of roles
>> will happen with dynamic policy anyway, and yes sadly it will be a DSL
>> for some clouds, but no sensible cloud operator is going to allow a
>> separate policy system in for API keys unless they can control it. I
>> don't think we can solve the "all clouds have the same policy for API
>> keys" problem and I'd suggest putting that in the "too hard for now
>> box". Thus we do your step 1, and leave step 2 until later when we have
>> a better idea of how to do it without pissing off a lot of operators,
>> breaking standard policy, or maintaining an entirely separate policy system.
> This is definitely steps. And I agree we do step 1 to get us at least
> revokable keys. That's Pike (hopefully), and then figure out the path
> through step 2.
>
> The thing about the policy system today, is it's designed for operators.
> Honestly, the only way you really know what policy is really doing is if
> you read the source code of openstack as well. That is very very far
> from a declarative way of a user to further drop privileges. If we went
> straight forward from here we're increasing the audience for this by a
> factor of 1000+, with documentation, guarantees that policy points don't
> ever change. No one has been thinking about microversioning on a policy
> front, for instance. It now becomes part of a much stricter contract,
> with a much wider audience.
>
> I think the user experience of API use is going to be really bad if we
> have to teach the world about our policy names. They are non mnemonic
> for people familiar with the API. Even just in building up testing in
> the Nova tree over the years mistakes have been made because it wasn't
> super clear what routes the policies in question were modifying. Nova
> did a giant replacement of all the policy names 2 cycles ago because of
> that. It's better now, but still not what I'd want to thrust on people
> that don't have at least 20% of the Nova source tree in their head.
>
> We also need to realize there are going to be 2 levels of permissions
> here. There is going to be what the operator allows (which is policy +
> roles they have built up on there side), and then what the user allows
> in their API Key. I would imagine that an API Key created by a user
> inherits any roles that user has (the API Key is still owned by a
> project). The user at any time can change the allowed routes on the key.
> The admin at any time can change the role / policy structure. *both*
> have to be checked on operations, and only if both succeed do we move
> forward.
>
> I think another question where we're clearly in a different space, is if
> we think about granting an API Key user the ability to create a server.
> In a classical role/policy move, that would require not just (compute,
> "os_compute_api:servers:create"), but also (image, "get_image"), (image,
> "download_image"), (network, "get_port"), (network, "create_port"), and
> possibly much more. Missing one of these policies means a deep late
> fail, which is not debugable unless you have the source code in front of
> you. And not only requires knowledge of the OpenStack API, but deep
> knowledge of the particular deployment, because the permissions needed
> around networking might be different on different clouds.
>
> Clearly, that's not the right experience for someone that just wants to
> write a cloud native application that works on multiple clouds.
>
> So we definitely are already doing something a bit different, that is
> going to need to not be evaluated everywhere that policy is current
> evaluated, but only *on the initial inbound request*. The user would
> express this as (region1, compute, /servers, POST), which means that's
> the API call they want this API Key to be able to make. Subsequent
> requests wrapped in service tokens bypass checking API Key permissions.
> The role system is still in play, keeping the API Key in the box the
> operator wanted to put it in.
>
> Given that these systems are going to act differently, and at different
> times, I don't actually see it being a path to madness. I actually see
> it as less confusing to manage correctly in the code, because they two
> things won't get confused, and the wrong permissions checks get made. I
> totally agree that policy today is far too complicated, and I fear
> making it a new related, but very different task, way more than building
> a different declarative approach 

Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Jeffrey Zhang
btrfs and direct-lvm are recommended for a prod env.

overlay is bad.


On Thu, May 18, 2017 at 8:38 AM, Fox, Kevin M  wrote:

> I've only used btrfs and devicemapper on el7. btrfs has worked well.
> devicemapper ate may data on multiple occasions. Is redhat supporting
> overlay in the el7 kernels now?
>
> Thanks,
> Kevin
> 
> From: Dan Prince [dpri...@redhat.com]
> Sent: Wednesday, May 17, 2017 5:24 PM
> To: openstack-dev
> Subject: [openstack-dev] [TripleO][Kolla] default docker storage backend
> forTripleO
>
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
>
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
>
>  https://review.openstack.org/#/c/451916/
>
> For TripleO there are a couple of considerations:
>
>  - we intend to support in place upgrades from baremetal to containers
>
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
>
>  - we'd like to to use a docker storage backend that is production
> ready.
>
>  - our target OS is latest Centos/RHEL 7
>
> As we approach pike 2 I'm keen to move towards a more production docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the combinations
> above?
>
> Looking around at what is recommended in other projects it seems to be
> a mix as well from devicemapper to btrfs.
>
> [1] https://docs.openshift.com/container-platform/3.3/install_config/in
> stall/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
> at.sh#n30
>
>
> Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday May 18th at 9:00 UTC

2017-05-17 Thread Ghanshyam Mann
Hello everyone,

A reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, May 18th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_May_18th_2017_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Steve Baker
On Thu, May 18, 2017 at 12:38 PM, Fox, Kevin M  wrote:

> I've only used btrfs and devicemapper on el7. btrfs has worked well.
> devicemapper ate may data on multiple occasions. Is redhat supporting
> overlay in the el7 kernels now?
>

overlay2 is documented as a Technology Preview graph driver in the Atomic
Host 7.3.4 release notes:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/release_notes/




> _
> From: Dan Prince [dpri...@redhat.com]
> Sent: Wednesday, May 17, 2017 5:24 PM
> To: openstack-dev
> Subject: [openstack-dev] [TripleO][Kolla] default docker storage backend
> forTripleO
>
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
>
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
>
>  https://review.openstack.org/#/c/451916/
>
> For TripleO there are a couple of considerations:
>
>  - we intend to support in place upgrades from baremetal to containers
>
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
>
>  - we'd like to to use a docker storage backend that is production
> ready.
>
>  - our target OS is latest Centos/RHEL 7
>
> As we approach pike 2 I'm keen to move towards a more production docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the combinations
> above?
>
> Looking around at what is recommended in other projects it seems to be
> a mix as well from devicemapper to btrfs.
>
> [1] https://docs.openshift.com/container-platform/3.3/install_config/in
> stall/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
> at.sh#n30
>
>
I'd love to be able to use overlay2. I've CCed Daniel Walsh with the hope
we can get a general overview of the maturity of overlay2 on rhel/centos.

I tried using overlay2 recently to create an undercloud and hit an issue
doing a "cp -a *" on deleted files. This was with kernel-3.10.0-514.16.1
and docker-1.12.6.

I want to get to the bottom of it so I'll reproduce and raise a bug as
appropriate.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-17 Thread Andres Alvarez
Hello folks

The gnocchi-api command allows running the API server using a specific port:

usage: gnocchi-api [-h] [--port PORT] -- [passed options]

positional arguments:
  -- [passed options]   '--' is the separator of the arguments used to start
the WSGI server and the arguments passed to the WSGI
application.

optional arguments:
  -h, --helpshow this help message and exit
  --port PORT, -p PORT  TCP port to listen on (default: 8000)

I was wondering if it is also possible to bind to a specific interface (in
my case, I am working on a cloud dev environment, so I need 0.0.0.0)?

If not, would this be a welcomed change for a pull request?

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Steven Dake (stdake)
My experience with BTRFS has been flawless.  My experience with overlayfs is 
that occasionally (on older centos kernels) returned  as permissions 
(rather than drwxrwrw).  This most often happened after using the yum overlay 
driver.  I’ve found overlay to be pretty reliable as a “read-only” filesystem – 
eg just serving up container images, not persistent storage.

YMMV.  Overlayfs is the long-term filesystem of choice for the use case you 
outlined.  I’ve heard overlayfs has improved over the last year in terms of 
backport quality so maybe it is approaching ready.

Regards
-steve


From: Steve Baker 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, May 17, 2017 at 7:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
, "dwa...@redhat.com" 
Subject: Re: [openstack-dev] [TripleO][Kolla] default docker storage backend 
for TripleO



On Thu, May 18, 2017 at 12:38 PM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:
I've only used btrfs and devicemapper on el7. btrfs has worked well. 
devicemapper ate may data on multiple occasions. Is redhat supporting overlay 
in the el7 kernels now?

overlay2 is documented as a Technology Preview graph driver in the Atomic Host 
7.3.4 release notes:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/release_notes/



_
From: Dan Prince [dpri...@redhat.com]
Sent: Wednesday, May 17, 2017 5:24 PM
To: openstack-dev
Subject: [openstack-dev] [TripleO][Kolla] default docker storage backend for
TripleO

TripleO currently uses the default "loopback" docker storage device.
This is not recommended for production (see 'docker info').

We've been poking around with docker storage backends in TripleO for
almost 2 months now here:

 https://review.openstack.org/#/c/451916/

For TripleO there are a couple of considerations:

 - we intend to support in place upgrades from baremetal to containers

 - when doing in place upgrades re-partitioning disks is hard, if not
impossible. This makes using devicemapper hard.

 - we'd like to to use a docker storage backend that is production
ready.

 - our target OS is latest Centos/RHEL 7

As we approach pike 2 I'm keen to move towards a more production docker
storage backend. Is there consensus that 'overlay2' is a reasonable
approach to this? Or is it too early to use that with the combinations
above?

Looking around at what is recommended in other projects it seems to be
a mix as well from devicemapper to btrfs.

[1] https://docs.openshift.com/container-platform/3.3/install_config/in
stall/host_preparation.html#configuring-docker-storage
[2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
at.sh#n30

I'd love to be able to use overlay2. I've CCed Daniel Walsh with the hope we 
can get a general overview of the maturity of overlay2 on rhel/centos.

I tried using overlay2 recently to create an undercloud and hit an issue doing 
a "cp -a *" on deleted files. This was with kernel-3.10.0-514.16.1 and 
docker-1.12.6.

I want to get to the bottom of it so I'll reproduce and raise a bug as 
appropriate.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-17 Thread Mehdi Abaakouk

Hi,

On Thu, May 18, 2017 at 11:14:06AM +0800, Andres Alvarez wrote:

Hello folks

The gnocchi-api command allows running the API server usign a spefic port:

usage: gnocchi-api [-h] [--port PORT] -- [passed options]

positional arguments:
 -- [passed options]   '--' is the separator of the arguments used to start
   the WSGI server and the arguments passed to the WSGI
   application.

optional arguments:
 -h, --helpshow this help message and exit
 --port PORT, -p PORT  TCP port to listen on (default: 8000)

I was wondering if it's possible as well to use a specific interface? (In
my case, I am working on a cloud dev environment, so I need 0.0.0.0)?


gnocchi-api is for testing purposes; for production or any advanced HTTP
server usage, I would recommend running the WSGI application inside a
real HTTP server. You can find an example with uwsgi here:
http://gnocchi.xyz/running.htm

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-17 Thread Mehdi Abaakouk

Hi,

On Mon, May 15, 2017 at 01:01:57PM -0400, Zane Bitter wrote:

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples 
we have all use OS::Ceilometer::* resources for alarms. We have a 
global environment thingy that maps those to OS::Aodh::*, so at least 
in theory those templates should continue to work, but there are 
actually no examples that I can find of autoscaling templates doing 
things the way we want everyone to do them.


This is not only an Aodh/Ceilometer alarm issue. I can confirm that
whatever the resource prefix, this works well.

But an alarm description also contains a query to an external API to
retrieve statistics. Aodh alarms are currently able to
query the deprecated Ceilometer-API and the Gnocchi-API. Creating alarms
that query the deprecated Ceilometer-API is obviously deprecated too.

Unfortunately, I have seen that all templates still use the deprecated
Ceilometer-API. Since Ocata, this API don't even run by default.

I just propose an update for one template as example here:

 https://review.openstack.org/#/c/465817/

I can't really do the others, I don't have enough knowledge in
Mistral/Senlin/Openshift.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-17 Thread aalvarez
I do not need this functionality for production, but for testing. I think it
would be nice if we can specify the interface for the gnocchi-api even for
test purposes, just like the port.



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135008.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev