Re: [openstack-dev] [neutron][taas] Voting for new core reviewers

2016-03-01 Thread SUZUKI, Kazuhiro
+1 for both Soichi and Yamamoto.

Thanks,
KAZ

From: Fawad Khaliq 
Subject: Re: [openstack-dev] [neutron][taas] Voting for new core reviewers
Date: Tue, 1 Mar 2016 20:09:07 -0800

> +1 to both. Great to see the core team growing.
> 
> Fawad Khaliq
> 
> 
> On Tue, Mar 1, 2016 at 5:35 PM, Anil Rao  wrote:
> 
>> Both Takashi and Soichi have been involved with the project for a while
>> now and have made significant contributions in terms of code and reviews.
>>
>>
>>
>> +1 for both.
>>
>>
>>
>> Thanks,
>>
>> Anil
>>
>>
>>
>> *From:* reedip banerjee [mailto:reedi...@gmail.com]
>> *Sent:* Tuesday, March 01, 2016 4:21 PM
>> *To:* openstack-dev@lists.openstack.org
>> *Subject:* [openstack-dev] [neutron][taas] Voting for new core reviewers
>>
>>
>>
>>
>>
>> +1 to both Soichi and Yamamoto
>>
>> --
>>
>> Date: Tue, 1 Mar 2016 21:26:59 +0100
>> From: Vinay Yadhav 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: [openstack-dev]  [neutron][taas]
>> Message-ID:
>> <
>> ca+bcmm3avextk-vb4y0e8dphyb0pjxsutkxfv6_bsshqkf-...@mail.gmail.com>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hi All,
>>
>> It's time to induct new members into the TaaS core reviewer team. To kick
>> start the process, I nominate the following developers and active
>> contributors to the TaaS project:
>>
>> 1. Yamamoto Takashi
>> 2. Soichi Shigeta
>>
>>
>> Cheers,
>> Vinay Yadhav
>>
>>
>>
>> --
>>
>> Thanks and Regards,
>> Reedip Banerjee
>>
>> IRC: reedip
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

2016-03-01 Thread Vitaly Gridnev
Hi,

From my point of view, if we are adding a new type of data source (or
configuration for one), it should be supported in almost all plugins (at
least vanilla, spark, ambari, and cdh, I guess). The current
implementation is nice, but it touches only the vanilla 2.7.1 plugin,
which seems strange to me. Are there plans to add support for other
plugins? If yes, then I think this feature should be done in the Newton
cycle to have a complete picture of this support. If no, I think it's ok
to land this code in RC along with other improvements in validation.

In conclusion, I would say that the best choice is for us to collaborate
actively to implement this support early in the Newton-1 cycle.

Thanks.

On Wed, Mar 2, 2016 at 4:23 AM, Chen, Weiting 
wrote:

> Hi all,
>
>
>
> I would like to request a FFE for the feature “nfs-as-a-data-source”:
>
> BP: https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
>
> BP Review: https://review.openstack.org/#/c/210839/
>
> Sahara Code: https://review.openstack.org/#/c/218638/
>
> Sahara Image Elements Code: https://review.openstack.org/#/c/218637/
>
>
>
> Estimated completion time: The BP is complete and the implementation is
> complete as well. All the code is under review, and since no big changes
> or modifications to the code are expected, we estimate it will take only
> one week to be merged.
>
> The benefit of this change: NFS support in Sahara.
>
> The Risk: The risk would be low for this patch, since all the functions
> have been delivered.
>
>
>
> Thanks,
>
> Weiting(William) Chen
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
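[Editor's note] For readers following the blueprint above, registering such a data source would presumably amount to a small JSON request body against Sahara's data-sources API. The sketch below only builds that body; the "nfs" type name and the nfs:// URL scheme are assumptions taken from the blueprint under review, not a confirmed API, and the export path is purely illustrative.

```python
import json

def build_data_source_payload(name, url, source_type, description=""):
    """Build a JSON body for a Sahara data-source registration request.

    The "nfs" type and the nfs:// URL scheme are assumptions based on
    the blueprint under review; check the merged spec for final names.
    """
    return json.dumps({
        "name": name,
        "type": source_type,
        "url": url,
        "description": description,
    }, sort_keys=True)

# Hypothetical NFS export used purely for illustration.
payload = build_data_source_payload(
    name="training-data",
    url="nfs://nfs-server.example.com/exports/data",
    source_type="nfs",
)
print(payload)
```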


-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [FFE] Unlock Settings Tab

2016-03-01 Thread Vitaly Kramskikh
I don't think it's a best practice to introduce changes like
https://review.openstack.org/#/c/279714/ (adding yet another DSL to the
project) without a blueprint and without review and discussion of a spec.

2016-03-02 2:19 GMT+07:00 Alexey Shtokolov :

> Fuelers,
>
> I would like to request a feature freeze exception for the "Unlock
> settings tab" feature [0].
>
> This feature, combined with task-based deployment [1] and LCM-readiness
> for Fuel deployment tasks [2], unlocks basic LCM in Fuel. We conducted a
> thorough redesign of this feature and split it into several granular
> changes [3]-[6] that allow users to change settings on deployed, partially
> deployed, stopped, or errored clusters and then run redeployment using a
> particular graph (custom, or calculated based on expected changes stored
> in the DB) and with new parameters.
>
> We need 3 weeks after FF to finish this feature.
> The risk of not delivering it within those 3 weeks is low.
>
> Patches on review or in progress:
> 
> https://review.openstack.org/#/c/284139/
> https://review.openstack.org/#/c/279714/
> https://review.openstack.org/#/c/286754/
> https://review.openstack.org/#/c/286783/
>
> Specs:
> https://review.openstack.org/#/c/286713/
> https://review.openstack.org/#/c/284797/
> https://review.openstack.org/#/c/282695/
> https://review.openstack.org/#/c/284250/
>
>
> [0] https://blueprints.launchpad.net/fuel/+spec/unlock-settings-tab
> [1]
> https://blueprints.launchpad.net/fuel/+spec/enable-task-based-deployment
> [2]
> https://blueprints.launchpad.net/fuel/+spec/granular-task-lcm-readiness
> [3]
> https://blueprints.launchpad.net/fuel/+spec/computable-task-fields-yaql
> [4]
> https://blueprints.launchpad.net/fuel/+spec/store-deployment-tasks-history
> [5] https://blueprints.launchpad.net/fuel/+spec/custom-graph-execution
> [6]
> https://blueprints.launchpad.net/fuel/+spec/save-deployment-info-in-database
>
> --
> ---
> WBR, Alexey Shtokolov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Upgrade][FFE] Reassigning Nodes without Re-Installation

2016-03-01 Thread Ilya Kharin
I'd like to request a feature freeze exception for Reassigning Nodes
without Re-Installation [1].

This feature is very important to several upgrade strategies that re-deploy
control-plane nodes while re-using some already deployed nodes, such as
compute or storage nodes. These changes affect only the upgrade part of
Nailgun, which is mostly implemented in the cluster_upgrade extension, and
do not affect either provisioning or deployment.

I need one week to finish implementation and testing.

[1] https://review.openstack.org/#/c/280067/ (review in progress)

Best regards,
Ilya Kharin.
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-03-01 Thread John Griffith
On Tue, Mar 1, 2016 at 3:48 PM, Murray, Paul (HP Cloud) 
wrote:

>
> > -Original Message-
> > From: D'Angelo, Scott
> >
> > Matt, changing Nova to store the connector info at volume attach time
> does
> > help. Where the gap will remain is after Nova evacuation or live
> migration,
>
> This will happen with shelve as well I think. Volumes are not detached in
> shelve
> IIRC.
>
> > when that info will need to be updated in Cinder. We need to change the
> > Cinder API to have some mechanism to allow this.
> > We'd also like Cinder to store the appropriate info to allow a
> force-detach for
> > the cases where Nova cannot make the call to Cinder.
> > Ongoing work for this and related issues is tracked and discussed here:
> > https://etherpad.openstack.org/p/cinder-nova-api-changes
> >
> > Scott D'Angelo (scottda)
> > 
> > From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
> > Sent: Monday, February 29, 2016 7:48 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova][cinder] volumes stuck detaching
> > attaching and force detach
> >
> > On 2/22/2016 4:08 PM, Walter A. Boring IV wrote:
> > > On 02/22/2016 11:24 AM, John Garbutt wrote:
> > >> Hi,
> > >>
> > >> Just came up on IRC, when nova-compute gets killed half way through a
> > >> volume attach (i.e. no graceful shutdown), things get stuck in a bad
> > >> state, like volumes stuck in the attaching state.
> > >>
> > >> This looks like a new addition to this conversation:
> > >> http://lists.openstack.org/pipermail/openstack-dev/2015-
> > December/0826
> > >> 83.html
> > >>
> > >> And brings us back to this discussion:
> > >> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
> > >>
> > >> What if we move our attention towards automatically recovering from
> > >> the above issue? I am wondering if we can look at making our usual
> > >> recovery code deal with the above situation:
> > >>
> > https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24
> > >> c79f4bf615/nova/compute/manager.py#L934
> > >>
> > >>
> > >> Did we get the Cinder APIs in place that enable the force-detach? I
> > >> think we did and it was this one?
> > >> https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force
> > >> -detach-needs-cinderclient-api
> > >>
> > >>
> > >> I think diablo_rojo might be able to help dig for any bugs we have
> > >> related to this. I just wanted to get this idea out there before I
> > >> head out.
> > >>
> > >> Thanks,
> > >> John
> > >>
> > >>
> > __
> > ___
> > >> _
> > >>
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >> .
> > >>
> > > The problem is a little more complicated.
> > >
> > > In order for cinder backends to be able to do a force detach
> > > correctly, the Cinder driver needs to have the correct 'connector'
> > > dictionary passed in to terminate_connection.  That connector
> > > dictionary is the collection of initiator side information which is
> gleaned
> > here:
> > > https://github.com/openstack/os-brick/blob/master/os_brick/initiator/c
> > > onnector.py#L99-L144
> > >
> > >
> > > The plan was to save that connector information in the Cinder
> > > volume_attachment table.  When a force detach is called, Cinder has
> > > the existing connector saved if Nova doesn't have it.  The problem was
> > > live migration.  When you migrate to the destination n-cpu host, the
> > > connector that Cinder had is now out of date.  There is no API in
> > > Cinder today to allow updating an existing attachment.
> > >
> > > So, the plan at the Mitaka summit was to add this new API, but it
> > > required microversions to land, which we still don't have in Cinder's
> > > API today.
> > >
> > >
> > > Walt
> > >
> > >
> > __
> > 
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > Regarding storing off the initial connector information from the attach,
> does
> > this [1] help bridge the gap? That adds the connector dict to the
> > connection_info dict that is serialized and stored in the nova
> > block_device_mappings table, and then in that patch is used to pass it to
> > terminate_connection in the case that the host has changed.
> >
> > [1] https://review.openstack.org/#/c/266095/
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
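[Editor's note] The approach discussed in the thread (persist the initiator-side connector at attach time, and fall back to it when the instance host has changed after evacuate or live migration) can be sketched as follows. This is a minimal illustration, not Nova's actual code; the dict keys mirror what os-brick's get_connector_properties() typically returns, and the exact shape is an assumption.

```python
# Sketch: persist the connector at attach time, and at (force-)detach
# time hand Cinder the stored connector when the current host no longer
# matches the one that made the attachment.

def make_connector(host, initiator, ip, multipath=False):
    # Keys mirror os-brick's connector properties; treat as illustrative.
    return {
        "host": host,
        "initiator": initiator,  # e.g. an iSCSI IQN for this node
        "ip": ip,
        "multipath": multipath,
    }

def connector_for_detach(current_connector, stored_connection_info):
    """Pick the connector to hand to Cinder's terminate_connection().

    If the host recorded at attach time differs from the current host,
    the stored connector describes the node that actually holds the
    attachment, so use it instead of the live one.
    """
    stored = stored_connection_info.get("connector")
    if stored and stored.get("host") != current_connector.get("host"):
        return stored
    return current_connector

attach_time = make_connector("compute-1", "iqn.1993-08.org.fake:src", "10.0.0.11")
after_migration = make_connector("compute-2", "iqn.1993-08.org.fake:dst", "10.0.0.12")
chosen = connector_for_detach(after_migration, {"connector": attach_time})
print(chosen["host"])  # the stale attachment lives on compute-1
```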

Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Michał Jastrzębski
As I said on irc:) count me in sdake!

On 1 March 2016 at 22:11, Swapnil Kulkarni  wrote:
> On Tue, Mar 1, 2016 at 10:25 PM, Steven Dake (stdake)  
> wrote:
>> Core reviewers,
>>
>> Please review this document:
>> https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
>>
>> It describes how vulnerability management is handled at a high level for
>> Kolla.  When we are ready, I want the kolla delivery repos vulnerabilities
>> to be managed by the VMT team.  By doing this, we standardize with other
>> OpenStack processes for handling security vulnerabilities.
>>
>> The first step is to form a kolla-coresec team, and create a separate
>> kolla-coresec tracker.  I have already created the tracker for kolla-coresec
>> and the kolla-coresec team in launchpad:
>>
>> https://launchpad.net/~kolla-coresec
>>
>> https://launchpad.net/kolla-coresec
>>
>> I have a history of security expertise, and the PTL needs to be on the team
>> as an escalation point as described in the VMT tagging document above.  I
>> also need 2-3 more volunteers to join the team.  You can read the
>> requirements of the job duties in the vulnerability:managed tag.
>>
>> If you're interested in joining the VMT team, please respond on this thread.
>> If there are more than 4 individuals interested in joining this team, I will
>> form the team from the most active members based upon liberty + mitaka
>> commits, reviews, and PDE spent.
>>
>> Regards
>> -steve
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> I am interested in security. I would like to be a part of it.
>
> ~coolsvap
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Swapnil Kulkarni
On Tue, Mar 1, 2016 at 10:25 PM, Steven Dake (stdake)  wrote:
> Core reviewers,
>
> Please review this document:
> https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
>
> It describes how vulnerability management is handled at a high level for
> Kolla.  When we are ready, I want the kolla delivery repos vulnerabilities
> to be managed by the VMT team.  By doing this, we standardize with other
> OpenStack processes for handling security vulnerabilities.
>
> The first step is to form a kolla-coresec team, and create a separate
> kolla-coresec tracker.  I have already created the tracker for kolla-coresec
> and the kolla-coresec team in launchpad:
>
> https://launchpad.net/~kolla-coresec
>
> https://launchpad.net/kolla-coresec
>
> I have a history of security expertise, and the PTL needs to be on the team
> as an escalation point as described in the VMT tagging document above.  I
> also need 2-3 more volunteers to join the team.  You can read the
> requirements of the job duties in the vulnerability:managed tag.
>
> If you're interested in joining the VMT team, please respond on this thread.
> If there are more than 4 individuals interested in joining this team, I will
> form the team from the most active members based upon liberty + mitaka
> commits, reviews, and PDE spent.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I am interested in security. I would like to be a part of it.

~coolsvap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] Voting for new core reviewers

2016-03-01 Thread Fawad Khaliq
+1 to both. Great to see the core team growing.

Fawad Khaliq


On Tue, Mar 1, 2016 at 5:35 PM, Anil Rao  wrote:

> Both Takashi and Soichi have been involved with the project for a while
> now and have made significant contributions in terms of code and reviews.
>
>
>
> +1 for both.
>
>
>
> Thanks,
>
> Anil
>
>
>
> *From:* reedip banerjee [mailto:reedi...@gmail.com]
> *Sent:* Tuesday, March 01, 2016 4:21 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [neutron][taas] Voting for new core reviewers
>
>
>
>
>
> +1 to both Soichi and Yamamoto
>
> --
>
> Date: Tue, 1 Mar 2016 21:26:59 +0100
> From: Vinay Yadhav 
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: [openstack-dev]  [neutron][taas]
> Message-ID:
> <
> ca+bcmm3avextk-vb4y0e8dphyb0pjxsutkxfv6_bsshqkf-...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi All,
>
> It's time to induct new members into the TaaS core reviewer team. To kick
> start the process, I nominate the following developers and active
> contributors to the TaaS project:
>
> 1. Yamamoto Takashi
> 2. Soichi Shigeta
>
>
> Cheers,
> Vinay Yadhav
>
>
>
> --
>
> Thanks and Regards,
> Reedip Banerjee
>
> IRC: reedip
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova] Many same "region_name" configuration really meaingful for Multi-region customers?

2016-03-01 Thread Kai Qiang Wu

Hi All,


Right now, we find that nova.conf has many places for region_name
configuration. See below:

nova.conf

***
[cinder]
os_region_name = ***

[neutron]
region_name= ***



***


From observation of some multi-region environments, those two options are
always configured with the same value.
Question 1: Does nova support configuring different regions in nova.conf?
For example:

[cinder]

os_region_name = RegionOne

[neutron]
region_name= RegionTwo


From the Keystone point of view, I suspect services in those regions can reach each other.


Question 2: If all of these need to be configured with the same value, why
not use a single region_name in nova.conf (instead of creating many
region_name options in the same file)?

Is it just for code maintenance, or is there some other consideration?



Could nova and keystone community members help with this question?
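[Editor's note] One way to read Question 2 is a single default region with optional per-service overrides. The sketch below illustrates that lookup order using the stdlib configparser; it is an illustration only (nova actually uses oslo.config), and the option names are simply taken from the message above.

```python
import configparser

# A hypothetical nova.conf fragment: one default region, with neutron
# overriding it. Values mirror the question above, not a real deployment.
CONF_TEXT = """
[DEFAULT]
region_name = RegionOne

[cinder]
# no region set here: falls back to DEFAULT/region_name

[neutron]
region_name = RegionTwo
"""

def region_for(conf, section, option="region_name"):
    """Return the section's region, falling back to the single default."""
    if conf.has_option(section, option):
        return conf.get(section, option)
    return conf.get("DEFAULT", "region_name")

conf = configparser.ConfigParser()
conf.read_string(CONF_TEXT)
print(region_for(conf, "cinder", "os_region_name"))  # RegionOne (default)
print(region_for(conf, "neutron"))                   # RegionTwo (override)
```

This keeps the common multi-region case (all services in one region) to a single option, while still allowing the split-region case asked about in Question 1.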


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-03-01 Thread Shuu Mutou
Hi Hongbin, Yuanying and team,

Thank you for your recommendation.
I'm keeping the EN-to-JP translation of Magnum-UI at 100% every day.
I'll do my best if I become a liaison.

Since translation has become another point of review for Magnum-UI, I hope
that members will translate Magnum-UI into their native languages.

Best regards,
Shu Muto

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][log] Ideas to log request-ids in cross-projects

2016-03-01 Thread Kekane, Abhishek
Hi,

Added openstack-operators in cc so that they can share their views as well.

Abhishek


From: Bogdan Dobrelya [bdobre...@mirantis.com]
Sent: Tuesday, March 01, 2016 3:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][log] Ideas to log request-ids in 
cross-projects

On 01.03.2016 07:17, Kekane, Abhishek wrote:
> Hi Devs,
>
> Now that the return-request-id-to-caller spec [1] is implemented in the
> python-*clients, I would like to begin a discussion on how these
> request-ids will be logged across projects. In the logging work-group
> meeting (11-Nov-2015) [2] there was a discussion about how to log the
> request-id in log messages. In the same meeting it was decided to write an
> oslo.log spec, but as of now no spec has been submitted.
>
> I would like to share our approach to logging request-ids and seek
> suggestions on it. We are planning to use the request_utils module [3],
> which was earlier part of oslo-incubator but was removed as no one was
> using it.
>
> A typical use case is: Tempest asking Nova to perform some action and Nova
> calling Glance internally, then the linkages might look like this:
>
> RequestID mapping in nova for nova and glance:
> -
>
> INFO nova.utils [req-f0fb885b-18a2-4510-9e85-b9066b410ee4 admin admin]
> Request ID Link: request.link 'req-f0fb885b-18a2-4510-9e85-b9066b410ee4'
> -> Target='glance' TargetId=req-a1ac739c-c816-4f82-ad82-9a9b1a603f43
>
> RequestID mapping in tempest for tempest and nova:
> -
>
> INFO tempest.tests [req-a0df655b-18a2-4510-9e85-b9435dh8ye4 admin admin]
> Request ID Link: request.link 'req-a0df655b-18a2-4510-9e85-b9435dh8ye4'
> -> Target='nova' TargetId=req-f0fb885b-18a2-4510-9e85-b9066b410ee4
>
> As there is a reference of nova's request-id in tempest and glance's
> request-id in nova, operator can easily trace the cause of failure.
>
> Using the request_utils module we can also use the 'stage' parameter to
> divide the entire API cycle into stages; e.g., create-server can be staged
> as start, get-image can be staged as download-image, and active-instance
> can be staged as the end of the operation.
>
> Advantages:
> ---
>
> With stages provided for an API, it's easy for the operator to find the
> failure stage within the entire API cycle.
>
> An example with 'stage' is,
> Tempest asking Nova to perform some action and Nova calling Glance
> internally,
> then the linkages might look like this:
>
> INFO tempest.tests [req-a0df655b-18a2-4510-9e85-b9435dh8ye4 admin admin]
> Request ID Link: request.link.start
> 'req-a0df655b-18a2-4510-9e85-b9435dh8ye4'
>
> INFO nova.utils [req-f0fb885b-18a2-4510-9e85-b9066b410ee4 admin admin]
> Request ID Link: request.link.image_download
> 'req-f0fb885b-18a2-4510-9e85-b9066b410ee4' -> Target='glance'
> TargetId=req-a1ac739c-c816-4f82-ad82-9a9b1a603f43
>
> INFO tempest.tests [req-b0df857fb-18a2-4510-9e85-b9435dh8ye4 admin
> admin] Request ID Link: request.link.end
> 'req-b0df857fb-18a2-4510-9e85-b9435dh8ye4'
>
> Concern:
> 
>
> As the request_utils module was removed from oslo-incubator, and
> oslo-incubator itself is being deprecated, I see the following options for
> adding it back to OpenStack.
>
> Option 1: Add request_utils module in oslo.log (as it is related to logging
> request_ids)
> Option 2: Add request_utils module in oslo.utils
> Option 3: Add link_request_ids method in utils.py of individual projects.
> (this will cause code duplication)
>
> Please let me know your thoughts about the same.

I believe the first option should work well. By the way, are there any
plans to track requests down to the root wrappers' shell commands? There
is also an interesting paper directly related to the topic [0]; see "4.
Logging and coordination". It would be nice to reach out to those people
and ask for code snippets or cooperation as well...

[0] https://kabru.eecs.umich.edu/papers/publications/2013/socc2013_ju.pdf

>
> [1]
> http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html
> [2]
> http://eavesdrop.openstack.org/meetings/log_wg/2015/log_wg.2015-11-11-20.02.log.html
> [3]
> http://docs.openstack.org/developer/oslo-incubator/api/openstack.common.request_utils.html
>
> Thank You,
>
> Abhishek Kekane
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
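[Editor's note] The "Request ID Link" lines quoted above can be produced by a small helper along the lines of the retired oslo-incubator request_utils module. The sketch below reproduces the log format shown in the thread; the function signature is illustrative, not the module's actual API.

```python
import logging

LOG = logging.getLogger("demo.request_utils")

def link_request_ids(log, source_id, target_name=None, target_id=None,
                     stage=None):
    """Log a linkage between a local request-id and a downstream one.

    Emits lines shaped like the examples in the thread, e.g.
    "Request ID Link: request.link 'req-...' -> Target='glance'
    TargetId=req-...". Patterned on the removed oslo-incubator
    request_utils module; argument names here are illustrative.
    """
    event = "request.link" + (".%s" % stage if stage else "")
    msg = "Request ID Link: %s '%s'" % (event, source_id)
    if target_name:
        msg += " -> Target='%s' TargetId=%s" % (target_name, target_id)
    log.info(msg)
    return msg

# Nova linking its own request-id to the one Glance returned, tagged
# with the download-image stage (ids copied from the thread's example).
line = link_request_ids(
    LOG,
    "req-f0fb885b-18a2-4510-9e85-b9066b410ee4",
    target_name="glance",
    target_id="req-a1ac739c-c816-4f82-ad82-9a9b1a603f43",
    stage="image_download",
)
print(line)
```

An operator grepping logs for either request-id then finds the link line and can follow the chain across service boundaries.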

Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-03-01 Thread Hongbin Lu
+1. Shu Muto contributed a lot to magnum-ui. Highly recommended.

Best regards,
Hongbin

From: 大塚元央 [mailto:yuany...@oeilvert.org]
Sent: March-01-16 9:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

Hi team,

Shu Muto is interested in becoming the liaison for magnum-ui.
He has put great effort into translating English to Japanese in magnum-ui
and horizon.
I recommend him as liaison.

Thanks
-yuanying
On Mon, Feb 29, 2016 at 23:56, Hongbin Lu wrote:
Hi team,

FYI, the I18n team needs a liaison from magnum-ui. Please contact the i18n
team if you are interested in this role.

Best regards,
Hongbin

From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][i18n] Liaisons for I18n

Hello,

Mitaka translation will start soon, from this week.
In Mitaka translation, IBM full time translators will join the
translation team and work with community translators.
With their help, I18n team is able to cover more projects.
So I need liaisons from dev projects who can help the I18n team work
smoothly with the development teams during the release cycle.

I especially need liaisons in the projects below, which are in the Mitaka
translation plan:
nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.

I also need liaisons from Horizon plugin projects, which are ready on the
translation website:
trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
monasca-ui, murano-dashboard and senlin-dashboard.
I need liaisons to tell us whether their projects are ready for
translation.

As to other projects, liaisons are welcomed too.

Here are the descriptions of I18n liaisons:
- The liaison should be a core reviewer for the project and understand the i18n 
status of this project.
- The liaison should understand the project release schedule very well.
- The liaison should notify the I18n team of important moments in the
project release in time.
For example, soft string freeze, hard string freeze, and RC1 cutting.
- The liaison should take care of translation patches to the project, and
make sure the patches are successfully merged to the final release
version. When a translation patch fails, the liaison should notify the
I18n team.

If you are interested to be a liaison and help translators,
input your information here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

Thank you for your support.
Best regards
Ying Chun Guo (Daisy)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Fuel 8.0 is released

2016-03-01 Thread Dmitry Borodaenko
We are proud to announce the release of Fuel 8.0, deployment and
management tool for OpenStack.

This release introduces support for OpenStack Liberty, adds a number of
exciting new features and enhancements, fixes over 1600 bugs, and
eliminates a great deal of technical debt.

Some highlights:

- Support for multi-rack deployments with L3 routing between racks that
  was first introduced in Fuel 6.0 was expanded with more automation and
  validation; some key limitations of the previous implementation, such
  as placing all VIPs, floating IPs, and controllers in a single rack,
  have been relaxed (although controller services failover across racks
  still needs extra work); node groups can now be managed via Fuel UI.

- Fuel master node now runs on CentOS 7 with Python 2.7.

- The bootstrap image used for node discovery and provisioning is now
  generated when the Fuel node is set up, and can be dynamically rebuilt to
  include additional drivers. This unifies the kernel version from
  discovery to a working install, removing a whole host of possible
  compatibility issues.

- As another small step towards enabling life cycle management, a
  limited set of cloud configuration parameters can now be changed after
  deployment. This includes changing configuration of OpenStack services
  and installation of additional software via plugins.

Learn more about Fuel:
https://wiki.openstack.org/wiki/Fuel

How we work:
https://wiki.openstack.org/wiki/Fuel/How_to_contribute

Specs for features in 8.0 and other Fuel releases:
http://specs.openstack.org/openstack/fuel-specs/

ISO image:
http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent

RPM packages:
http://mirror.fuel-infra.org/mos-repos/centos/mos8.0-centos7-fuel/

Great work Fuel team, thanks to everyone who contributed to this awesome
release!

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-03-01 Thread 大塚元央
Hi team,

Shu Muto is interested in becoming the liaison for magnum-ui.
He has put great effort into translating English to Japanese in magnum-ui
and horizon.
I recommend him as liaison.

Thanks
-yuanying

On Mon, Feb 29, 2016 at 23:56, Hongbin Lu wrote:

> Hi team,
>
>
>
> FYI, the I18n team needs a liaison from magnum-ui. Please contact the i18n
> team if you are interested in this role.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
> *Sent:* February-29-16 3:48 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [all][i18n] Liaisons for I18n
>
>
>
> Hello,
>
> Mitaka translation will start soon, from this week.
> In Mitaka translation, IBM full time translators will join the
> translation team and work with community translators.
> With their help, I18n team is able to cover more projects.
> So I need liaisons from dev projects who can help the I18n team work
> smoothly with the development teams during the release cycle.
>
> I especially need liaisons in the projects below, which are in the Mitaka
> translation plan:
> nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.
>
>
>
> I also need liaisons from Horizon plugin projects, which are ready in
> translation website:
>
> trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
>
> monasca-ui, murano-dashboard and senlin-dashboard.
>
> I need liaisons to tell us whether their projects are ready for translation.
>
>
>
> As to other projects, liaisons are welcomed too.
>
> Here are the descriptions of I18n liaisons:
> - The liaison should be a core reviewer for the project and understand the
> i18n status of this project.
> - The liaison should understand project release schedule very well.
> - The liaison should notify the I18n team of important moments in the
> project release in a timely manner, for example, the soft string freeze,
> the hard string freeze, and the RC1 cut.
> - The liaison should take care of translation patches to the project, and
> make sure the patches are successfully merged into the final release
> version. When a translation patch fails, the liaison should notify the
> I18n team.
>
> If you are interested in being a liaison and helping translators,
> add your information here:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .
>
>
>
> Thank you for your support.
>
> Best regards
> Ying Chun Guo (Daisy)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tricircle]weekly meeting of Mar. 2

2016-03-01 Thread Vega Cai
What I would like to discuss, the reliable async. job, has already been included :)

BR
Zhiyuan

On 1 March 2016 at 17:19, joehuang  wrote:

> Hi,
>
>
>
> If you have any topic to discuss, please reply in the M-L.
>
>
>
> IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
> on every Wednesday starting from UTC 13:00.
>
>
>
> Agenda:
>
> # Progress of To-do list review:
> https://etherpad.openstack.org/p/TricircleToDo
>
> # Reliable async. job
>
> # L2 networking across pods
>
> # quota management
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [neutron][taas] Voting for new core reviewers

2016-03-01 Thread Anil Rao
Both Takashi and Soichi have been involved with the project for a while now and 
have made significant contributions in terms of code and reviews.

+1 for both.

Thanks,
Anil

From: reedip banerjee [mailto:reedi...@gmail.com]
Sent: Tuesday, March 01, 2016 4:21 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][taas] Voting for new core reviewers


+1 to both Soichi and Yamamoto
--
Date: Tue, 1 Mar 2016 21:26:59 +0100
From: Vinay Yadhav
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev]  [neutron][taas]
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi All,

It's time to induct new members into the TaaS core reviewer team. To kick
start the process, I nominate the following developers and active
contributors to the TaaS project:

1. Yamamoto Takashi
2. Soichi Shigeta


Cheers,
Vinay Yadhav

--
Thanks and Regards,
Reedip Banerjee
IRC: reedip





[openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

2016-03-01 Thread Chen, Weiting
Hi all,

I would like to request an FFE for the feature "nfs-as-a-data-source":
BP: https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
BP Review: https://review.openstack.org/#/c/210839/
Sahara Code: https://review.openstack.org/#/c/218638/
Sahara Image Elements Code: https://review.openstack.org/#/c/218637/

Estimated completion time: The BP is complete and the implementation is
complete as well. All the code is under review, and since no big changes or
modifications to the code are expected, we estimate it will take only one
week to be merged.
The benefit of this change: NFS support in Sahara.
The risk: Low, since all the functions have already been delivered.

Thanks,
Weiting(William) Chen



Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-01 Thread Rick Jones

On 03/01/2016 04:29 PM, Preston L. Bannister wrote:


Running "dd" in the physical host against the Cinder-allocated volumes
nets ~1.2GB/s (roughly in line with expectations for the striped flash
volume).

Running "dd" in an instance against the same volume (now attached to the
instance) got ~300MB/s, which was pathetic. (I was expecting 80-90% of
the raw host volume numbers, or better.) Upping read-ahead in the
instance via "hdparm" boosted throughput to ~450MB/s. Much better, but
still sad.

In the second measure the volume data passes through iSCSI and then the
QEMU hypervisor. I expected to lose some performance, but not more than
half!

Note that as this is an all-in-one OpenStack node, iSCSI is strictly
local and not crossing a network. (I did not want network latency or
throughput to be a concern with this first measure.)


Well, not crossing a physical network :)  You will be however likely 
crossing the loopback network on the node.


What sort of per-CPU utilizations do you see when running the test to 
the instance?  Also, out of curiosity, what block size are you using in 
dd?  I wonder how well that "maps" to what iSCSI will be doing.
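For reference, one way to capture that per-CPU picture while the test runs — a sketch only: it assumes mpstat from the sysstat package is available, with a /proc/stat fallback so it runs anywhere on Linux.

```shell
# Per-CPU utilization snapshot to take while the dd test is running; a
# single local iSCSI stream can saturate one CPU (target daemon or QEMU
# I/O thread) while the rest sit idle, capping throughput well below the
# raw device rate.
if command -v mpstat >/dev/null 2>&1; then
    stats=$(mpstat -P ALL 1 3)        # 3 one-second samples, every CPU
else
    # Fallback: raw per-CPU jiffy counters from the kernel.
    stats=$(grep '^cpu' /proc/stat 2>/dev/null \
            || echo 'cpu stats unavailable on this system')
fi
printf '%s\n' "$stats"
```

If one CPU shows ~100% while the others idle, the bottleneck is a single-threaded component rather than the storage itself.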


rick jones
http://www.netperf.org/



Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Martin André
On Wed, Mar 2, 2016 at 1:55 AM, Steven Dake (stdake) 
wrote:

> Core reviewers,
>
> Please review this document:
>
> https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
>
> It describes how vulnerability management is handled at a high level for
> Kolla.  When we are ready, I want the kolla delivery repos vulnerabilities
> to be managed by the VMT team.  By doing this, we standardize with other
> OpenStack processes for handling security vulnerabilities.
>
> The first step is to form a kolla-coresec team, and create a separate
> kolla-coresec tracker.  I have already created the tracker for
> kolla-coresec and the kolla-coresec team in launchpad:
>
> https://launchpad.net/~kolla-coresec
>
> https://launchpad.net/kolla-coresec
>
> I have a history of security expertise, and the PTL needs to be on the
> team as an escalation point as described in the VMT tagging document
> above.  I also need 2-3 more volunteers to join the team.  You can read the
> requirements of the job duties in the vulnerability:managed tag.
>
> If you're interested in joining the VMT team, please respond on this
> thread.  If there are more than 4 individuals interested in joining this
> team, I will form the team from the most active members based upon liberty
> + mitaka commits, reviews, and PDE spent.
>

How many more cores do you need? If you don't have enough volunteers you
can sign me up for it.

Martin


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread SULLIVAN, BRYAN L
(maybe further off-topic)

Re use of NUCs for OpenStack, I have been doing this as a cheap lab environment 
for a few months, using the OPNFV Brahmaputra release (OS Liberty and ODL 
Lithium/Beryllium). It works great, and has really helped me come on board with 
OpenStack (as compared to DevStack) without having to build an expensive 
server/switch environment.

The setup is described at https://wiki.opnfv.org/copper/academy.

Thanks,
Bryan Sullivan | AT&T

From: Silence Dogood [mailto:m...@nycresistor.com]
Sent: Tuesday, March 01, 2016 2:33 PM
To: Tim Bell 
Cc: OpenStack Development Mailing List (not for usage questions) 
; OpenStack Operators 

Subject: Re: [Openstack-operators] [openstack-dev] OpenStack Contributor Awards

I believe Eric Windisch did at one point run OpenStack on a pi.

The problem is that it's got so little ram, and no hypervisor.  Also at least 
it USED to not be able to run docker since docker wasn't crosscompiled to arm 
at the time.

It's a terrible target for openstack.  NUCs on the other hand...

=/

-Matt

On Tue, Mar 1, 2016 at 3:04 PM, Tim Bell wrote:

Just to check, does OpenStack run on a Raspberry Pi ? Could cause some negative 
comments if it was
not compatible/sized for a basic configuration.

Tim





On 01/03/16 20:41, "Thomas Goirand" wrote:

>On 03/01/2016 11:30 PM, Tom Fifield wrote:
>> Excellent, excellent.
>>
>> What's the best place to buy Raspberry Pis these days?
>
>One of the 2 official sites:
>https://www.element14.com/community/community/raspberry-pi
>
>The Pi 3 is the super nice shiny new stuff, with 64 arm bits.
>
>Cheers,
>
>Thomas Goirand (zigo)
>
>
>Hopefully, with it, there will be no need of raspbian anymore (it was
>there because of a very poor choice of CPU in model 1 and 2, just below
>what the armhf builds required, forcing to use armel which is arm v4
>instruction sets).
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



[openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-01 Thread Preston L. Bannister
I have need to benchmark volume-read performance of an application running
in an instance, assuming extremely fast storage.

To simulate fast storage, I have an AIO install of OpenStack, with local
flash disks. Cinder LVM volumes are striped across three flash drives (what
I have in the present setup).

Since I am only interested in sequential-read performance, the "dd" utility
is sufficient as a measure.
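For concreteness, a minimal sketch of that kind of measurement. The device path is an assumption (DEV must be pointed at the attached Cinder volume, often /dev/vdb in a guest, or at the striped LVM volume on the host); everything else is stock dd.

```shell
# Sequential-read throughput check along the lines described above.
# DEV is an assumption about the deployment; the /dev/zero default is
# only so the sketch runs anywhere without a real volume attached.
DEV=${DEV:-/dev/zero}

# 1 MiB blocks, 1 GiB total; dd reports the achieved rate on stderr.
summary=$(dd if="$DEV" of=/dev/null bs=1M count=1024 2>&1 | tail -n 1)
echo "$summary"

# Read-ahead tuning mentioned in the post (root, real block device only):
#   blockdev --getra "$DEV"        # current read-ahead, 512-byte sectors
#   blockdev --setra 8192 "$DEV"   # equivalent of the hdparm boost
```

Dropping the page cache between runs (`echo 3 > /proc/sys/vm/drop_caches`, as root) keeps repeated reads from being served out of RAM.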

Running "dd" in the physical host against the Cinder-allocated volumes nets
~1.2GB/s (roughly in line with expectations for the striped flash volume).

Running "dd" in an instance against the same volume (now attached to the
instance) got ~300MB/s, which was pathetic. (I was expecting 80-90% of the
raw host volume numbers, or better.) Upping read-ahead in the instance via
"hdparm" boosted throughput to ~450MB/s. Much better, but still sad.

In the second measure the volume data passes through iSCSI and then the
QEMU hypervisor. I expected to lose some performance, but not more than
half!

Note that as this is an all-in-one OpenStack node, iSCSI is strictly local
and not crossing a network. (I did not want network latency or throughput
to be a concern with this first measure.)

I do not see any prior mention of performance of this sort on the web or in
the mailing list. Possible I missed something.

What sort of numbers are you seeing out of high performance storage?

Is the huge drop in read-rate within an instance something others have seen?

Is the default iSCSI configuration used by Nova and Cinder optimal?


[openstack-dev] [Fuel] FFE request for osnailyfacter refactoring for Puppet Master compatibility

2016-03-01 Thread Scott Brimhall
Greetings,

As you might know, we are working on integrating a 3rd party
configuration management platform (Puppet Master) with Fuel.
This integration will provide the capability for state enforcement
and will further enable "day 2" operations of a Fuel-deployed site.
We must refactor the 'osnailyfacter' module in Fuel Library to be
compatible with both a masterful and masterless Puppet approach.

This change is required to enable a Puppet Master based LCM
solution.

We request a FFE for this feature for 3 weeks, until Mar 24.  By that
time, we will provide a tested solution in accordance with the following
specifications [1].

The feature includes the following components:

1. Refactor 'osnailyfacter' Fuel Library module to be compatible with
Puppet Master by becoming a valid and compliant Puppet module.
This involves moving manifests into the proper manifests directory
and moving the contents into classes that can be included by Puppet
Master.
2. Update deployment tasks to update their manifest path to the new
location.

Merging this code is relatively non-intrusive to core Fuel Library code
as it is merely re-organizing the file structure of the osnailyfacter
module to be compatible with Puppet Master.  Upon updating the
deployment tasks to reflect the new location of manifests, this feature
remains compatible with the masterless puppet apply approach that
Fuel uses while providing the ability to integrate a Puppet Master
based LCM solution.

Overall, I consider this change low risk for the integrity and timeline of
the release, and it is a critical feature for the ability to integrate an LCM
solution using Puppet Master.

Please consider our request and share concerns so we can properly
resolve them.

[1] 
https://blueprints.launchpad.net/fuel/+spec/fuel-refactor-osnailyfacter-for-puppet-master-compatibility

---
Best Regards,

Scott Brimhall
Systems Architect
Mirantis Inc


[openstack-dev] [neutron][taas] Voting for new core reviewers

2016-03-01 Thread reedip banerjee
+1 to both Soichi and Yamamoto
--
Date: Tue, 1 Mar 2016 21:26:59 +0100
From: Vinay Yadhav 
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev]  [neutron][taas]
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi All,

It's time to induct new members into the TaaS core reviewer team. To kick
start the process, I nominate the following developers and active
contributors to the TaaS project:

1. Yamamoto Takashi
2. Soichi Shigeta


Cheers,
Vinay Yadhav

-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Adam Lawson
After reading through this, it seems to me like Anita is frustrated with
the motivation among those who submit one patch, because of her perception
that they are doing so only to take advantage of a free ticket and, by
extension, crowding rooms in a way that affects her ability to hear/engage
effectively. I understand the desire to engage more (for lack of a better
term), but reaching beyond someone's contribution and openly suggesting they
are not contributing for the right reasons isn't anyone's concern, to be
perfectly frank.

Aside from that, I don't think anyone is taking advantage of anything if
they earn a free ticket to an OpenStack Summit by fulfilling the needful
requirements. If they submit a patch, OpenStack becomes better and the
contributor is entitled to a free ticket to the next Summit. I fail to see
a single downside in that.

Tough love: "gaming" within the context of those who submit one patch,
making remarks about a perceived dishonorable motivation of our peers; that
level of discourse among peers is simply damaging.

//adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Tue, Mar 1, 2016 at 2:08 PM, Chris Friesen 
wrote:

> On 03/01/2016 03:52 PM, Clint Byrum wrote:
>
>> Excerpts from Eoghan Glynn's message of 2016-03-01 02:08:00 -0800:
>>
>
> There are a whole slew of folks who work fulltime on OpenStack but
>>> contribute mainly in the background: operating clouds, managing
>>> engineering teams, supporting customers, designing product roadmaps,
>>> training new users etc. TBH we should be flattered that the design
>>> summit sessions are interesting and engaging enough to also attract
>>> some of that sort of audience, as well as the core contributors of
>>> code. If those interested folks happen to also have the gumption to
>>> earn an ATC pass by meeting the threshold for contributor activity,
>>> then good for them! As long as no-one is actively derailing the
>>> discussion, I don't see much of an issue with the current mix of
>>> attendees.
>>>
>>>
>> I think you're right on all of these points. However, what you might
>> not have considered is that _the sheer number of people in the room_
>> can derail the productivity of a group of developers arguing complicated
>> points. It's not that we want to be operating in the shadows; it is that
>> we want to be operating in a safe, creative environment. A room with 5
>> friends, 5 acquaintances, and 100 strangers, is not that. But if there
>> are only, say, 15 strangers, one can take the time to get to know those
>> people, to understand their needs, and make far more progress and be
>> far more inclusive in discussions.
>>
>> What we want is for people to be attending and participating _on
>> purpose_.
>>
>
> I think it's pretty unlikely that people would attend the developer summit
> sessions by accident.  :)
>
> It kind of sounds to me like you want to limit the number of 'tourists'
> that aren't actively involved in the issues being discussed, but are just
> there to observe.  Or am I misinterpreting?
>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-01 Thread Fawad Khaliq
Thanks for bringing this up.

On Tue, Mar 1, 2016 at 2:52 PM, Kevin Benton  wrote:

> Hi,
>
> I know this has come up in the past, but some folks in the infra channel
> brought up the topic of changing the default security groups to allow all
> traffic.
>
> They had a few reasons for this that I will try to summarize here:
> * Ports 'just work' out of the box so there is no troubleshooting to
> eventually find out that ingress is blocked by default.
> * Instances without ingress are useless so a bunch of API calls are
> required to make them useful.
> * Some cloud providers allow all traffic by default (e.g. Digital Ocean,
> RAX).
> * It violates the end-to-end principle of the Internet to have a
> middle-box meddling with traffic (the compute node in this case).
> * Neutron cannot be trusted to do what it says it's doing with the
> security groups API so users want to orchestrate firewalls directly on
> their instances.
>
>
> So this ultimately brings up two big questions. First, can we agree on a
> set of defaults that is different than the one we have now; and, if so, how
> could we possibly manage upgrades where this will completely change the
> default filtering for users using the API?
>
> Second, would it be acceptable to make this operator configurable? This
> would mean users could receive different default filtering as they moved
> between clouds.
>

+1, there is definitely value in having an option to change defaults.


> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [Fuel] FFE Tracking

2016-03-01 Thread Andrew Woodward
As FF quickly approaches (March 2) we have started posting feature freeze
exception requests.

If you would like to request an exception for your feature, please request
one in accordance with the guidelines [0], and include your estimate for
completion and risk at that time of completion. Also please add your
feature to the fuel meeting agenda [1] and we will review all of the
outstanding FFE requests at this Thursday's IRC meeting.

[0] https://wiki.openstack.org/wiki/FeatureFreeze
[1] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


[openstack-dev] [Fuel][FFE] Remove conflicting openstack module parts

2016-03-01 Thread Andrew Woodward
I'd like to request a feature freeze exception for the Remove conflicting
openstack module parts feature [0]

This is necessary to make the Decouple Fuel and OpenStack tasks feature [1]
useful. Some of the patches are ready for review and some still need to be
written [2].

We need 2-3 weeks after FF to finish this feature. The risk of not delivering
it within 3 weeks is low.

[0]
https://blueprints.launchpad.net/fuel/+spec/fuel-remove-conflict-openstack
[1] https://blueprints.launchpad.net/fuel/+spec/fuel-openstack-tasks
[2]
https://review.openstack.org/#/q/topic:bp/fuel-remove-conflict-openstack,n,z
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


[openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-01 Thread Kevin Benton
Hi,

I know this has come up in the past, but some folks in the infra channel
brought up the topic of changing the default security groups to allow all
traffic.

They had a few reasons for this that I will try to summarize here:
* Ports 'just work' out of the box so there is no troubleshooting to
eventually find out that ingress is blocked by default.
* Instances without ingress are useless so a bunch of API calls are
required to make them useful.
* Some cloud providers allow all traffic by default (e.g. Digital Ocean,
RAX).
* It violates the end-to-end principle of the Internet to have a middle-box
meddling with traffic (the compute node in this case).
* Neutron cannot be trusted to do what it says it's doing with the security
groups API so users want to orchestrate firewalls directly on their
instances.
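For concreteness, the "bunch of API calls" amounts to something like the following per-tenant commands. This is a dry run only; the "default" group name and the Mitaka-era neutron CLI syntax are assumptions about the deployment.

```shell
# Dry run of the extra per-tenant API calls needed today to make a fresh
# instance reachable: open ICMP and SSH ingress on the tenant's "default"
# security group. Clear RUN to execute against a real cloud.
RUN=echo
cmds=$(
    $RUN neutron security-group-rule-create --direction ingress \
        --protocol icmp default
    $RUN neutron security-group-rule-create --direction ingress \
        --protocol tcp --port-range-min 22 --port-range-max 22 default
)
printf '%s\n' "$cmds"
```

Every tenant repeats some variant of this before their first instance is usable, which is the friction the proposal wants to remove.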


So this ultimately brings up two big questions. First, can we agree on a
set of defaults that is different than the one we have now; and, if so, how
could we possibly manage upgrades where this will completely change the
default filtering for users using the API?

Second, would it be acceptable to make this operator configurable? This
would mean users could receive different default filtering as they moved
between clouds.


Cheers,
Kevin Benton


Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-03-01 Thread Murray, Paul (HP Cloud)

> -Original Message-
> From: D'Angelo, Scott
> 
> Matt, changing Nova to store the connector info at volume attach time does
> help. Where the gap will remain is after Nova evacuation or live migration,

This will happen with shelve as well I think. Volumes are not detached in shelve
IIRC.

> when that info will need to be updated in Cinder. We need to change the
> Cinder API to have some mechanism to allow this.
> We'd also like Cinder to store the appropriate info to allow a force-detach 
> for
> the cases where Nova cannot make the call to Cinder.
> Ongoing work for this and related issues is tracked and discussed here:
> https://etherpad.openstack.org/p/cinder-nova-api-changes
> 
> Scott D'Angelo (scottda)
> 
> From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
> Sent: Monday, February 29, 2016 7:48 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][cinder] volumes stuck detaching
> attaching and force detach
> 
> On 2/22/2016 4:08 PM, Walter A. Boring IV wrote:
> > On 02/22/2016 11:24 AM, John Garbutt wrote:
> >> Hi,
> >>
> >> Just came up on IRC, when nova-compute gets killed half way through a
> >> volume attach (i.e. no graceful shutdown), things get stuck in a bad
> >> state, like volumes stuck in the attaching state.
> >>
> >> This looks like a new addition to this conversation:
> >> http://lists.openstack.org/pipermail/openstack-dev/2015-
> December/0826
> >> 83.html
> >>
> >> And brings us back to this discussion:
> >> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
> >>
> >> What if we move our attention towards automatically recovering from
> >> the above issue? I am wondering if we can look at making our usually
> >> recovery code deal with the above situation:
> >>
> https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24
> >> c79f4bf615/nova/compute/manager.py#L934
> >>
> >>
> >> Did we get the Cinder APIs in place that enable the force-detach? I
> >> think we did and it was this one?
> >> https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force
> >> -detach-needs-cinderclient-api
> >>
> >>
> >> I think diablo_rojo might be able to help dig for any bugs we have
> >> related to this. I just wanted to get this idea out there before I
> >> head out.
> >>
> >> Thanks,
> >> John
> >>
> >>
> __
> ___
> >> _
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> .
> >>
> > The problem is a little more complicated.
> >
> > In order for cinder backends to be able to do a force detach
> > correctly, the Cinder driver needs to have the correct 'connector'
> > dictionary passed in to terminate_connection.  That connector
> > dictionary is the collection of initiator side information which is gleaned
> here:
> > https://github.com/openstack/os-brick/blob/master/os_brick/initiator/c
> > onnector.py#L99-L144
> >
> >
> > The plan was to save that connector information in the Cinder
> > volume_attachment table.  When a force detach is called, Cinder has
> > the existing connector saved if Nova doesn't have it.  The problem was
> > live migration.  When you migrate to the destination n-cpu host, the
> > connector that Cinder had is now out of date.  There is no API in
> > Cinder today to allow updating an existing attachment.
> >
> > So, the plan at the Mitaka summit was to add this new API, but it
> > required microversions to land, which we still don't have in Cinder's
> > API today.
> >
> >
> > Walt
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Regarding storing off the initial connector information from the attach, does
> this [1] help bridge the gap? That adds the connector dict to the
> connection_info dict that is serialized and stored in the nova
> block_device_mappings table, and then in that patch is used to pass it to
> terminate_connection in the case that the host has changed.
> 
> [1] https://review.openstack.org/#/c/266095/
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 

[openstack-dev] [tc] Leadership training proposal/info

2016-03-01 Thread Colette Alexander
Hello Stackers,

This is the continuation of an ongoing conversation within the TC about
encouraging the growth of leadership skills within the community that began
just after the Mitaka summit last year[1]. After being asked by lifeless to
do a bit of research and discussing needs/wants re: leadership directly
with TC members, I made some suggestions on an etherpad[2], and was then
asked to go find out about funding possibilities.

tl;dr - If you're a member of the TC or the Board and would like to attend
Leadership Training at ZingTrain in Ann Arbor, please get back to me ASAP
with your contact info and preferred timing/dates for this two-day training
- also, please let me know whether April 20/21 (or April 21/22) would
specifically work for you or not.


Longer version:
Mark Collier and the Foundation have graciously offered to cover the costs
of training for a two-day session at ZingTrain in Ann Arbor  - this
includes the cost of breakfast/lunch for two days as well as two full
working-days of seminars. Attendees would be responsible for their own
travel, lodging, and incidental expenses beyond that (hopefully picked up
by your employer who sees this as an amazing opportunity for your career
growth). Currently, I've heard the week before the Austin Summit suggested
by more than one person coming in from out of the country as preferred
dates, but we've not committed to anything yet, so here might be a great
time and place to hash that out among interested parties. ZingTrain has
suggested a cap of ~20 people on the course, but that's not totally firm,
so it's possible to add more if more are interested, or we could hold two
separate two-day sessions to accommodate overflow. My ideal mix of people
includes those who are really excited by the idea of training, and those who
are seriously skeptical of any leadership training at all. In fact, if
you've been to leadership training before and have found it to be terrible
and awful, I think your input would be most valuable on this one. My
summary of reasoning behind the 'why' of ZingTrain can be found on the
etherpad I already mentioned[2]. Also, did I mention, the food will be
amazing? It will be[3].

Some complications: the week before the Newton Summit there will be a set
of incoming TC members (elected in early April) and likely some TC members
who will be outgoing. Some possible solutions: we can certainly push back
training until after the Summit, when we can have a set of the 'new' TC, or
we can sign up anyone interested currently and allow a limited number of
newly elected folks who are interested to sign on once the election is
finished. I
certainly welcome any thoughts on that.

A note about starting out with the TC/Board for training: this initiative
began as a set of conversations about leadership as a whole in the entire
OpenStack community, so the intent with limiting to TC/Board here is not
exclusion, merely finding the right place to start. My proposal with the TC
begins with them, because the leadership conversation within OpenStack
began with them, and the goals of training are really to help them talk
about defining the issue/problem collectively, within a space designed to
help people do that.

If you have any questions at all, please feel free to ping me on IRC
(gothicmindfood) or ask them here.

Thanks everyone!

-colette


[1]
http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-11-03-20.07.log.html
[2] https://etherpad.openstack.org/p/Leadershiptraining
[3] http://www.zingermansdeli.com/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread Silence Dogood
I believe Eric Windisch did at one point run OpenStack on a pi.

The problem is that it's got so little RAM, and no hypervisor.  Also, at
least it USED to be unable to run Docker, since Docker wasn't
cross-compiled to ARM at the time.

It's a terrible target for OpenStack.  NUCs on the other hand...

=/

-Matt

On Tue, Mar 1, 2016 at 3:04 PM, Tim Bell  wrote:

>
> Just to check, does OpenStack run on a Raspberry Pi ? Could cause some
> negative comments if it was
> not compatible/sized for a basic configuration.
>
> Tim
>
>
>
>
>
> On 01/03/16 20:41, "Thomas Goirand"  wrote:
>
> >On 03/01/2016 11:30 PM, Tom Fifield wrote:
> >> Excellent, excellent.
> >>
> >> What's the best place to buy Raspberry Pis these days?
> >
> >One of the 2 official sites:
> >https://www.element14.com/community/community/raspberry-pi
> >
> >The Pi 3 is the super nice shiny new stuff, with 64 arm bits.
> >
> >Cheers,
> >
> >Thomas Goirand (zigo)
> >
> >
> >Hopefully, with it, there will be no need for Raspbian anymore (it existed
> >because of a very poor choice of CPU in models 1 and 2, just below what
> >the armhf builds required, forcing use of armel, which targets the ARMv4
> >instruction set).
> >
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] Decouple Fuel and OpenStack tasks

2016-03-01 Thread Andrew Woodward
I'd like to request a feature freeze exception for Decouple Fuel and
OpenStack tasks feature [0].

While the code change [1] is ready and usually passes CI, we currently have
too much churn in the tasks, which constantly puts the patch set in conflict,
so it has to be rebased multiple times a day.

We need more review and feedback on the change, and a quiet period to merge
it.

[0] https://blueprints.launchpad.net/fuel/+spec/fuel-openstack-tasks
[1] https://review.openstack.org/#/c/283332/
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Chris Friesen

On 03/01/2016 03:52 PM, Clint Byrum wrote:

Excerpts from Eoghan Glynn's message of 2016-03-01 02:08:00 -0800:



There are a whole slew of folks who work fulltime on OpenStack but
contribute mainly in the background: operating clouds, managing
engineering teams, supporting customers, designing product roadmaps,
training new users etc. TBH we should be flattered that the design
summit sessions are interesting and engaging enough to also attract
some of that sort of audience, as well as the core contributors of
code. If those interested folks happen to also have the gumption to
earn an ATC pass by meeting the threshold for contributor activity,
then good for them! As long as no-one is actively derailing the
discussion, I don't see much of an issue with the current mix of
attendees.



I think you're right on all of these points. However, what you might
not have considered is that _the sheer number of people in the room_
can derail the productivity of a group of developers arguing complicated
points. It's not that we want to be operating in the shadows; it is that
we want to be operating in a safe, creative environment. A room with 5
friends, 5 acquaintances, and 100 strangers, is not that. But if there
are only, say, 15 strangers, one can take the time to get to know those
people, to understand their needs, and make far more progress and be
far more inclusive in discussions.

What we want is for people to be attending and participating _on
purpose_.


I think it's pretty unlikely that people would attend the developer summit 
sessions by accident.  :)


It kind of sounds to me like you want to limit the number of 'tourists' that 
aren't actively involved in the issues being discussed, but are just there to 
observe.  Or am I misinterpreting?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Anita Kuno
On 03/01/2016 04:30 PM, Chris Friesen wrote:
> On 03/01/2016 09:03 AM, Anita Kuno wrote:
>> On 03/01/2016 05:08 AM, Eoghan Glynn wrote:
> 
>> In Vancouver I happened to be sitting behind someone who stated "I'm
>> just here for the buzz." Which is lovely for that person. The
>> problem is
>> that the buzz that person is there for is partially created by me
>> and I
>> create it and mean to offer it to people who will return it in
>> kind, not
>> just soak it up and keep it to themselves.
>>
> I don't know if drive-by attendance at design summit sessions by
> under-
> qualified or uninformed summiteers is encouraged by the
> availability of
> ATC passes. But as long as those individuals aren't actively derailing
> the conversation in sessions, I wouldn't consider their buzz
> soakage as
> a major issue TBH.
>
 Folks who want to help (even if they don't know how yet) carry an
 energy
 of intention with them which is nourishing to be around. Folks who are
 trying to get in the door and not be expected to help and hope noone
 notices carry an entirely different kind of energy with them. It is a
 non-nourishing energy.
>>>
>>> Personally I don't buy into that notion of the wrong sort of people
>>> sneaking in the door of summit, keeping their heads down and hoping
>>> no-one notices.
> 
> 
> 
>>> TBH we should be flattered that the design
>>> summit sessions are interesting and engaging enough to also attract
>>> some of that sort of audience, as well as the core contributors of
>>> code. If those interested folks happen to also have the gumption to
>>> earn an ATC pass by meeting the threshold for contributor activity,
>>> then good for them! As long as no-one is actively derailing the
>>> discussion, I don't see much of an issue with the current mix of
>>> attendees.
>>
>> Yeah, I don't feel you have understood what my point is, and that is
>> fine. We did put forward an attempt to communicate and it failed. We
>> will have other opportunities on other issues in the future.
> 
> I don't think it's so much a failure to communicate, but rather simply a
> failure to arrive at a consensus.  As I see it, Eoghan understands your
> point but does not feel the same way.

No, anyone who believes that my position includes dishonouring users
isn't understanding my point.

Thanks,
Anita.

> 
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Clint Byrum
Excerpts from Eoghan Glynn's message of 2016-03-01 02:08:00 -0800:
> 
> > > Current thinking would be to give preferential rates to access the 
> > > main
> > > summit to people who are present to other events (like this new
> > > separated contributors-oriented event, or Ops midcycle(s)). That would
> > > allow for a wider definition of "active community member" and reduce
> > > gaming.
> > >
> > 
> >  I think reducing gaming is important. It is valuable to include those
> >  folks who wish to make a contribution to OpenStack, I have confidence
> >  the next iteration of entry structure will try to more accurately
> >  identify those folks who bring value to OpenStack.
> > >>>
> > >>> There have been a couple references to "gaming" on this thread, which
> > >>> seem to imply a certain degree of dishonesty, in the sense of bending
> > >>> the rules.
> > >>>
> > >>> Can anyone who has used the phrase clarify:
> > >>>
> > >>>  (a) what exactly they mean by gaming in this context
> > >>>
> > >>> and:
> > >>>
> > >>>  (b) why they think this is a clear & present problem demanding a
> > >>>  solution?
> > >>>
> > >>> For the record, landing a small number of patches per cycle and thus
> > >>> earning an ATC summit pass as a result is not, IMO at least, gaming.
> > >>>
> > >>> Instead, it's called *contributing*.
> > >>>
> > >>> (on a small scale, but contributing none-the-less).
> > >>>
> > >>> Cheers,
> > >>> Eoghan
> > >>
> > >> Sure I can tell you what I mean.
> > >>
> > >> In Vancouver I happened to be sitting behind someone who stated "I'm
> > >> just here for the buzz." Which is lovely for that person. The problem is
> > >> that the buzz that person is there for is partially created by me and I
> > >> create it and mean to offer it to people who will return it in kind, not
> > >> just soak it up and keep it to themselves.
> > >>
> > >> Now I have no way of knowing who this person is and how they arrived at
> > >> the event. But the numbers for people offering one patch to OpenStack
> > >> (the bar for a summit pass) is significantly higher than the curve of
> > >> people offering two, three or four patches to OpenStack (patches that
> > >> are accepted and merged). So some folks are doing the minimum to get a
> > >> summit pass rather than being part of the cohort that has their first
> > >> patch to OpenStack as a means of offering their second patch to 
> > >> OpenStack.
> > >>
> > >> I consider it an honour and a privilege that I get to work with so many
> > >> wonderful people everyday who are dedicated to making open source clouds
> > >> available for whoever would wish to have clouds. I'm more than a little
> > >> tired of having my energy drained by folks who enjoy feeding off of it
> > >> while making no effort to return beneficial energy in kind.
> > >>
> > >> So when I use the phrase gaming, this is the dynamic to which I refer.
> > > 
> > > Thanks for the response.
> > > 
> > > I don't know if drive-by attendance at design summit sessions by under-
> > > qualified or uninformed summiteers is encouraged by the availability of
> > > ATC passes. But as long as those individuals aren't actively derailing
> > > the conversation in sessions, I wouldn't consider their buzz soakage as
> > > a major issue TBH.
> > > 
> > > In any case, I would say that just meeting the bar for an ATC summit pass
> > > (by landing the required number of patches) is not bending the rules or
> > > misrepresenting in any way.
> > > 
> > > Even if specifically motivated by the ATC pass (as opposed to scratching
> > > a very specific itch) it's still simply an honest and rational response
> > > to an incentive offered by the foundation.
> > > 
> > > One could argue whether the incentive is mis-designed, but that doesn't
> > > IMO make a gamer of any contributor who simply meets the required 
> > > threshold
> > > of activity.
> > > 
> > > Cheers,
> > > Eoghan
> > > 
> > 
> > No I'm not saying that. I'm saying that the larger issue is one of
> > motivation.
> > 
> > Folks who want to help (even if they don't know how yet) carry an energy
> > of intention with them which is nourishing to be around. Folks who are
> > trying to get in the door and not be expected to help and hope noone
> > notices carry an entirely different kind of energy with them. It is a
> > non-nourishing energy.
> 
> Personally I don't buy into that notion of the wrong sort of people
> sneaking in the door of summit, keeping their heads down and hoping
> no-one notices.
> 
> We have an open community that conducts its business in public. Not
> wanting folks with the wrong sort of energy to be around when that
> business is being done, runs counter to our open ethos IMO.
> 
> There are a whole slew of folks who work fulltime on OpenStack but
> contribute mainly in the background: operating clouds, managing
> engineering teams, supporting customers, designing product roadmaps,
> training new users etc. TBH 

Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Chris Friesen

On 03/01/2016 09:03 AM, Anita Kuno wrote:

On 03/01/2016 05:08 AM, Eoghan Glynn wrote:



In Vancouver I happened to be sitting behind someone who stated "I'm
just here for the buzz." Which is lovely for that person. The problem is
that the buzz that person is there for is partially created by me and I
create it and mean to offer it to people who will return it in kind, not
just soak it up and keep it to themselves.


I don't know if drive-by attendance at design summit sessions by under-
qualified or uninformed summiteers is encouraged by the availability of
ATC passes. But as long as those individuals aren't actively derailing
the conversation in sessions, I wouldn't consider their buzz soakage as
a major issue TBH.


Folks who want to help (even if they don't know how yet) carry an energy
of intention with them which is nourishing to be around. Folks who are
trying to get in the door and not be expected to help and hope noone
notices carry an entirely different kind of energy with them. It is a
non-nourishing energy.


Personally I don't buy into that notion of the wrong sort of people
sneaking in the door of summit, keeping their heads down and hoping
no-one notices.





TBH we should be flattered that the design
summit sessions are interesting and engaging enough to also attract
some of that sort of audience, as well as the core contributors of
code. If those interested folks happen to also have the gumption to
earn an ATC pass by meeting the threshold for contributor activity,
then good for them! As long as no-one is actively derailing the
discussion, I don't see much of an issue with the current mix of
attendees.


Yeah, I don't feel you have understood what my point is, and that is
fine. We did put forward an attempt to communicate and it failed. We
will have other opportunities on other issues in the future.


I don't think it's so much a failure to communicate, but rather simply a failure 
to arrive at a consensus.  As I see it, Eoghan understands your point but does 
not feel the same way.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] FF exception request for non-root accounts on slave nodes

2016-03-01 Thread Dmitry Nikishov
Hello,

I'd like to request a FF exception for "Run Fuel slave nodes as non-root"
feature[1].

Current status:
the larger part of the feature is already merged[2], and some more
patches[3][4][5][6] are expected to land before the FF.

When these patches are in master, Fuel 9.0 will be able to create
non-root accounts on target nodes; however, root SSH will still be enabled.
To change that, we'll actually need to:
- Fix fuel-qa to be able to use non-root accounts [7].
- Fix ceph deployment by either merging [8] or waiting for community ceph
module support.
- Disable root SSH [9].
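
For context, the final step above (disabling root SSH while keeping a
non-root login) typically comes down to an sshd_config change along these
lines. This is only a sketch: the account name is hypothetical, and Fuel's
actual change is whatever lands in the linked review [9]:

```
# /etc/ssh/sshd_config (fragment) -- illustrative only
PermitRootLogin no          # refuse direct root logins over SSH
AllowUsers fueladmin        # hypothetical non-root account created by Fuel
```

After such a change, sshd must be reloaded for it to take effect, and
deployments need the non-root account (with sudo rights) in place first, or
operators would be locked out.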

For that, we need 2.5 weeks after the FF to finish the feature. The risk of
not delivering the feature after 2.5 weeks is low.

Thanks.

[1] https://blueprints.launchpad.net/fuel/+spec/fuel-nonroot-openstack-nodes
[2]
https://review.openstack.org/#/q/status:merged+topic:bp-fuel-nonsuperuser
[3] https://review.openstack.org/258200
[4] https://review.openstack.org/284682
[5] https://review.openstack.org/285299
[6] https://review.openstack.org/258671
[7] https://review.openstack.org/281776
[8] https://review.openstack.org/278953
[9] https://review.openstack.org/278954
-- 
Dmitry Nikishov,
Deployment Engineer,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Steven Dake (stdake)
Adam,

Thank you for your offer, but I believe the VMT kolla-coresec team must be 
formed from core reviewers, or I'd ask Dave McCowan to consider an invitation.

The text that I think this comes from is:
Deliverables with more than five core reviewers should (so as to limit the 
unnecessary exposure of private reports) settle on a subset of these to act as 
security core reviewers whose responsibility it is to be able to confirm 
whether a bug report is accurate/applicable or at least know other subject 
matter experts they can in turn subscribe to perform those activities in a 
timely manner

It is pretty easy to become a core reviewer in Kolla over time, but it requires 
consistently good review of the code going into the repository, consistent IRC 
participation, as well as implementation work.

If you're interested, please join us on IRC and begin the process :)

Regards
-steve


From: Adam Heczko
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, March 1, 2016 at 1:57 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [kolla][security] Obtaining the 
vulnerability:managed tag

Hi Steven,
I'd like to help you with vulnerability management process of Kolla and become 
a member of Kolla VMT team.
I have experience and expertise in IT security and its related processes.

Best regards,

Adam

On Tue, Mar 1, 2016 at 5:55 PM, Steven Dake (stdake) wrote:
Core reviewers,

Please review this document:
https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst

It describes how vulnerability management is handled at a high level for Kolla. 
 When we are ready, I want the kolla delivery repos vulnerabilities to be 
managed by the VMT team.  By doing this, we standardize with other OpenStack 
processes for handling security vulnerabilities.

The first step is to form a kolla-coresec team, and create a separate 
kolla-coresec tracker.  I have already created the tracker for kolla-coresec 
and the kolla-coresec team in launchpad:

https://launchpad.net/~kolla-coresec

https://launchpad.net/kolla-coresec

I have a history of security expertise, and the PTL needs to be on the team as 
an escalation point as described in the VMT tagging document above.  I also 
need 2-3 more volunteers to join the team.  You can read the requirements of 
the job duties in the vulnerability:managed tag.

If you're interested in joining the VMT team, please respond on this thread.  If 
there are more than 4 individuals interested in joining this team, I will form 
the team from the most active members based upon liberty + mitaka commits, 
reviews, and PDE spent.

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Adam Heczko
Hi Steven,
I'd like to help you with vulnerability management process of Kolla and
become a member of Kolla VMT team.
I have experience and expertise in IT security and its related processes.

Best regards,

Adam

On Tue, Mar 1, 2016 at 5:55 PM, Steven Dake (stdake) 
wrote:

> Core reviewers,
>
> Please review this document:
>
> https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
>
> It describes how vulnerability management is handled at a high level for
> Kolla.  When we are ready, I want the kolla delivery repos vulnerabilities
> to be managed by the VMT team.  By doing this, we standardize with other
> OpenStack processes for handling security vulnerabilities.
>
> The first step is to form a kolla-coresec team, and create a separate
> kolla-coresec tracker.  I have already created the tracker for
> kolla-coresec and the kolla-coresec team in launchpad:
>
> https://launchpad.net/~kolla-coresec
>
> https://launchpad.net/kolla-coresec
>
> I have a history of security expertise, and the PTL needs to be on the
> team as an escalation point as described in the VMT tagging document
> above.  I also need 2-3 more volunteers to join the team.  You can read the
> requirements of the job duties in the vulnerability:managed tag.
>
> If you're interested in joining the VMT team, please respond on this
> thread.  If there are more than 4 individuals interested in joining this
> team, I will form the team from the most active members based upon liberty
> + mitaka commits, reviews, and PDE spent.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-01-16 9:54 AM
To: Guz Egor
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

This issue involves what I refer to as "OS religion." Operators have this WRT 
bay nodes, but users don't. I suppose this is a key reason why OpenStack does 
not have any concept of supported OS images today. While I can see the value in 
offering various choices in Magnum, maintaining a reference implementation of 
an OS image has shown that it requires non-trivial resources, and
It is definitely non-trivial to create the first reference implementation, 
since we create it from scratch. However, I don't think it is hard to maintain 
an additional reference implementation. From a technical point of view, most of 
the work in the first implementation can be reused and consolidated in some 
ways. Perhaps what I have failed to see are the claimed difficulties of 
maintaining an additional OS. To discuss this further, I would suggest working 
on an etherpad to list the overheads and benefits so that we can make a 
reasonable tradeoff. Thoughts?

expanding that to several will certainly require more. The question really 
comes down to the importance of this particular choice as a development team 
focus. Is it more important than a compelling network or storage integration 
with OpenStack services? I doubt it.

We all agree there should be a way to use an alternate OS image with Magnum. 
That has been our intent from the start. We are not discussing removing that 
option. However, rather than having multiple OS images the Magnum team 
maintains, maybe we could clearly articulate how to plug in to Magnum, and set 
up a third party CI, and allow various OS vendors to participate to make their 
options work with those requirements. If this approach works, then it may even 
reduce the need for a reference implementation at all if multiple upstream 
options result.

--
Adrian

On Mar 1, 2016, at 12:28 AM, Guz Egor wrote:
Adrian,

I disagree, host OS is very important for operators because of integration with 
all internal tools/repos/etc.

I think it makes sense to limit OS support in Magnum's main source. But I'm not 
sure that Fedora Atomic is the right choice: first of all, there is no 
documentation about it, and I don't think it's used/tested a lot by the 
Docker/Kubernetes/Mesos community. It makes sense to go with Ubuntu (I believe 
it's still the most adopted platform across all three COEs and OpenStack 
deployments) and CoreOS (which is highly adopted/tested in the Kubernetes 
community, and Mesosphere DCOS uses it as well).

We can implement CoreOS support as a driver, and users can use it as a 
reference implementation.

---
Egor


From: Adrian Otto
To: OpenStack Development Mailing List (not for usage questions)
Sent: Monday, February 29, 2016 10:36 AM
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Consider this: Which OS runs on the bay nodes is not important to end users. 
What matters to users is the environments their containers execute in, which 
has only one thing in common with the bay node OS: the kernel. The Linux 
syscall interface is stable enough that the various Linux distributions can all 
run concurrently in neighboring containers sharing the same kernel. There is really 
no material reason why the bay OS choice must match what distro the container 
is based on. Although I'm persuaded by Hongbin's concern to mitigate risk of 
future changes WRT whatever OS distro is the prevailing one for bay nodes, 
there are a few items of concern about duality I'd like to zero in on:

1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic have been much stronger.

2) Properly testing multiple bay node OS distros (would) significantly increase 
the run time and complexity of our functional tests.

3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.

If we proceed with just one supported disto for bay nodes, and offer 
extensibility points to allow alternates to be used in place of it, we should 
be able to address the risk concern of the chosen distro by selecting an 
alternate when that change is needed, by using those extensibility points. 
These include the ability to specify your own bay image, and the ability to use 
your own associated Heat template.

I see value in risk mitigation, it may make sense to simplify in the short term 
and address that need when it becomes necessary. My point of view might be 
different if we had contributors willing and ready to 

Re: [openstack-dev] [Fuel][Fuel-Library] Fuel CI issues

2016-03-01 Thread Sergey Kolekonov
I think we should also look at the root cause of these CI failures.
They are connected with differences in packages, not with manifests or the
deployment process.
So another possible solution is to stay as close as possible to the package
sources used by OpenStack Puppet modules CI.
For example, we have a BP [0] that adds an ability to deploy UCA packages
with Fuel.
Current package sources used by openstack-modules CI can be found here [1]

Just my 2c.
Thanks.

[0] https://blueprints.launchpad.net/fuel/+spec/deploy-with-uca-packages
[1]
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L4

On Tue, Mar 1, 2016 at 2:21 PM, Vladimir Kuklin 
wrote:

> Dmitry
>
> >I don't think "hurried" is applicable here: the only way to become more
> >ready to track upstream than we already are is to *start* tracking
> >upstream. Postponing that leaves us in a Catch-22 situation where we
> >can't stay in sync with upstream because we're not continuously catching
> >up with upstream.
>
> First of all, if you read my email again, you will see that I propose a
> way of tracking upstream in less continuous mode with nightly testing and
> switching to it based on automated integration testing which will leave us
> 0 opportunity to face the aforementioned issues.
>
> >That would lock us into that Catch-22 situation where we can't allow
> >Fuel CI to vote on puppet-openstack commits because fuel-library is
> >always too far behind puppet-openstack for its votes to mean anything
> >useful.
>
> This is not true. We can run FUEL CI against any set of commits.
>
> > We have to approach this from the opposite direction: make Fuel CI
> > stable and meaningful enough so that, 9 times out of 10, Fuel CI failure
> > indicates a real problem with the code, and the remaining cases can be
> > quickly unblocked by pushing a catch-up commit to fuel-library (ideally
> > with a Depends-On tag).
>
> Dmitry, could you please point me at the person who will be strictly
> responsible for creating this 'ketchup' commit? Do you know that this may
> take up the whole day (couple of hours to do RCA, couple of hours on
> writing and debugging and couple of hours for FUEL CI tests run) and block
> the entire Fuel project from having ANY code merged? Taking into
> consideration that openstack infra is currently under really high load it
> may take even several days for the fix to land into master. How do you
> expect us to have any feature merged prior to FF?
>
> > It is a matter of trust between projects: do we trust Puppet OpenStack
> > project to take Fuel's problems seriously and to avoid breaking our CI
> > more often than necessary? Can Puppet OpenStack project trust us with
> > the same? So far, our collaboration track record has been pretty good
> > bordering on exemplary, and I think we should build on that and move
> > forward instead of clinging to the old ways.
> > The problem with moving only one piece at a time is that you end up so
> > far behind that you're slowing everyone down. BKL and GIL are not the
> > only way to deal with concurrency, we can do better than that.
>
> I have always thought that building software is about verification being
> more important than 'trust'. There should not be any humanitarian stuff
> involved - we are not in a relationship with Puppet-OpenStack folks,
> although I really admire their work very much. We should not follow sliding
> git references without being 100% sure that we have mutual gating of the
> code.
>
> Moreover, having such a git ref as a source in our Puppetfile will lead to
> a situation where we have an UNREPRODUCIBLE build of the Fuel project. Each
> build may and will result in different code, and you will not be able to
> tell which without actually looking into the build logs. I think this is
> unacceptable, as it makes debugging and troubleshooting impossible:
> the problems cannot be reproduced.
>
> Additionally, we do not have:
>
> 1) Depends-On support
> 2) any process in place for monitoring the Puppet OpenStack Fuel CI
> job
> 3) any person responsible for monitoring that job
>
> Finally, we have another blocker issue [0] which essentially killed March
> 1st in the EU timezone from a code-merge point of view, as our master is
> currently blocked by a non-bootstrapping master node.
>
> That said, I propose that we:
>
> 1) immediately pick a set of stable commits to puppet-openstack
> 2) immediately update the Puppetfile with this set
> 3) unblock other fuel developers
> 4) fix the aforementioned issues
> 5) turn on tracking of upstream commits once we have all the pieces set up
> and we are sure that we will never break master with this change.
>
>
> [0] https://bugs.launchpad.net/fuel/+bug/1551584
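To make the reproducibility point above concrete, here is a hedged Puppetfile-style sketch (librarian-puppet syntax; the module names, URLs, and SHA are illustrative, not Fuel's actual Puppetfile contents). Pinning to a commit SHA yields the same code on every build, while pointing at a branch does not:

```ruby
# Pinned to an exact commit: every build fetches exactly this code.
mod 'nova',
  :git => 'https://git.openstack.org/openstack/puppet-nova',
  :ref => '1a2b3c4d5e6f'   # illustrative known-good SHA

# Floating ref: 'master' moves between builds, so two builds of the
# same Fuel artifact can contain different puppet-openstack code.
mod 'neutron',
  :git => 'https://git.openstack.org/openstack/puppet-neutron',
  :ref => 'master'
```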
>
> On Tue, Mar 1, 2016 at 5:10 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> On Mon, Feb 29, 2016 at 01:19:29PM +0300, Vladimir Kuklin wrote:
>> > Thanks for 

Re: [openstack-dev] OpenStack Contributor Awards

2016-03-01 Thread Cody A.W. Somerville
On Tue, Feb 16, 2016 at 5:43 AM, Tom Fifield  wrote:

> Hi all,
>
> I'd like to introduce a new round of community awards handed out by the
> Foundation, to be presented at the feedback session of the summit.
>
> 

>
> in the meantime, let's use this thread to discuss the fun part: goodies.
> What do you think we should lavish award winners with? Soft toys? Perpetual
> trophies? Baseball caps?
>

For some, a plaque or a nicely framed, signed letter of appreciation from
the board or TC would provide the right amount of personal touch to
appropriately show our appreciation.

-- 
Cody A.W. Somerville
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][taas]

2016-03-01 Thread Vinay Yadhav
Hi All,

It's time to induct new members into the TaaS core reviewer team. To kick
start the process, I nominate the following developers and active
contributors to the TaaS project:

1. Yamamoto Takashi
2. Soichi Shigeta


Cheers,
Vinay Yadhav
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread Anne Gentle
Or they use it to do a cool demo of running apps on an OpenStack cloud.

On Tue, Mar 1, 2016 at 2:04 PM, Tim Bell  wrote:

>
> Just to check, does OpenStack run on a Raspberry Pi ? Could cause some
> negative comments if it was
> not compatible/sized for a basic configuration.
>
> Tim
>
>
>
>
>
> On 01/03/16 20:41, "Thomas Goirand"  wrote:
>
> >On 03/01/2016 11:30 PM, Tom Fifield wrote:
> >> Excellent, excellent.
> >>
> >> What's the best place to buy Raspberry Pis these days?
> >
> >One of the 2 official sites:
> >https://www.element14.com/community/community/raspberry-pi
> >
> >The Pi 3 is the super nice shiny new stuff, with 64 arm bits.
> >
> >Cheers,
> >
> >Thomas Goirand (zigo)
> >
> >
> >Hopefully, with it, there will be no need of raspbian anymore (it was
> >there because of a very poor choice of CPU in model 1 and 2, just below
> >what the armhf builds required, forcing to use armel which is arm v4
> >instruction sets).
> >
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-03-01 Thread Sean Dague
On 03/01/2016 02:04 PM, Rob Crittenden wrote:
> Daniel P. Berrange wrote:
>> On Mon, Feb 29, 2016 at 11:59:06AM -0500, Sean Dague wrote:
>>> The nova/hooks.py infrastructure has been with us since early Nova. It's
>>> currently only annotated on a few locations - 'build_instance',
>>> 'create_instance', 'delete_instance', and 'instance_network_info'. It's
>>> got a couple of unit tests on it, but nothing that actually tests real
>>> behavior of the hooks we have specified.
>>>
>>> It does get used in the wild, and we do break it with changes we didn't
>>> ever anticipate would impact it -
>>> https://bugs.launchpad.net/nova/+bug/1518321
>>>
>>> However, when you look into how that is used, it's really really odd and
>>> fragile -
>>> https://github.com/richm/rdo-vm-factory/blob/master/rdo-ipa-nova/novahooks.py#L248
>>>
>>>
>>> def pre(self, *args, **kwargs):
>>> # args[7] is the injected_files parameter array
>>> # the value is ('filename', 'base64 encoded contents')
>>> ipaotp = str(uuid.uuid4())
>>> ipainject = ('/tmp/ipaotp', base64.b64encode(ipaotp))
>>> args[7].extend(self.inject_files)
>>> args[7].append(ipainject)
>>>
>>> In our continued quest on being more explicit about plug points it feels
>>> like we should either document the interface (which means creating
>>> stability on the hook parameters) or we should deprecate this construct
>>> as part of a bygone era.
>>>
>>> I lean on deprecation because it feels like a thing we don't really want
>>> to support going forward, but I can go either way.
>>
>> As it is designed, I think the hooks mechanism is really unsupportable
>> long term. It is exposing users to arbitrary internal Nova data structures
>> which we have changed at will and we cannot possibly ever consider them
>> to be a stable consumable API. I'm rather surprised we've not seen more
>> bugs like the one you've shown above - most likely that's a reflection
>> on not many people actually using this facility.
>>
>> I'd be strongly in favour of deprecation & removal of this hooking
>> mechanism, as it's unsupportable in any sane manner when it exposes
>> code to our internal unstable API & object model.
>>
>> If there's stuff people are doing in hooks that's impossible any other
>> way, we should really be looking at what we need to provide in our
>> public API to achieve the same goal, if it is use case we wish to be
>> able to support.
> 
> I'll just add that lots and lots of software has hooks. Just because the
> initial implementation decided to expose internal structures (which lots
> of other APIs do in order to be interesting) doesn't make it inherently
> bad. It just requires a stable API rather than passing *args, **kwargs
> around. It seems like the original design was to throw in the kitchen
> sink when a more targeted set of arguments would probably do just fine.

The original implementation comes from a time when OpenStack was a kit
to build cloud software out of. And nearly every bit was extendable in
various ways.

We are now transitioning into a phase where interoperability is more
important than differentiation. That means deprecating out some
generalized hook points and requiring that features be baked in, so that
all clouds will have them, and user experience on different things that
are OpenStack (tm) are more similar.

At the end of the day this is python, so you can monkey patch anything
you want. But by documenting a hook point we endorse that. And the
sentiment in this thread is pretty much that we can't endorse this one
any more.
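The deprecation being endorsed here could, as a rough sketch (invented names, not nova's actual deprecation code), look like a wrapper that keeps a hook point working while warning for a cycle before removal:

```python
import warnings

# Hypothetical illustration only: keep the hook callable during the
# deprecation period, but emit a DeprecationWarning on every invocation.

def run_deprecated_hook(name, fn, *args, **kwargs):
    warnings.warn(
        "The '%s' hook is deprecated and will be removed; "
        "use a supported public API instead." % name,
        DeprecationWarning, stacklevel=2)
    return fn(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    run_deprecated_hook('build_instance', lambda: None)

print(len(caught))                   # 1 warning recorded
print(caught[0].category.__name__)   # DeprecationWarning
```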

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread Tim Bell

Just to check, does OpenStack run on a Raspberry Pi? It could cause some
negative comments if it were not compatible with, or sized for, a basic
configuration.

Tim





On 01/03/16 20:41, "Thomas Goirand"  wrote:

>On 03/01/2016 11:30 PM, Tom Fifield wrote:
>> Excellent, excellent.
>> 
>> What's the best place to buy Raspberry Pis these days?
>
>One of the 2 official sites:
>https://www.element14.com/community/community/raspberry-pi
>
>The Pi 3 is the super nice shiny new stuff, with 64 arm bits.
>
>Cheers,
>
>Thomas Goirand (zigo)
>
>
>Hopefully, with it, there will be no need of raspbian anymore (it was
>there because of a very poor choice of CPU in model 1 and 2, just below
>what the armhf builds required, forcing to use armel which is arm v4
>instruction sets).
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] Multi-release packages

2016-03-01 Thread Alexey Shtokolov
Fuelers,

I would like to request a feature freeze exception for the "Multi-release
packages" feature [0][1].

This feature extends the already existing multi-release support in Fuel
plugins. We would like to allow plugin developers to specify all plugin
components per release and to distribute different deployment graphs,
partitioning schemas, and network and node roles for each release in one
package.

This feature is not a blocker for us, but it provides a very important
improvement to the user and plugin-developer experience.

We need 3 weeks after FF to finish this feature.
Risk of not delivering it after 3 weeks is low.

[0] https://blueprints.launchpad.net/fuel/+spec/plugins-v5
[1] https://review.openstack.org/#/c/271417
---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] 8.0.0b1 is out (Mitaka beta)

2016-03-01 Thread Emilien Macchi
Our group is proud to announce we have a first beta release of Puppet
modules for Mitaka.

Those modules have now a 8.0.0b1 tag that packagers can use:
puppet-aodh
puppet-ceilometer
puppet-cinder
puppet-designate
puppet-glance
puppet-gnocchi
puppet-heat
puppet-horizon
puppet-ironic
puppet-keystone
puppet-manila
puppet-mistral
puppet-murano
puppet-neutron
puppet-nova
puppet-openstack_extras
puppet-openstacklib
puppet-openstack_spec_helper
puppet-sahara
puppet-swift
puppet-tempest
puppet-trove
puppet-zaqar

Thank you all for your hard work to make it happen!

Note: no release has been done in Puppetforge.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread Thomas Goirand
On 03/01/2016 11:30 PM, Tom Fifield wrote:
> Excellent, excellent.
> 
> What's the best place to buy Raspberry Pis these days?

One of the 2 official sites:
https://www.element14.com/community/community/raspberry-pi

The Pi 3 is the super nice shiny new stuff, with 64-bit ARM.

Cheers,

Thomas Goirand (zigo)


Hopefully, with it, there will be no need for Raspbian anymore (it existed
because of a very poor choice of CPU in models 1 and 2, just below what the
armhf builds required, forcing the use of armel, which targets the ARMv4
instruction set).



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] audience for release notes

2016-03-01 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2016-03-02 07:14:19 +1300:
> There is a fixture in pbr's test suite to create keys without entropy use;
> it should be easy to adapt for use by reno's tests.

That's likely where I got the original version of the one in reno now.

Doug

> On 1 Mar 2016 6:52 AM, "Thomas Goirand"  wrote:
> 
> > On 02/29/2016 10:31 PM, Doug Hellmann wrote:
> > >>> Give this a try and see if it works any better:
> > >>> https://review.openstack.org/285812
> > >>
> > >> Oh, thanks so much! I'll try and give feedback on review.d.o. Is the
> > >> issue around the (missed) use of --debug-quick-random?
> > >>
> > >> Why do we need Reno unit tests to generate so many GPG keys btw, why not
> > >> just one or 2?
> > >
> > > As you've noticed, reno scans the history of a git repository to find
> > > release notes. It doesn't actually read any of the files from the work
> > > tree. So for tests, we need to create a little git repo and then tag
> > > commits to look like releases. We do that for a bunch of scenarios, and
> > > each test case generates a GPG key to use for signing the tags.
> >
> > The patch works super well, and I uploaded a new version of the Debian
> > package with unit tests at build time.
> >
> > However, the combined output of gnupg and testr is a bit ugly. Wouldn't
> > it be nicer to just generate a single GPG key, reused by multiple tests,
> > using a unit test fixture?
> >
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
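Thomas's suggestion above — generating a single GPG key once and reusing it across test cases via a fixture — can be sketched roughly as follows. The expensive key generation is simulated here; a real version would shell out to `gpg --batch --gen-key` in a throwaway GNUPGHOME (all names are illustrative, not reno's actual test code):

```python
import functools

CALLS = {'count': 0}  # track how many times the expensive step runs

@functools.lru_cache(maxsize=1)
def shared_signing_key():
    """Simulated one-time key generation, cached for all callers."""
    CALLS['count'] += 1
    return 'FAKE-KEY-FINGERPRINT'

class FakeTestCase:
    # Each test asks for the key, but generation happens only once.
    def test_tag_one(self):
        assert shared_signing_key() == 'FAKE-KEY-FINGERPRINT'

    def test_tag_two(self):
        assert shared_signing_key() == 'FAKE-KEY-FINGERPRINT'

tc = FakeTestCase()
tc.test_tag_one()
tc.test_tag_two()
print(CALLS['count'])  # 1 -- the key was generated a single time
```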

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] FF exception request for Numa and CPU pinning

2016-03-01 Thread Dmitry Klenov
Hi,

I'd like to request a feature freeze exception for the "Add support for
NUMA/CPU pinning features" feature [0].

Part of this feature is already merged [1]. We have the following patches
in work / on review:

https://review.openstack.org/#/c/281802/
https://review.openstack.org/#/c/285282/
https://review.openstack.org/#/c/284171/
https://review.openstack.org/#/c/280624/
https://review.openstack.org/#/c/280115/
https://review.openstack.org/#/c/285309/

No new patches are expected.

We need 2 weeks after FF to finish this feature.
Risk of not delivering it after 2 weeks is low.

Regards,
Dmitry

[0] https://blueprints.launchpad.net/fuel/+spec/support-numa-cpu-pinning
[1]
https://review.openstack.org/#/q/status:merged+topic:bp/support-numa-cpu-pinning
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] FF exception request for HugePages

2016-03-01 Thread Dmitry Klenov
Hi,

I'd like to request a feature freeze exception for the "Support for Huge
pages for improved performance" feature [0].

Part of this feature is already merged [1]. We have the following patches
in work / on review:

https://review.openstack.org/#/c/286628/
https://review.openstack.org/#/c/282367/
https://review.openstack.org/#/c/286495/

And we need to write new patches for the following parts of this feature:
https://blueprints.launchpad.net/fuel/+spec/support-hugepages

We need 1.5 weeks after FF to finish this feature.
Risk of not delivering it after 1.5 weeks is low.

Regards,
Dmitry

[0] https://blueprints.launchpad.net/fuel/+spec/support-hugepages
[1]
https://review.openstack.org/#/q/status:merged+topic:bp/support-hugepages
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [FFE] Unlock Settings Tab

2016-03-01 Thread Alexey Shtokolov
Fuelers,

I would like to request a feature freeze exception for the "Unlock settings
tab" feature [0].

This feature, combined with task-based deployment [1] and LCM readiness for
Fuel deployment tasks [2], unlocks basic LCM in Fuel. We conducted a
thorough redesign of this feature and split it into several granular
changes [3]-[6] to allow users to change settings on deployed, partially
deployed, stopped, or errored clusters, and then run redeployment using a
particular graph (custom, or calculated from the expected changes stored in
the DB) with the new parameters.

We need 3 weeks after FF to finish this feature.
Risk of not delivering it after 3 weeks is low.

Patches on review or in progress:

https://review.openstack.org/#/c/284139/
https://review.openstack.org/#/c/279714/
https://review.openstack.org/#/c/286754/
https://review.openstack.org/#/c/286783/

Specs:
https://review.openstack.org/#/c/286713/
https://review.openstack.org/#/c/284797/
https://review.openstack.org/#/c/282695/
https://review.openstack.org/#/c/284250/


[0] https://blueprints.launchpad.net/fuel/+spec/unlock-settings-tab
[1]
https://blueprints.launchpad.net/fuel/+spec/enable-task-based-deployment
[2] https://blueprints.launchpad.net/fuel/+spec/granular-task-lcm-readiness
[3] https://blueprints.launchpad.net/fuel/+spec/computable-task-fields-yaql
[4]
https://blueprints.launchpad.net/fuel/+spec/store-deployment-tasks-history
[5] https://blueprints.launchpad.net/fuel/+spec/custom-graph-execution
[6]
https://blueprints.launchpad.net/fuel/+spec/save-deployment-info-in-database

-- 
---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Steven Dake (stdake)


On 3/1/16, 10:47 AM, "Tristan Cacqueray"  wrote:

>On 03/01/2016 05:12 PM, Ryan Hallisey wrote:
>> Hello,
>> 
>> I have experience writing selinux policy. My plan was to write the
>>selinux policy for Kolla in the next cycle.  I'd be interested in
>>joining if that fits the criteria here.
>> 
>
>Hello Ryan,
>
>While knowing how to write SELinux policy is a great asset for a coresec
>team member, it's not a requirement. Such a team's purpose isn't to
>implement core security features, but rather to be responsive to private
>security bugs: confirming the issue and discussing the scope of any
>vulnerability along with potential solutions.
>
>
>
>> Thanks,
>> -Ryan
>> 
>> - Original Message -
>> From: "Steven Dake (stdake)" 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>> Sent: Tuesday, March 1, 2016 11:55:55 AM
>> Subject: [openstack-dev] [kolla][security] Obtaining
>>the   vulnerability:managed tag
>> 
>> Core reviewers, 
>> 
>> Please review this document:
>> 
>>https://github.com/openstack/governance/blob/master/reference/tags/vulner
>>ability_managed.rst
>> 
>> It describes how vulnerability management is handled at a high level
>>for Kolla. When we are ready, I want the Kolla delivery repos'
>>vulnerabilities to be managed by the VMT team. By doing this, we
>>standardize with other OpenStack processes for handling security
>>vulnerabilities. 
>> 
>For reference, the full process is described here:
>https://security.openstack.org/vmt-process.html
>
>> The first step is to form a kolla-coresec team, and create a separate
>>kolla-coresec tracker. I have already created the tracker for
>>kolla-coresec and the kolla-coresec team in launchpad:
>> 
>> https://launchpad.net/~kolla-coresec
>> 
>> https://launchpad.net/kolla-coresec
>> 
>> I have a history of security expertise, and the PTL needs to be on the
>>team as an escalation point as described in the VMT tagging document
>>above. I also need 2-3 more volunteers to join the team. You can read
>>the requirements of the job duties in the vulnerability:managed tag.
>> 
>> If you're interested in joining the VMT team, please respond on this
>>thread. If there are more than 4 individuals interested in joining this
>>team, I will form the team from the most active members based upon
>>liberty + mitaka commits, reviews, and PDE spent.
>> 
>Note that the VMT team is global to OpenStack; I guess you are referring
>to the Kolla VMT team (now known as kolla-coresec).

Yes that is correct.  Thanks Tristan for clarifying.
>
>
>Regards,
>-Tristan
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-03-01 Thread Rob Crittenden
Daniel P. Berrange wrote:
> On Mon, Feb 29, 2016 at 11:59:06AM -0500, Sean Dague wrote:
>> The nova/hooks.py infrastructure has been with us since early Nova. It's
>> currently only annotated on a few locations - 'build_instance',
>> 'create_instance', 'delete_instance', and 'instance_network_info'. It's
>> got a couple of unit tests on it, but nothing that actually tests real
>> behavior of the hooks we have specified.
>>
>> It does get used in the wild, and we do break it with changes we didn't
>> ever anticipate would impact it -
>> https://bugs.launchpad.net/nova/+bug/1518321
>>
>> However, when you look into how that is used, it's really really odd and
>> fragile -
>> https://github.com/richm/rdo-vm-factory/blob/master/rdo-ipa-nova/novahooks.py#L248
>>
>>
>> def pre(self, *args, **kwargs):
>> # args[7] is the injected_files parameter array
>> # the value is ('filename', 'base64 encoded contents')
>> ipaotp = str(uuid.uuid4())
>> ipainject = ('/tmp/ipaotp', base64.b64encode(ipaotp))
>> args[7].extend(self.inject_files)
>> args[7].append(ipainject)
>>
>> In our continued quest on being more explicit about plug points it feels
>> like we should either document the interface (which means creating
>> stability on the hook parameters) or we should deprecate this construct
>> as part of a bygone era.
>>
>> I lean on deprecation because it feels like a thing we don't really want
>> to support going forward, but I can go either way.
> 
> As it is designed, I think the hooks mechanism is really unsupportable
> long term. It is exposing users to arbitrary internal Nova data structures
> which we have changed at will and we cannot possibly ever consider them
> to be a stable consumable API. I'm rather surprised we've not seen more
> bugs like the one you've shown above - most likely that's a reflection
> on not many people actually using this facility.
> 
> I'd be strongly in favour of deprecation & removal of this hooking
> mechanism, as it's unsupportable in any sane manner when it exposes
> code to our internal unstable API & object model.
> 
> If there's stuff people are doing in hooks that's impossible any other
> way, we should really be looking at what we need to provide in our
> public API to achieve the same goal, if it is use case we wish to be
> able to support.

I'll just add that lots and lots of software has hooks. Just because the
initial implementation decided to expose internal structures (which lots
of other APIs do in order to be interesting) doesn't make it inherently
bad. It just requires a stable API rather than passing *args, **kwargs
around. It seems like the original design was to throw in the kitchen
sink when a more targeted set of arguments would probably do just fine.
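Rob's "stable API rather than *args, **kwargs" point can be sketched roughly as follows. This is a hypothetical illustration, not nova's real interface — the class and field names are invented:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class BuildInstanceContext:
    """A stable, documented set of fields a build_instance hook may use."""
    instance_uuid: str
    injected_files: List[Tuple[str, str]]  # (filename, base64 contents)

class HookManager:
    def __init__(self):
        self._hooks: Dict[str, List[Callable]] = {}

    def register(self, point: str, fn: Callable) -> None:
        self._hooks.setdefault(point, []).append(fn)

    def run_pre(self, point: str, ctx) -> None:
        # Each hook receives one typed context object instead of a
        # positional blob it must index into (e.g. args[7]).
        for fn in self._hooks.get(point, []):
            fn(ctx)

def inject_otp(ctx: BuildInstanceContext) -> None:
    ctx.injected_files.append(('/tmp/otp', 'c2VjcmV0'))

hooks = HookManager()
hooks.register('build_instance', inject_otp)
ctx = BuildInstanceContext(instance_uuid='abc', injected_files=[])
hooks.run_pre('build_instance', ctx)
print(ctx.injected_files)  # [('/tmp/otp', 'c2VjcmV0')]
```

A hook written against such a context object keeps working as long as the documented fields keep their meaning, instead of breaking whenever a positional parameter moves.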

rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-03-01 Thread Rob Crittenden
Daniel P. Berrange wrote:
> On Mon, Feb 29, 2016 at 12:36:03PM -0700, Rich Megginson wrote:
>> On 02/29/2016 12:19 PM, Chris Friesen wrote:
>>> On 02/29/2016 12:22 PM, Daniel P. Berrange wrote:
>>>
 There's three core scenarios for hooks

  1. Modifying some aspect of the Nova operation
  2. Triggering an external action synchronously to some Nova operation
  3. Triggering an external action asynchronously to some Nova operation

 The Rdo example is falling in scenario 1 since it is modifying the
 injected files. I think this is is absolutely the kind of thing
 we should explicitly *never* support. When external code can arbitrarily
 modify some aspect of Nova operation we're in totally unchartered
 territory as to the behaviour of Nova. To support that we'd need to
 provide a stable internal API which is just not something we want to
 tie ourselves into. I don't know just what the Rdo example is trying
 to achieve, but whatever it is, it should be via some supportable API
 and not a hook.,

 Scenaris 2 and 3 are both valid to consider. Using the notifications
 system gets as an asynchronous trigger mechanism, which is probably
 fine for many scenarios.  The big question is whether there's a
 compelling need for scenario two, where the external action blocks
 execution of the Nova operation until it has completed its hook.
>>>
>>> Even in the case of scenario two it is possible in some cases to use a
>>> proxy to intercept the HTTP request, take action, and then forward it or
>>> reject it as appropriate.
>>>
>>> I think the real question is whether there's a need to trigger an external
>>> action synchronously from down in the guts of the nova code.
>>
>> The hooks do the following: 
>> https://github.com/rcritten/rdo-vm-factory/blob/use-centos/rdo-ipa-nova/novahooks.py#L271
>>
>> We need to generate a token (ipaotp) and call ipa host-add with that token
>> _before_ the new machine has a chance to call ipa-client-install.  We have
>> to guarantee that the client cannot call ipa-client-install until we get
>> back the response from ipa that the host has been added with the token.
>> Additionally, we store the token in an injected_file in the new machine, so
>> the file can be removed as soon as possible.  We tried storing the token in
>> the VM metadata, but there is apparently no way to delete it?  Can the
>> machine do
>>
>> curl -XDELETE http://169.254.169.254/openstack/latest/metadata?key=value ?
>>
>> Using the build_instance.pre hook in Nova makes this simple and
>> straightforward.  It's also relatively painless to use the network_info.post
>> hook to handle the floating ip address assignment.
>>
>> Is it possible to do the above using notifications without jumping through
>> too many hoops?
> 
> So from a high level POV, you are trying to generate a security token
> which will be provided to the guest OS before it is booted.
> 
> I think that is a pretty clearly useful feature, and something that
> should really be officially integrated into Nova as a concept rather
> than done behind nova's back as a hook.

Note that the reason the file was injected the way it was is so that
Nova would have no idea there even is a token. We didn't want someone
later peeking at the metadata, or a notification, to get the token.
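For illustration, the token-before-boot flow described in this subthread boils down to something like the following sketch. The function name and path are illustrative; the real hook in novahooks.py also registers the host with IPA using the plain token:

```python
import base64
import uuid

def make_otp_injection(path='/tmp/ipaotp'):
    """Generate a one-time password and the matching injected_files entry."""
    ipaotp = str(uuid.uuid4())
    # nova's injected_files entries are (filename, base64-encoded contents)
    encoded = base64.b64encode(ipaotp.encode('utf-8'))
    return ipaotp, (path, encoded)

otp, inject = make_otp_injection()
# The plain token would go to `ipa host-add`; the encoded copy is injected
# into the guest and removed there as soon as it has been used.
assert base64.b64decode(inject[1]).decode('utf-8') == otp
```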

rob


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Hitting Mitaka Milestone 3 and feature freeze exception process

2016-03-01 Thread Armando M.
Hi Neutrinos,

As I am sure all of you know, this week is milestone week [1]; for this
reason, we need to cut releases for neutron, the neutron *-aas projects, and
the client. Patches [2, 3] are still in draft, and while work is in flight,
we'll hold off until we have reached a sensible point where it's safe to
land them with the right hashes.

That said, not everything is complete and we'll need a few feature freeze
exceptions.

This cycle I would like to experiment with a new exception process: I
proposed a change to neutron-specs [4]. I would like to invite anyone who
has been identified as a feature owner [5] (or anyone who actively
participated in the development of the feature as an approver or simply an
interested party) to comment on the status of the feature, so that I can
assess (with your help) whether the feature is complete, whether it can
safely be granted an exception (being very close to complete), or whether
it needs to be deferred altogether.

So, consider this a collective sign-off. We'll look into finalizing this
document once we have the first release candidate. Then, this document can
be used as a base for Newton planning.

For any questions/comments, please do not hesitate to ask.

Many thanks,
Armando

[1] http://releases.openstack.org/mitaka/schedule.html
[2] https://review.openstack.org/#/c/286609
[3] https://review.openstack.org/#/c/286585/
[4] https://review.openstack.org/#/c/286413/
[5]
https://review.openstack.org/#/c/286413/3/specs/mitaka/postmortem/postmortem.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-03-01 Thread Joshua Harlow

Sorry for the top posting, but it seems like doing so is OK here.

So an idea, feel free to use it or not...

If an in-memory notification mechanism is desired, then a library that I
extracted from taskflow and have been using could be helpful here.

It doesn't bring in the full oslo.messaging notifications (since it's only
for in-memory use after all), but it could be just the thing needed to hook
into observation points (with the nova folks providing a more concrete
definition/schema of what is or can be observed):


https://notifier.readthedocs.org

https://pypi.python.org/pypi/notifier/

A really simple example:

>>> import notifier
>>> import time
>>> def watcher(event_type, details):
...     print("%s happened with details: %s" % (event_type, details))
...
>>> n = notifier.Notifier()
>>> n.register('vm-booted', watcher)
<Listener at 0x7f1e6f294c80>
>>> n.notify('vm-booted', details={'when': time.time()})
vm-booted happened with details: {'when': 1456855953.650803}


Already in global-requirements btw,

-Josh

On 02/29/2016 08:59 AM, Sean Dague wrote:

The nova/hooks.py infrastructure has been with us since early Nova. It's
currently only annotated on a few locations - 'build_instance',
'create_instance', 'delete_instance', and 'instance_network_info'. It's
got a couple of unit tests on it, but nothing that actually tests real
behavior of the hooks we have specified.

It does get used in the wild, and we do break it with changes we didn't
ever anticipate would impact it -
https://bugs.launchpad.net/nova/+bug/1518321

However, when you look into how that is used, it's really really odd and
fragile -
https://github.com/richm/rdo-vm-factory/blob/master/rdo-ipa-nova/novahooks.py#L248


 def pre(self, *args, **kwargs):
 # args[7] is the injected_files parameter array
 # the value is ('filename', 'base64 encoded contents')
 ipaotp = str(uuid.uuid4())
 ipainject = ('/tmp/ipaotp', base64.b64encode(ipaotp))
 args[7].extend(self.inject_files)
 args[7].append(ipainject)
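The fragility here is that hooks receive bare *args and index into them (args[7] above); any change to the decorated function's signature silently shifts what the hook sees. A rough illustration of the difference, with hypothetical class and method names (not nova's actual hook interface):

```python
# Hypothetical illustration -- not nova's actual hook API.

# A positional hook breaks as soon as the caller's signature changes:
class PositionalHook:
    def pre(self, *args, **kwargs):
        injected_files = args[7]  # silently wrong if an argument is ever added
        injected_files.append(('/tmp/otp', 'c2VjcmV0'))

# An explicit, documented interface pins down what a hook may touch:
class ExplicitHook:
    def pre_build_instance(self, injected_files, **extra):
        # 'injected_files' is passed by name, so signature changes elsewhere
        # cannot shift it out from under the hook.
        injected_files.append(('/tmp/otp', 'c2VjcmV0'))

files = []
ExplicitHook().pre_build_instance(files)
print(files)  # [('/tmp/otp', 'c2VjcmV0')]
```

Documenting the interface, as suggested below, would amount to committing to something like the explicit form.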

In our continued quest to be more explicit about plug points, it feels
like we should either document the interface (which means creating
stability in the hook parameters) or deprecate this construct as part of
a bygone era.

I lean toward deprecation because it feels like a thing we don't really
want to support going forward, but I can go either way.

-Sean

P.S. I'm starting to look at in tree functional testing for all of this,
in the event that we decide not to deprecate it. It's definitely made a
little hard by the way all exceptions are caught when hooks go wrong.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] audience for release notes

2016-03-01 Thread Robert Collins
There is a fixture in pbr's test suite to create keys without entropy use;
it should be easy to adapt for use by reno's tests.
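For reference, GnuPG's unattended key generation takes a batch parameter block; a shared fixture along these lines could build the parameters once and reuse a single key across tests. This is a sketch only: it builds the parameter file contents with the stdlib and leaves the actual gpg invocation as a comment (it assumes GnuPG >= 2.1 for %no-protection).

```python
def gpg_batch_params(name="reno developer", email="reno@example.com"):
    """Build a GnuPG --batch --gen-key parameter block for a throwaway key."""
    return "\n".join([
        "%no-protection",   # no passphrase prompt (GnuPG >= 2.1)
        "%transient-key",   # faster, low-security generation meant for tests
        "Key-Type: RSA",
        "Key-Length: 1024",
        "Name-Real: " + name,
        "Name-Email: " + email,
        "Expire-Date: 0",
        "%commit",
    ]) + "\n"

# A fixture would write this once into a temporary GNUPGHOME and run, e.g.:
#   gpg --homedir <tmpdir> --batch --gen-key <params-file>
# and every test case would then reuse that home instead of generating
# its own key.
params = gpg_batch_params()
print("Name-Email: reno@example.com" in params)  # True
```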
On 1 Mar 2016 6:52 AM, "Thomas Goirand"  wrote:

> On 02/29/2016 10:31 PM, Doug Hellmann wrote:
> >>> Give this a try and see if it works any better:
> >>> https://review.openstack.org/285812
> >>
> >> Oh, thanks so much! I'll try and give feedback on review.d.o. Is the
> >> issue around the (missed) use of --debug-quick-random?
> >>
> >> Why do we need Reno unit tests to generate so many GPG keys btw, why not
> >> just one or 2?
> >
> > As you've noticed, reno scans the history of a git repository to find
> > release notes. It doesn't actually read any of the files from the work
> > tree. So for tests, we need to create a little git repo and then tag
> > commits to look like releases. We do that for a bunch of scenarios, and
> > each test case generates a GPG key to use for signing the tags.
>
> The patch works super well, and I uploaded a new version of the Debian
> package with unit tests at build time.
>
> However, the combined output of gnupg and testr is a bit ugly. Wouldn't
> it be nicer to just generate a single GPG key, reused by multiple tests,
> using a unit test fixture?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we cleanup migration from virt drivers?

2016-03-01 Thread Matt Riedemann



On 3/1/2016 11:52 AM, Matt Riedemann wrote:



On 3/1/2016 1:47 AM, Eli Qiao wrote:

hello Nova hackers

I see that some virt drivers' confirm_resize/revert_resize methods (see
links below) are passed a 'migration' parameter, but it is not used at all
in the virt layer drivers. I wonder why we still keep it (is it reserved
for future use)? Can we do a cleanup?
IMO, migration status should be maintained by the nova compute layer instead
of the virt layer.

https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L1372


https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L137


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7437





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It at least looks safe to do that. The xenapi and ironic drivers don't
use it either.

Looking at the history of the virt driver method, it looks like it was
added back in essex [1] but wasn't used in that change either, so
probably just an oversight on why it needed to be passed down to the
virt driver.

[1]
https://github.com/openstack/nova/commit/6f3ae6e1e5453330e14807348f6e3f6587877946




I see the out-of-tree nova-powervm driver uses it [1], but that could 
either be handled in the compute manager, or the code in the virt driver 
could just check the instance.host value against the CONF.host value 
from nova.conf.
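That instance.host check can be sketched roughly as below; the Conf and Instance classes are stand-ins for illustration, not nova's actual objects.

```python
# Illustrative sketch only -- stand-ins for nova's CONF and Instance objects.
class Conf:
    host = "compute-1"          # the local hostname from nova.conf

CONF = Conf()

class Instance:
    def __init__(self, host):
        self.host = host        # the host the instance is assigned to

def is_local_instance(instance, conf=CONF):
    """True when this compute node owns the instance."""
    return instance.host == conf.host

print(is_local_instance(Instance("compute-1")))  # True
print(is_local_instance(Instance("compute-2")))  # False
```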


[1] 
http://git.openstack.org/cgit/openstack/nova-powervm/tree/nova_powervm/virt/powervm/driver.py#n1217


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we cleanup migration from virt drivers?

2016-03-01 Thread Matt Riedemann



On 3/1/2016 1:47 AM, Eli Qiao wrote:

hello Nova hackers

I see that some virt drivers' confirm_resize/revert_resize methods (see
links below) are passed a 'migration' parameter, but it is not used at all
in the virt layer drivers. I wonder why we still keep it (is it reserved
for future use)? Can we do a cleanup?
IMO, migration status should be maintained by the nova compute layer instead
of the virt layer.

https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L1372

https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L137

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7437




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It at least looks safe to do that. The xenapi and ironic drivers don't 
use it either.


Looking at the history of the virt driver method, it looks like it was 
added back in essex [1] but wasn't used in that change either, so 
probably just an oversight on why it needed to be passed down to the 
virt driver.


[1] 
https://github.com/openstack/nova/commit/6f3ae6e1e5453330e14807348f6e3f6587877946


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Tristan Cacqueray
On 03/01/2016 05:12 PM, Ryan Hallisey wrote:
> Hello,
> 
> I have experience writing selinux policy. My plan was to write the selinux 
> policy for Kolla in the next cycle.  I'd be interested in joining if that 
> fits the criteria here.
> 

Hello Ryan,

While knowing how to write SELinux policy is a great asset for a coresec
team member, it's not a requirement. Such a team's purpose isn't to
implement core security features, but rather to be responsive to private
security bugs: confirming each issue and discussing the scope of any
vulnerability along with potential solutions.



> Thanks,
> -Ryan
> 
> - Original Message -
> From: "Steven Dake (stdake)" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, March 1, 2016 11:55:55 AM
> Subject: [openstack-dev] [kolla][security] Obtaining the  
> vulnerability:managed tag
> 
> Core reviewers, 
> 
> Please review this document: 
> https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
>  
> 
> It describes how vulnerability management is handled at a high level for 
> Kolla. When we are ready, I want the kolla delivery repos vulnerabilities to 
> be managed by the VMT team. By doing this, we standardize with other 
> OpenStack processes for handling security vulnerabilities. 
> 
For reference, the full process is described here:
https://security.openstack.org/vmt-process.html

> The first step is to form a kolla-coresec team, and create a separate 
> kolla-coresec tracker. I have already created the tracker for kolla-coresec 
> and the kolla-coresec team in launchpad: 
> 
> https://launchpad.net/~kolla-coresec 
> 
> https://launchpad.net/kolla-coresec 
> 
> I have a history of security expertise, and the PTL needs to be on the team 
> as an escalation point as described in the VMT tagging document above. I also 
> need 2-3 more volunteers to join the team. You can read the requirements of 
> the job duties in the vulnerability:managed tag. 
> 
> If you're interested in joining the VMT team, please respond on this thread. If 
> there are more than 4 individuals interested in joining this team, I will 
> form the team from the most active members based upon liberty + mitaka 
> commits, reviews, and PDE spent. 
> 
Note that the VMT team is global to openstack, I guess you are referring
to the Kolla VMT team (now known as kolla-coresec).


Regards,
-Tristan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][horizon] django_openstack_auth 1.2.2 release (kilo)

2016-03-01 Thread doug
We are glad to announce the release of:

django_openstack_auth 1.2.2: Django authentication backend for use
with OpenStack Identity

This release is part of the kilo stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/django_openstack_auth/

Please report issues through launchpad:

https://bugs.launchpad.net/django-openstack-auth

For more details, please see below.

Changes in django_openstack_auth 1.2.1..1.2.2
-

9bf38c8 Fixing backward compatibility
bf88c17 Revert - Cache the User's Project by Token ID

Diffstat (except docs and test files)
-

openstack_auth/utils.py   | 37 +++
openstack_auth/views.py   |  2 --
3 files changed, 4 insertions(+), 94 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Reminder our next meeting is 23:00 UTC

2016-03-01 Thread Steven Dake (stdake)
Reminder our next meeting is Wednesday at 23:00 UTC to accommodate APAC/US 
folks.

This is a change from the past, where we only had meetings at 16:30 UTC.  For 
more details see:

https://wiki.openstack.org/wiki/Meetings/Kolla

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Ryan Hallisey
Hello,

I have experience writing selinux policy. My plan was to write the selinux 
policy for Kolla in the next cycle.  I'd be interested in joining if that fits 
the criteria here.

Thanks,
-Ryan

- Original Message -
From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Tuesday, March 1, 2016 11:55:55 AM
Subject: [openstack-dev] [kolla][security] Obtaining the
vulnerability:managed tag

Core reviewers, 

Please review this document: 
https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
 

It describes how vulnerability management is handled at a high level for Kolla. 
When we are ready, I want the kolla delivery repos vulnerabilities to be 
managed by the VMT team. By doing this, we standardize with other OpenStack 
processes for handling security vulnerabilities. 

The first step is to form a kolla-coresec team, and create a separate 
kolla-coresec tracker. I have already created the tracker for kolla-coresec and 
the kolla-coresec team in launchpad: 

https://launchpad.net/~kolla-coresec 

https://launchpad.net/kolla-coresec 

I have a history of security expertise, and the PTL needs to be on the team as 
an escalation point as described in the VMT tagging document above. I also need 
2-3 more volunteers to join the team. You can read the requirements of the job 
duties in the vulnerability:managed tag. 

If you're interested in joining the VMT team, please respond on this thread. If 
there are more than 4 individuals interested in joining this team, I will form 
the team from the most active members based upon liberty + mitaka commits, 
reviews, and PDE spent. 

Regards 
-steve 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-03-01 Thread Michael Krotscheck
The keystone patch has landed. I've gone ahead and filed the appropriate
launchpad bug to address this issue:

https://bugs.launchpad.net/oslo.config/+bug/1551836

Note: Using latent configuration imposes a forward-going maintenance burden
on all projects impacted, if released in mitaka. As such I recommend that
all PTLs mark this as a release-blocking bug, in order to buy us more time
to get these patches landed. I am working as best I can, however I cannot
guarantee that I'll be able to land all these patches in time.

Additionally, I will not be able to address issues caused by projects that
have not adopted oslo.config's generate-config. I hope those teams will be
able to find their own paths forward.

Who is willing to help?

Michael

On Fri, Feb 26, 2016 at 6:09 AM Michael Krotscheck 
wrote:

> Alright, I have a first sample patch up for what was discussed in this
> thread here:
>
> (Keystone) https://review.openstack.org/#/c/285308/
>
> The noted TODO on that is the cors middleware should (eventually) provide
> its own set_defaults method, so that CORS_OPTS isn't exposed publicly.
> However, dhellmann doesn't believe we have time for that in Mitaka, since
> oslo_middleware is already frozen for the release. I'll mark it as a todo
> item for myself, as the next cycle will contain a good amount of additional
> work on this portion of openstack.
>
> Given the time constraints, I'll wait until Tuesday for everyone to weigh
> in on the implementation. After that I will start converting the other
> projects over as best I can and as I have time. Who is willing to help?
>
> Michael
>
> On Thu, Feb 25, 2016 at 9:05 AM Michael Krotscheck 
> wrote:
>
>> On Thu, Feb 18, 2016 at 10:18 AM Morgan Fainberg <
>> morgan.fainb...@gmail.com> wrote:
>>
>>>
>>> I am against "option 1". This could be a case where we classify it as a
>>> release blocking bug for Mitaka final (is that reasonable to have m3 with
>>> the current scenario and final to be fixed?), which opens the timeline a
>>> bit rather than hard against feature-freeze.
>>>
>>
>> This sounds like a really good way to get us more time, so I'm in favor
>> of this. However, even with the additional time I will not be able to land
>> all these patches on my own.
>>
>> Who is willing to help?
>>
>> Michael
>>
>>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Unable to get ceilometer events for instances running for demo project

2016-03-01 Thread Umar Yousaf
I have a single-node DevStack Liberty setup working, and I want to record
all the ceilometer events, like compute.instance.start,
compute.instance.end, compute.instance.update, etc., that occurred recently.
I am unable to get any events for instances running in the demo project,
i.e. when I try "ceilometer event-list" I end up with an empty list, but I
can fortunately get all the necessary events for instances running in the
admin project/tenant with the same command. In addition to this, I want to
get these through the Python client, so if someone could provide me with
the equivalent Python call, that would be more than handy.
Thanks in advance :)
Regards,
Umar
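For the Python side, a ceilometer event query is a list of field/op/value dicts passed to the client's events.list(). Below is a sketch: the query builder runs with the stdlib only, while the actual client call is left as a comment because it assumes python-ceilometerclient and valid demo-project credentials (not verified here).

```python
# Sketch: building a ceilometer event query with stdlib only.
def build_event_query(**filters):
    """Turn keyword filters into ceilometer's field/op/value query format."""
    return [
        {"field": field, "op": "eq", "value": value, "type": "string"}
        for field, value in sorted(filters.items())
    ]

query = build_event_query(event_type="compute.instance.start")
print(query[0]["field"])  # event_type

# Hypothetical usage (assumes python-ceilometerclient and demo credentials):
# from ceilometerclient import client
# c = client.get_client(2, os_username="demo", os_password="...",
#                       os_tenant_name="demo",
#                       os_auth_url="http://controller:5000/v2.0")
# events = c.events.list(q=query)  # equivalent of `ceilometer event-list -q ...`
```

Note that authenticating as the demo user (rather than admin) is likely the relevant variable here, since the CLI shows the same project-scoping behavior.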
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][security] Obtaining the vulnerability:managed tag

2016-03-01 Thread Steven Dake (stdake)
Core reviewers,

Please review this document:
https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst

It describes how vulnerability management is handled at a high level for Kolla. 
 When we are ready, I want the kolla delivery repos vulnerabilities to be 
managed by the VMT team.  By doing this, we standardize with other OpenStack 
processes for handling security vulnerabilities.

The first step is to form a kolla-coresec team, and create a separate 
kolla-coresec tracker.  I have already created the tracker for kolla-coresec 
and the kolla-coresec team in launchpad:

https://launchpad.net/~kolla-coresec

https://launchpad.net/kolla-coresec

I have a history of security expertise, and the PTL needs to be on the team as 
an escalation point as described in the VMT tagging document above.  I also 
need 2-3 more volunteers to join the team.  You can read the requirements of 
the job duties in the vulnerability:managed tag.

If you're interested in joining the VMT team, please respond on this thread.  If 
there are more than 4 individuals interested in joining this team, I will form 
the team from the most active members based upon liberty + mitaka commits, 
reviews, and PDE spent.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Performance] Let's have next meeting on March 15th

2016-03-01 Thread Dina Belova
Folks,

due to the schedule, our next meeting would fall on March 8th, which is
International Women's Day (a non-working day in Russia, for instance). So
let's have our next meeting on March 15th.

Cheers,
Dina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [FFE] FF exception request for DPDK

2016-03-01 Thread Aleksandr Didenko
Hi,

I'd like to request a feature freeze exception for "Support for DPDK for
improved networking performance" feature [0].

Part of this feature is already merged [1]. We have the following patches
in work / on review:

https://review.openstack.org/281827
https://review.openstack.org/283044
https://review.openstack.org/286595
https://review.openstack.org/284285
https://review.openstack.org/284283
https://review.openstack.org/286611

And we need to write new patches for the following parts of this feature:
https://blueprints.launchpad.net/fuel/+spec/network-verification-dpdk
https://blueprints.launchpad.net/fuel/+spec/support-dpdk-bond

We need 3 weeks after FF to finish this feature.
Risk of not delivering it after 3 weeks is low.

Regards,
Alex

[0] https://blueprints.launchpad.net/fuel/+spec/support-dpdk
[1]
https://review.openstack.org/#/q/status:merged+branch:master+topic:bp/support-dpdk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [FFE] FF exception request for SR-IOV

2016-03-01 Thread Aleksandr Didenko
Hi,

I'd like to request a feature freeze exception for "Support for SR-IOV
for improved networking performance" feature [0].

Part of this feature is already merged [1]. We have the following patches
in work / on review:

https://review.openstack.org/280782
https://review.openstack.org/284603
https://review.openstack.org/286633

And we need to write a new patch for:
https://blueprints.launchpad.net/fuel/+spec/nailgun-should-serialize-sriov

We need 2 weeks at most after FF to accomplish this.
Risk of not delivering it after 2 weeks is very low.

Regards,
Alex

[0] https://blueprints.launchpad.net/fuel/+spec/support-sriov
[1]
https://review.openstack.org/#/q/status:merged+branch:master+topic:bp/support-sriov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilosca][Neutron][Monasca]

2016-03-01 Thread Rubab Syed
Hi all,

I'm planning to write a plugin for Monasca that would enable router's
traffic monitoring per subnet per tenant. For that purpose, I'm using
Neutron l3 metering extension [1] that allows you to filter traffic based
on CIDRs.

My concerns:

- Now, given that this extension can be used to create labels and rules
for a particular set of IPs, that ceilometer can meter bandwidth based on
this data, and that a monasca publisher for ceilometer is also available,
would that plugin be useful somehow? Where are we with ceilosca right now?

- Even though ceilometer allows metering bandwidth at the L3 level, we
still have to create explicit labels and rules for all subnets attached to
a router. In a production environment where there could be multiple routers
belonging to multiple tenants, isn't that a lot of work? I was wondering if
I could automate the label and rule creation process. My script would
automatically detect subnets and create rules per router interface. It
would help in ceilosca as well and could be used by the router plugin
(given the plugin is not redundant work). Comments?
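The automation idea can be sketched as follows. The rule-body builder runs with the stdlib only; the commented python-neutronclient calls are an assumption about how it would be wired up, not something tested here.

```python
# Sketch: generate one metering rule body per subnet CIDR attached to a router.
def metering_rules_for_subnets(label_id, cidrs, direction="ingress"):
    """Build neutron metering-label-rule request bodies for a list of CIDRs."""
    return [
        {"metering_label_rule": {
            "metering_label_id": label_id,
            "remote_ip_prefix": cidr,
            "direction": direction,
            "excluded": False,
        }}
        for cidr in cidrs
    ]

rules = metering_rules_for_subnets("label-1", ["10.0.0.0/24", "10.0.1.0/24"])
print(len(rules))  # 2

# Hypothetical wiring via python-neutronclient (not executed here):
# for router in neutron.list_routers()["routers"]:
#     label = neutron.create_metering_label(
#         {"metering_label": {"name": "auto-%s" % router["id"]}})
#     cidrs = ...  # subnet CIDRs discovered from the router's interfaces
#     for body in metering_rules_for_subnets(
#             label["metering_label"]["id"], cidrs):
#         neutron.create_metering_label_rule(body)
```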


[1] https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth


Thanks,
Rubab
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] API handler for serialized graph

2016-03-01 Thread Dmitriy Shulyak
Hello folks,

I am not sure that I will need an FFE, but in case I won't be able to land
this patch [0] tomorrow, I would like to ask for one in advance. I will need
the FFE for 2-3 days, depending mainly on fuel-web cores' availability.

Merging this patch has zero user impact, and I have also been using it for
several days to test other things (it works as expected), so it can be
considered risk-free.

0. https://review.openstack.org/#/c/284293/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Use RGW as a default object store instead of Swift

2016-03-01 Thread Igor Kalnitsky
Hey Konstantin,

I see that the provided patch [1] is for stable/8.0. Fuel 8.0 was recently
released, and we usually do not accept any features into stable branches.

Or did you mean the patch for the master branch?

Thanks,
Igor

[1]: https://review.openstack.org/#/c/286100/

On Tue, Mar 1, 2016 at 4:44 PM, Konstantin Danilov
 wrote:
> Colleagues,
> I would like to request a feature freeze exception for
> 'Use RGW as a default object store instead of Swift' [1].
>
> To merge the changes we need at most one week of time.
>
> [1]: https://review.openstack.org/#/c/286100/
>
> Thanks
> --
>
> Kostiantyn Danilov aka koder.ua
> Principal software engineer, Mirantis
>
> skype:koder.ua
> http://koder-ua.blogspot.com/
> http://mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Contributor Awards

2016-03-01 Thread David Medberry
On Tue, Mar 1, 2016 at 8:30 AM, Tom Fifield  wrote:

> Excellent, excellent.
>
> What's the best place to buy Raspberry Pis these days?
>

Raspberry Pi 3 is available to buy today from our partners element14 and
RS Components, and other resellers.
ref:
https://www.raspberrypi.org/blog/raspberry-pi-3-on-sale/

Pi 3 intro'd yesterday or day before.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [FFE] LCM readiness for all deployment tasks

2016-03-01 Thread Szymon Banka
Hi All,

I’d like to request a Feature Freeze Exception for "LCM readiness for all 
deployment tasks” [1] until Mar 11.

We need additional 1.5 week to finish and merge necessary changes which will 
fix tasks in Fuel to be idempotent. That will be foundation and will enable 
further development of LCM features.

More details about work being done: [2]

[1] https://blueprints.launchpad.net/fuel/+spec/granular-task-lcm-readiness
[2] https://review.openstack.org/#/q/topic:bp/granular-task-idempotency,n,z

--
Thanks,
Szymon Bańka
Mirantis
http://www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread Shamail


> On Mar 1, 2016, at 10:40 AM, Matt Jarvis  
> wrote:
> 
> +1 for custom T-shirts or hoodies. Actual medals/trophies would also be a 
> nice idea ;)
+1
> 
>> On 1 March 2016 at 15:30, Tom Fifield  wrote:
>> Excellent, excellent.
>> 
>> What's the best place to buy Raspberry Pis these days?
>> 
>>> On 22/02/16 21:09, Victoria Martínez de la Cruz wrote:
>>> Oh I missed this thread... really great initiative! It's time to
>>> recognize the effort of our fellow stackers :D
>>> 
>>> Raspi/Arduino kits or limited edition t-shirts are very cool goodies
>>> 
>>> Cheers,
>>> 
>>> V
>>> 
>>> 2016-02-22 0:21 GMT-03:00 Steve Martinelli:
>>> 
>>> limited edition (and hilarious) t-shirts are always fun :)
>>> 
>>> ++ on raspberry pis, those are always a hit.
>>> 
>>> stevemar
>>> 
>>> 
>>> From: Hugh Blemings >
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> >> >, OpenStack Operators
>>> >> >
>>> Date: 2016/02/21 09:54 PM
>>> Subject: Re: [openstack-dev] OpenStack Contributor Awards
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Hiya,
>>> 
>>> On 16/02/2016 21:43, Tom Fifield wrote:
>>>  > Hi all,
>>>  >
>>>  > I'd like to introduce a new round of community awards handed out
>>> by the
>>>  > Foundation, to be presented at the feedback session of the summit.
>>>  >
>>>  > Nothing flashy or starchy - the idea is that these are to be a little
>>>  > informal, quirky ... but still recognising the extremely valuable
>>> work
>>>  > that we all do to make OpenStack excel.
>>>  >
>>>  > [...]
>>>  >
>>>  > in the meantime, let's use this thread to discuss the fun part:
>>> goodies.
>>>  > What do you think we should lavish award winners with? Soft toys?
>>>  > Perpetual trophies? baseball caps ?
>>> 
>>> I can't help but think that given the scale of a typical OpenStack
>>> deployment and the desire for these awards to be a bit quirky, giving
>>> recipients something at the other end of the computing scale - an
>>> Arduino, or cluster of Raspberry Pis or similar could be kinda fun :)
>>> 
>>> Cheers,
>>> Hugh
>>> 
>>> 
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> DataCentred Limited registered in England and Wales no. 05611763
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-01 Thread Matt Jarvis
+1 for custom T-shirts or hoodies. Actual medals/trophies would also be a
nice idea ;)

On 1 March 2016 at 15:30, Tom Fifield  wrote:

> Excellent, excellent.
>
> What's the best place to buy Raspberry Pis these days?
>
> On 22/02/16 21:09, Victoria Martínez de la Cruz wrote:
>
>> Oh I missed this thread... really great initiative! It's time to
>> recognize the effort of our fellow stackers :D
>>
>> Raspi/Arduino kits or limited edition t-shirts are very cool goodies
>>
>> Cheers,
>>
>> V
>>
>> 2016-02-22 0:21 GMT-03:00 Steve Martinelli:
>>
>> limited edition (and hilarious) t-shirts are always fun :)
>>
>> ++ on raspberry pis, those are always a hit.
>>
>> stevemar
>>
>>
>> From: Hugh Blemings >
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> > >, OpenStack Operators
>> > >
>> Date: 2016/02/21 09:54 PM
>> Subject: Re: [openstack-dev] OpenStack Contributor Awards
>>
>>
>> 
>>
>>
>>
>> Hiya,
>>
>> On 16/02/2016 21:43, Tom Fifield wrote:
>>  > Hi all,
>>  >
>>  > I'd like to introduce a new round of community awards handed out
>> by the
>>  > Foundation, to be presented at the feedback session of the summit.
>>  >
>>  > Nothing flashy or starchy - the idea is that these are to be a
>> little
>>  > informal, quirky ... but still recognising the extremely valuable
>> work
>>  > that we all do to make OpenStack excel.
>>  >
>>  > [...]
>>  >
>>  > in the meantime, let's use this thread to discuss the fun part:
>> goodies.
>>  > What do you think we should lavish award winners with? Soft toys?
>>  > Perpetual trophies? baseball caps ?
>>
>> I can't help but think that given the scale of a typical OpenStack
>> deployment and the desire for these awards to be a bit quirky, giving
>> recipients something at the other end of the computing scale - an
>> Arduino, or cluster of Raspberry Pis or similar could be kinda fun :)
>>
>> Cheers,
>> Hugh
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

-- 
DataCentred Limited registered in England and Wales no. 05611763
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Contributor Awards

2016-03-01 Thread Tom Fifield

Excellent, excellent.

What's the best place to buy Raspberry Pis these days?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [FFE] Add multipath disks support

2016-03-01 Thread Szymon Banka
Hi All,

I'd like to request a feature freeze exception for "Add multipath disks
support" until Mar 9.
This feature allows using FC multipath disks on Fuel nodes – BP [1], spec [2].

Development is already done; the following patches still need to be reviewed
and merged:
- multipath support in fuel-agent [3]
- multipath support in railgun-agent [4]
[1] https://blueprints.launchpad.net/fuel/+spec/multipath-disks-support
[2] https://review.openstack.org/#/c/276745/
[3] https://review.openstack.org/#/c/285340/
[4] https://review.openstack.org/#/c/282552/

--
Thanks,
Szymon Bańka
Mirantis
http://www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #72

2016-03-01 Thread Emilien Macchi


On 02/29/2016 08:58 AM, Emilien Macchi wrote:
> Hi,
> 
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
> 
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
> 
> As usual, free free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160301
> 
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.
> 
> See you there,

Very quick meeting this week, here are the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-03-01-15.00.html

Thanks,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [all] Changing Microversion Headers

2016-03-01 Thread Chris Dent


tl;dr: The API-WG would like to change the format of the headers
used for Microversions to make them more future proof, now, before
too many projects are using them. See this review for more details if
you are interested and/or want to express an opinion or question.

https://review.openstack.org/#/c/243414/

Longer, read please:

Recently, when creating an api-wg guideline for header
non-proliferation [1], it was realized that the headers used for
microversions could cause some problems when used in realistic
scenarios. Therefore, after plenty of discussion, including with
existing deployed microversioned projects (Nova, Ironic and Manila),
we've decided to explore changing the basic header from either of:

X-OpenStack-Nova-API-Version: 2.11
OpenStack-Compute-API-Version: 2.11

to

OpenStack-API-Version: compute 2.11

This allows us to use one header name for multiple services and
avoids some of the problems described in [1].
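As an illustrative sketch only (the guideline was still under review at the
time; the helper below is hypothetical and not part of any OpenStack library),
a consumer of the proposed header might parse it like this:

```python
# Hypothetical parser for the proposed "OpenStack-API-Version" header,
# whose value is "<service-type> <major>.<minor>", e.g. "compute 2.11".
def parse_api_version(value):
    """Split 'compute 2.11' into ('compute', (2, 11))."""
    service, _, version = value.strip().partition(" ")
    major, minor = version.split(".")
    return service, (int(major), int(minor))

print(parse_api_version("compute 2.11"))  # ('compute', (2, 11))
```

Keeping the service type inside the header value, rather than in the header
name, is what lets one header name serve every project.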

This has become a somewhat more urgent issue than it had been because
Cinder has implemented microversions (woot!), but using a style of
header that is different from the old style, pending guidelines and this
newest idea. We need to make an agreement now before the end of the
cycle to get everybody working in the same direction.

If you're working on APIs and are using or planning to use
microversions, please have a look and leave your comments or
concerns. It's probably best to read [1] as well, for context.

Thanks very much for your input.

[1] https://review.openstack.org/#/c/280381/

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Anita Kuno
On 03/01/2016 05:08 AM, Eoghan Glynn wrote:
> 
>>> Current thinking would be to give preferential rates to access the main
>>> summit to people who are present to other events (like this new
>>> separated contributors-oriented event, or Ops midcycle(s)). That would
>>> allow for a wider definition of "active community member" and reduce
>>> gaming.
>>>
>>
>> I think reducing gaming is important. It is valuable to include those
>> folks who wish to make a contribution to OpenStack, I have confidence
>> the next iteration of entry structure will try to more accurately
>> identify those folks who bring value to OpenStack.
>
> There have been a couple references to "gaming" on this thread, which
> seem to imply a certain degree of dishonesty, in the sense of bending
> the rules.
>
> Can anyone who has used the phrase clarify:
>
>  (a) what exactly they mean by gaming in this context
>
> and:
>
>  (b) why they think this is a clear & present problem demanding a
>  solution?
>
> For the record, landing a small number of patches per cycle and thus
> earning an ATC summit pass as a result is not, IMO at least, gaming.
>
> Instead, it's called *contributing*.
>
> (on a small scale, but contributing none-the-less).
>
> Cheers,
> Eoghan

 Sure I can tell you what I mean.

 In Vancouver I happened to be sitting behind someone who stated "I'm
 just here for the buzz." Which is lovely for that person. The problem is
 that the buzz that person is there for is partially created by me and I
 create it and mean to offer it to people who will return it in kind, not
 just soak it up and keep it to themselves.

 Now I have no way of knowing who this person is and how they arrived at
 the event. But the numbers for people offering one patch to OpenStack
 (the bar for a summit pass) is significantly higher than the curve of
 people offering two, three or four patches to OpenStack (patches that
 are accepted and merged). So some folks are doing the minimum to get a
 summit pass rather than being part of the cohort that has their first
 patch to OpenStack as a means of offering their second patch to OpenStack.

 I consider it an honour and a privilege that I get to work with so many
 wonderful people everyday who are dedicated to making open source clouds
 available for whoever would wish to have clouds. I'm more than a little
 tired of having my energy drained by folks who enjoy feeding off of it
 while making no effort to return beneficial energy in kind.

 So when I use the phrase gaming, this is the dynamic to which I refer.
>>>
>>> Thanks for the response.
>>>
>>> I don't know if drive-by attendance at design summit sessions by under-
>>> qualified or uninformed summiteers is encouraged by the availability of
>>> ATC passes. But as long as those individuals aren't actively derailing
>>> the conversation in sessions, I wouldn't consider their buzz soakage as
>>> a major issue TBH.
>>>
>>> In any case, I would say that just meeting the bar for an ATC summit pass
>>> (by landing the required number of patches) is not bending the rules or
>>> misrepresenting in any way.
>>>
>>> Even if specifically motivated by the ATC pass (as opposed to scratching
>>> a very specific itch) it's still simply an honest and rational response
>>> to an incentive offered by the foundation.
>>>
>>> One could argue whether the incentive is mis-designed, but that doesn't
>>> IMO make a gamer of any contributor who simply meets the required threshold
>>> of activity.
>>>
>>> Cheers,
>>> Eoghan
>>>
>>
>> No I'm not saying that. I'm saying that the larger issue is one of
>> motivation.
>>
>> Folks who want to help (even if they don't know how yet) carry an energy
>> of intention with them which is nourishing to be around. Folks who are
>> trying to get in the door and not be expected to help and hope noone
>> notices carry an entirely different kind of energy with them. It is a
>> non-nourishing energy.
> 
> Personally I don't buy into that notion of the wrong sort of people
> sneaking in the door of summit, keeping their heads down and hoping
> no-one notices.
> 
> We have an open community that conducts its business in public. Not
> wanting folks with the wrong sort of energy to be around when that
> business is being done, runs counter to our open ethos IMO.

That's fine. I guess we have a differing definition of what open means.

> 
> There are a whole slew of folks who work fulltime on OpenStack but
> contribute mainly in the background: operating clouds, managing
> engineering teams, supporting customers, designing product roadmaps,
> training new users etc.

I think you have completely misunderstood my point if you are
interpreting my words to mean I am dishonouring users.

> TBH we should be flattered that the 

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Adrian Otto
This issue involves what I refer to as "OS religion": operators have this WRT
bay nodes, but users don't. I suppose this is a key reason why OpenStack does
not have any concept of supported OS images today. While I can see the value in
offering various choices in Magnum, maintaining a reference implementation of
an OS image has shown that it requires non-trivial resources, and expanding
that to several will certainly require more. The question really comes down to
the importance of this particular choice as a development team focus. Is it
more important than a compelling network or storage integration with OpenStack
services? I doubt it.

We all agree there should be a way to use an alternate OS image with Magnum. 
That has been our intent from the start. We are not discussing removing that 
option. However, rather than having multiple OS images the Magnum team 
maintains, maybe we could clearly articulate how to plug in to Magnum, and set 
up a third party CI, and allow various OS vendors to participate to make their 
options work with those requirements. If this approach works, then it may even 
reduce the need for a reference implementation at all if multiple upstream 
options result.

--
Adrian

On Mar 1, 2016, at 12:28 AM, Guz Egor wrote:

Adrian,

I disagree, host OS is very important for operators because of integration with 
all internal tools/repos/etc.

I think it makes sense to limit OS support in Magnum's main source, but I am
not sure that Fedora Atomic is the right choice: first of all, there is no
documentation about it, and I don't think it's used/tested a lot by the
Docker/Kub/Mesos community.
It makes sense to go with Ubuntu (I believe it's still the most adopted
platform in all three COEs and OpenStack deployments) and CoreOS (which is
highly adopted/tested in the Kub community, and Mesosphere DCOS uses it as
well).

We can implement CoreOS support as driver and users can use it as reference 
implementation.

---
Egor


From: Adrian Otto
To: OpenStack Development Mailing List (not for usage questions)
Sent: Monday, February 29, 2016 10:36 AM
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Consider this: Which OS runs on the bay nodes is not important to end users. 
What matters to users is the environments their containers execute in, which 
has only one thing in common with the bay node OS: the kernel. The linux 
syscall interface is stable enough that the various linux distributions can all 
run concurrently in neighboring containers sharing same kernel. There is really 
no material reason why the bay OS choice must match what distro the container 
is based on. Although I’m persuaded by Hongbin’s concern to mitigate risk of 
future changes WRT whatever OS distro is the prevailing one for bay nodes, 
there are a few items of concern about duality I’d like to zero in on:

1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic have been much stronger.

2) Properly testing multiple bay node OS distros (would) significantly increase 
the run time and complexity of our functional tests.

3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.

If we proceed with just one supported disto for bay nodes, and offer 
extensibility points to allow alternates to be used in place of it, we should 
be able to address the risk concern of the chosen distro by selecting an 
alternate when that change is needed, by using those extensibility points. 
These include the ability to specify your own bay image, and the ability to use 
your own associated Heat template.

I see value in risk mitigation, it may make sense to simplify in the short term 
and address that need when it becomes necessary. My point of view might be 
different if we had contributors willing and ready to address the variety of 
drawbacks that accompany the strategy of supporting multiple bay node OS 
choices. In absence of such a community interest, my preference is to simplify 
to increase our velocity. This seems to me to be a relatively easy way to 
reduce complexity around heat template versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to
have Magnum support a single OS distro (Atomic). I disagreed. I think we should
bring the discussion here to get a broader set of inputs.

Corey O'Brien
> From the midcycle, we decided we weren't going to continue to support 2
> different versions of the k8s template.

[openstack-dev] [Fuel][FFE] Use RGW as a default object store instead of Swift

2016-03-01 Thread Konstantin Danilov
Colleagues,
I would like to request a feature freeze exception for
'Use RGW as a default object store instead of Swift' [1].

Merging the changes will take at most one week.

[1]: https://review.openstack.org/#/c/286100/


Thanks
-- 

Kostiantyn Danilov aka koder.ua
Principal software engineer, Mirantis

skype:koder.ua
http://koder-ua.blogspot.com/
http://mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stack.sh keeps failing with RTNETLINK answers: Network is unreachable

2016-03-01 Thread Paul Carlton

Failing with different error now

see attached

On 29/02/16 18:42, Brian Haley wrote:

On 02/26/2016 03:48 AM, Paul Carlton wrote:

Sean

Don't think unstack failed; when I manually create the device (sudo ovs-vsctl
--no-wait -- --may-exist add-br br-ex), unstack.sh removes it.

sudo ip route show
default via 172.18.20.1 dev eth0
169.254.169.254 via 172.18.20.2 dev eth0
172.18.20.0/24 dev eth0  proto kernel  scope link  src 172.18.20.23
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1


Matt

config attached


So getting back to your original error:

$ sudo ip route replace 10.1.0.0/20 via 192.168.100.200
RTNETLINK answers: Network is unreachable

Looking at your local.conf:

FIXED_RANGE=10.1.0.0/20
FLOATING_RANGE=192.168.100.0/24
Q_FLOATING_ALLOCATION_POOL=start=192.168.100.200,end=192.168.100.254
PUBLIC_NETWORK_GATEWAY=192.168.100.1
PUBLIC_BRIDGE=br-ex

The 'ip route replace...' above is adding a route for the fixed IP 
range you've specified for your private subnet via what should be the 
neutron router for the network.  That router was allocated .200 when 
it attached to the external subnet, which is fine.


In order for this to work, the IP $PUBLIC_NETWORK_GATEWAY 
(192.168.100.1) needs to have been configured on br-ex 
($PUBLIC_BRIDGE) beforehand, since otherwise the network will be 
unreachable.  Does 'ip a s dev br-ex' show that IP?
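As a diagnostic/configuration fragment sketching the check Brian describes
(interface name and addresses taken from the local.conf quoted above;
commands need root and are illustrative only, not a verified recipe):

```shell
# Check whether the public gateway address is on the bridge:
ip addr show dev br-ex

# If 192.168.100.1 is missing, assign it and bring the bridge up:
sudo ip addr add 192.168.100.1/24 dev br-ex
sudo ip link set br-ex up

# The route add that previously failed should then have a reachable next hop:
sudo ip route replace 10.1.0.0/20 via 192.168.100.200
```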


It could be that some setting in your local.conf is causing some 
mis-configuration, pasting 'neutron port-list' and 'neutron 
subnet-list' might help track down what is wrong.


-Brian


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".

2016-03-01 14:03:15.449 | + sudo ip route replace 10.1.0.0/20 via 
192.168.100.200
2016-03-01 14:03:15.460 | + _neutron_set_router_id
2016-03-01 14:03:15.460 | + [[ True == \F\a\l\s\e ]]
2016-03-01 14:03:15.460 | + [[ 4+6 =~ .*6 ]]
2016-03-01 14:03:15.460 | + _neutron_configure_router_v6
2016-03-01 14:03:15.460 | + neutron router-interface-add 
13dca06b-958a-4990-b715-2efa9e794b07 b59083bf-70b8-4f3a-9946-79c249c392a7
2016-03-01 14:03:17.631 | Added interface 70654525-e6e1-4d66-ba0b-e626f1794d79 
to router 13dca06b-958a-4990-b715-2efa9e794b07.
2016-03-01 14:03:17.671 | ++ _neutron_create_public_subnet_v6 
2ae743d8-400a-4833-b556-acdee0678ce8
2016-03-01 14:03:17.672 | ++ local 'subnet_params=--ip_version 6 '
2016-03-01 14:03:17.672 | ++ subnet_params+='--gateway 2001:db8::2 '
2016-03-01 14:03:17.672 | ++ subnet_params+='--name ipv6-public-subnet '
2016-03-01 14:03:17.672 | ++ 
subnet_params+='2ae743d8-400a-4833-b556-acdee0678ce8 2001:db8::/64 '
2016-03-01 14:03:17.672 | ++ subnet_params+='-- --enable_dhcp=False'
2016-03-01 14:03:17.673 | +++ neutron subnet-create --ip_version 6 --gateway 
2001:db8::2 --name ipv6-public-subnet 2ae743d8-400a-4833-b556-acdee0678ce8 
2001:db8::/64 -- --enable_dhcp=False
2016-03-01 14:03:17.674 | +++ grep -e gateway_ip -e ' id '
2016-03-01 14:03:20.366 | ++ local 'ipv6_id_and_ext_gw_ip=| gateway_ip| 
2001:db8::2  |
2016-03-01 14:03:20.366 | | id| 
6dd3abe8-d296-4fe7-861b-5bd277615f0b |'
2016-03-01 14:03:20.367 | ++ die_if_not_set 1288 ipv6_id_and_ext_gw_ip 'Failure 
creating an IPv6 public subnet'
2016-03-01 14:03:20.367 | ++ local exitcode=0
2016-03-01 14:03:20.371 | ++ echo '|' gateway_ip '|' 2001:db8::2 '|' '|' id '|' 
6dd3abe8-d296-4fe7-861b-5bd277615f0b '|'
2016-03-01 14:03:20.371 | + local 'ipv6_id_and_ext_gw_ip=| gateway_ip | 
2001:db8::2 | | id | 6dd3abe8-d296-4fe7-861b-5bd277615f0b |'
2016-03-01 14:03:20.372 | ++ echo '|' gateway_ip '|' 2001:db8::2 '|' '|' id '|' 
6dd3abe8-d296-4fe7-861b-5bd277615f0b '|'
2016-03-01 14:03:20.373 | ++ get_field 2
2016-03-01 14:03:20.373 | ++ local data field
2016-03-01 14:03:20.373 | ++ read data
2016-03-01 14:03:20.374 | ++ '[' 2 -lt 0 ']'
2016-03-01 14:03:20.374 | ++ field='$3'
2016-03-01 14:03:20.374 | ++ echo '| gateway_ip | 2001:db8::2 | | id | 
6dd3abe8-d296-4fe7-861b-5bd277615f0b |'
2016-03-01 14:03:20.374 | ++ awk '-F[ \t]*\\|[ \t]*' '{print 
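The trace above ends mid-way through an awk call that devstack uses to pull a
column out of the '|'-delimited neutron table output. A minimal sketch of such
a helper (hypothetical and simplified from what the trace shows — devstack's
real get_field also reads stdin line by line and handles other indices):

```shell
# Sketch of a table-column extractor like the one in the trace above.
# Splits on '|' padded with whitespace; field N maps to awk's $(N+1)
# because the line's leading '|' produces an empty first field.
get_field() {
    local field=$1
    awk -F '[ \t]*\\|[ \t]*' -v n=$((field + 1)) '{print $n}'
}

echo '| gateway_ip | 2001:db8::2 |' | get_field 2   # prints 2001:db8::2
```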

[openstack-dev] [Fuel] [FFE] Use packetary for building ISO

2016-03-01 Thread Vladimir Kozhukalov
Dear colleagues,

I'd like to request a feature freeze exception for "Use packetary for
building ISO". BP [0]

There is PR in packetary itself [1]. And one pull request in fuel-main [2].

For the ISO build process this feature means we will use the same command
'make iso' but with another set of environment variables.

This change is going to make the ISO build process fully data-driven (YAML).
It will allow us to build the ISO with various sets of repositories (upstream
fuel repo, custom repos, etc.). It is also going to make the ISO build process
even faster (about 5 minutes).

The risk of affecting other teams is medium. The merge plan is as follows:

1) We will create a parallel ISO build job with new set of variables and
make it use fuel-main request (not merging it)
2) We will run this job for couple of days and run smoke and BVT tests
3) Once everything is ready and working we will merge fuel-main patch and
declare this new job as our product ISO job
4) We will create custom jobs with this new set of variables

[0] https://blueprints.launchpad.net/fuel/+spec/use-packetary-in-fuel
[1] https://review.openstack.org/#/c/286576
[2] https://review.openstack.org/#/c/283976


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Nominating Eva Balycheva for Zaqar core

2016-03-01 Thread Ryan Brown

On 02/29/2016 07:47 PM, Fei Long Wang wrote:

Hi all,

I would like to propose adding Eva Balycheva(Eva-i) for the Zaqar core
team. Eva has been an awesome contributor since joining the Zaqar team.
She is currently the most active non-core reviewer on Zaqar projects for
the last 90 days[1]. During this time, she's been contributing to many
different areas:

1. Websocket binary support
2. Zaqar Configuration Reference docs
3. Zaqar client
4. Zaqar benchmarking

Eva has a good eye for review and has contributed a lot of wonderful
patches [2]. I think she would make an excellent addition to the team. If
no one objects, I'll proceed and add her in a week from now.


+1, cheers!

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solar] Weekly update

2016-03-01 Thread jnowak
Hello,

Fuel2Solar (f2s) is an experimental project which allows using Solar
with Fuel. From fuel-library tasks it generates Solar resource
definitions; using the deployment info and deployment graph from
nailgun, it creates and connects Solar resources.
Using this data, Solar is able to deploy OpenStack without Astute.

- Implemented ability to fetch nailgun graph and build solar resources from it
- Still designing how to work with non puppet tasks
- Decided that Solar will NOT be shipped as part of Fuel ISO

Proposed UX for using Fuel2Solar in Fuel 9.0 is:
- user will need to install 2 rpm packages: solar and fuel2solar from 
mos-master repository
- user will need to configure cluster in fuel-web, and then switch to solar CLI 
before clicking "deploy" button

Besides activity on Fuel2Solar https://github.com/Mirantis/f2s we also:
- cleaned up requirements (moved to more popular/smaller packages) [0] - [2]
- fixed problems with zerorpc + pyzmq > 13.0.2 [3], [4]
- prepared working rpm package for Centos 7 in mos-master repository (thanks to 
alexz and asilenkov)
- started packer + docker activity [5]
- started preparing centos7 as our default dev environment [6]
- fixed some minor bugs


[0] - https://review.openstack.org/283093
[1] - https://review.openstack.org/284096
[2] - https://review.openstack.org/285512
[3] - https://review.openstack.org/284262
[4] - https://review.openstack.org/284123
[5] - https://review.openstack.org/281767
[6] - https://review.openstack.org/284782
[7] - https://github.com/Mirantis/f2s


--
Warm regards
Jędrzej Nowak

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Steve Gordon
- Original Message -
> From: "Kai Qiang Wu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: "Josh Berkus" 
> 
> We found an issue with Atomic host running a docker volume plugin; neither
> the Atomic nor the docker volume plugin side is sure yet what the root
> cause is.
> 
> here is the link
> https://github.com/docker/docker/issues/18005#issuecomment-190215862

Thanks for highlighting this PR, I'll add it to my list.

> Also, I did not find that the Atomic image updates quickly (while k8s and
> docker both release quickly), which can leave new features unavailable in
> our development; I think Atomic has a gap there.

This is definitely likely to continue to be a real issue particularly w.r.t. 
docker itself - containerization of k8s, flannel, and etcd will alleviate at 
least some of the pain though as does the fact that updates are now in fact 
being pushed out every couple of weeks. As it stands the official Fedora Atomic 
images appear to actually contain equivalent or newer components than the 
custom builds at https://fedorapeople.org/groups/magnum/ (?), e.g.:

Fedora-Cloud-Atomic-23-20160223.x86_64.qcow2:

docker-1.9.1
flannel-0.5.4
kubernetes-1.1.0
etcd-2.2.1

fedora-21-7.qcow2:

docker-1.9.1
flannel-0.5.0
kubernetes-1.1.0
etcd-2.0.13

I digress though; as I said in the follow-up, either way it doesn't seem to me
like only having support for one image would be a win for users. It does make
sense, though, to expect more of the work to support each image to come from
the folks interested in maintaining that support rather than being spread
across the entire magnum team.

Thanks,

Steve

> 
> Kai Qiang Wu (吴开强  Kennan)
> IBM China System and Technology Lab, Beijing
> 
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
>  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
> 100193
> 
> Follow your heart. You are miracle!
> 
> 
> 
> From: Steve Gordon 
> To:   Guz Egor , "OpenStack Development Mailing
> List (not for usage questions)"
> 
> Cc:   Josh Berkus 
> Date: 01/03/2016 08:19 pm
> Subject:  Re: [openstack-dev] [magnum] Discussion of  supporting
> single/multiple   OS distro
> 
> 
> 
> - Original Message -
> > From: "Guz Egor" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> 
> >
> > Adrian,
> > I disagree, host OS is very important for operators because of
> integration
> > with all internal tools/repos/etc.
> > I think it make sense to limit OS support in Magnum main source. But not
> sure
> > that Fedora Atomic is right choice,first of all there is no documentation
> > about it and I don't think it's used/tested a lot by Docker/Kub/Mesos
> > community.
> 
> Project Atomic documentation for the most part lives here:
> 
> http://www.projectatomic.io/docs/
> 
> To help us improve it, it would be useful to know what you think is
> missing. E.g. I saw recently in the IRC channel it was discussed that there
> is no documentation on (re)building the image but this is the first hit in
> a Google search for same and it seems to largely match what has been copied
> into Magnum's docs for same:
> 
> 
> http://www.projectatomic.io/blog/2014/08/build-your-own-atomic-centos-or-fedora/
> 
> 
> I have no doubt that there are areas where the documentation is lacking,
> but it's difficult to resolve a claim that there is no documentation at
> all. I recently kicked off a thread over on the atomic list to try and
> relay some of the concerns that were raised on this list and in the IRC
> channel recently, it would be great if Magnum folks could chime in with
> more specifics:
> 
> 
> https://lists.projectatomic.io/projectatomic-archives/atomic/2016-February/thread.html#9
> 
> 
> Separately I had asked about containerization of kubernetes/etcd/flannel
> which remains outstanding:
> 
> 
> https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/XICO4NJCTPI43AWG332EIM2HNFYPZ6ON/
> 
> 
> Fedora Atomic builds do seem to be hitting their planned two weekly update
> cadence now though which may alleviate this concern at least somewhat in
> the interim:
> 
> 
> https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/CW5BQS3ODAVYJGAJGAZ6UA3XQMKEISVJ/
> 
> https://fedorahosted.org/cloud/ticket/139
> 
> Thanks,
> 
> Steve
> 
> > It make sense to go with Ubuntu (I believe it's still most adopted
> > platform in all three COEs and OpenStack deployments)     and CoreOS (is
> > highly 

[openstack-dev] [Fuel] [Plugins] Fuel Plugin Builder 4.0.0 released

2016-03-01 Thread Igor Kalnitsky
Hey Fuelers,

I want to announce that FPB (fuel plugin builder) v4.0.0 has been
released on PyPI [1].

Version 4.0.0 includes, among other changes:

* A new flag `is_hotpluggable` in `metadata.yaml` that allows a plugin to
be installed and used on previously deployed environments.
* A plugin can now specify its settings group using the `group` field in
the metadata of the `environment_config.yaml` file.
* A new group, `equipment`, added to the groups list in `metadata.yaml`.
* A new `components.yaml` file that allows plugins to declare new components.
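For illustration, a hypothetical `metadata.yaml` fragment using the new fields
might look like the following. Only `is_hotpluggable` and the `equipment` group
come from the changes listed above; every other key name and value here is made
up for the example:

```yaml
# Hypothetical plugin metadata -- illustrative values only
name: example_plugin
title: Example Plugin
version: '1.0.0'
package_version: '4.0.0'
is_hotpluggable: true   # new in FPB 4.0.0: usable on already-deployed environments
groups:
  - equipment           # new group added in FPB 4.0.0
```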

Bugfixes:

* Fix for the missing strategy parameter in V3 and V4 deployment tasks, LP1522785 [2].

Thanks,
Igor

[1] https://pypi.python.org/pypi/fuel-plugin-builder/4.0.0
[2] https://bugs.launchpad.net/fuel/+bug/1522785

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] provisioning and deployment data pipeline

2016-03-01 Thread Sylwester Brzeczkowski
I'm glad to inform you that an FFE is no longer needed here: the pipeline
implementation has finally been merged. I have also started working on fixing
the randomly failing test [0].

Regards

[0] https://review.openstack.org/#/c/286583/

On Tue, Mar 1, 2016 at 12:38 PM, Sylwester Brzeczkowski <
sbrzeczkow...@mirantis.com> wrote:

> Hi
>
> I want to request a feature freeze exception for "provisioning and
> deployment data pipeline" [0].
>
> There is one patch with the implementation [1] which is ready; it already
> had two '+2's and a '+1' workflow yesterday, but gate jobs are still
> failing because of some random test failures.
>
> Merging it is a matter of either investigating these random failures or
> repeatedly 'reverifying' it on Gerrit, which I'm now doing simultaneously.
>
> Regards
>
> [0] https://blueprints.launchpad.net/fuel/+spec/data-pipeline
> [1] https://review.openstack.org/#/c/272977/
>
> --
> *Sylwester Brzeczkowski*
> Junior Python Software Engineer
> Product Development-Core : Product Engineering
>



-- 
*Sylwester Brzeczkowski*
Junior Python Software Engineer
Product Development-Core : Product Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Kai Qiang Wu
I tend to agree that multiple OS support is OK (we can limit it to popular
ones first, like Red Hat and Ubuntu). But we should not try to cover every
OS; that would be a heavy maintenance burden, and extra niche requirements
should be maintained by third parties if possible (through drivers).




Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steve Gordon 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: Martin Andre , Josh Berkus

Date:   01/03/2016 08:25 pm
Subject:    Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro



- Original Message -
> From: "Steve Gordon" 
> To: "Guz Egor" , "OpenStack Development Mailing List
(not for usage questions)"
> 
>
> - Original Message -
> > From: "Guz Egor" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> >
> > Adrian,
> > I disagree, host OS is very important for operators because of integration
> > with all internal tools/repos/etc.
> > I think it make sense to limit OS support in Magnum main source. But not sure
> > that Fedora Atomic is right choice, first of all there is no documentation
> > about it and I don't think it's used/tested a lot by Docker/Kub/Mesos
> > community.
>
> Project Atomic documentation for the most part lives here:
>
> http://www.projectatomic.io/docs/
>
> To help us improve it, it would be useful to know what you think is missing.
> E.g. I saw recently in the IRC channel it was discussed that there is no
> documentation on (re)building the image but this is the first hit in a
> Google search for same and it seems to largely match what has been copied
> into Magnum's docs for same:
>
> http://www.projectatomic.io/blog/2014/08/build-your-own-atomic-centos-or-fedora/
>
> I have no doubt that there are areas where the documentation is lacking, but
> it's difficult to resolve a claim that there is no documentation at all. I
> recently kicked off a thread over on the atomic list to try and relay some
> of the concerns that were raised on this list and in the IRC channel
> recently, it would be great if Magnum folks could chime in with more
> specifics:
>
> https://lists.projectatomic.io/projectatomic-archives/atomic/2016-February/thread.html#9
>
> Separately I had asked about containerization of kubernetes/etcd/flannel
> which remains outstanding:
>
> https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/XICO4NJCTPI43AWG332EIM2HNFYPZ6ON/
>
> Fedora Atomic builds do seem to be hitting their planned two weekly update
> cadence now though which may alleviate this concern at least somewhat in the
> interim:
>
> https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/CW5BQS3ODAVYJGAJGAZ6UA3XQMKEISVJ/
> https://fedorahosted.org/cloud/ticket/139
>
> Thanks,
>
> Steve

I meant to add, I don't believe choosing a single operating system image to
support - regardless of which it is - is the right move for users and
largely agree with what Ton Ngo put forward in his most recent post in the
thread. I'm simply highlighting that there are folks willing/able to work
on improving things from the Atomic side and we are endeavoring to provide
them actionable feedback from the Magnum community to do so.

Thanks,

Steve

> > It make sense to go with Ubuntu (I believe it's still most adopted
> > platform in all three COEs and OpenStack deployments) and CoreOS (is
> > highly adopted/tested in Kub community and Mesosphere DCOS uses it as well).
> > We can implement CoreOS support as driver and users can use it as
> > reference implementation.
>
>
> > --- Egor
> >   From: Adrian Otto 
> >  To: OpenStack Development Mailing List (not for usage questions)
> >  
> >  Sent: Monday, February 29, 2016 10:36 AM
> >  Subject: Re: [openstack-dev] [magnum] Discussion of supporting
> >  single/multiple OS distro
> >
> > Consider this: Which OS runs on the bay nodes is not important to end users.
> > What matters to users is the environments their containers execute in, which
> > has only one thing in common with the bay node OS: the kernel. The linux
> > syscall interface is stable enough that the various linux 

Re: [openstack-dev] [Nova] notification subteam meeting

2016-03-01 Thread Balázs Gibizer
Hi, 

The next short meeting of the nova notification subteam will happen on
Tuesday 2016-03-01 at _17:00_ UTC [1] on #openstack-meeting-alt on freenode.

Agenda:
- Status of the outstanding specs and code reviews
- AOB
See you there.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160301T17  
[2] https://wiki.openstack.org/wiki/Meetings/NovaNotification


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Kai Qiang Wu
We found an issue when running a docker volume plugin on an Atomic host;
neither the Atomic nor the docker volume plugin side is sure yet what the
root cause is.

Here is the link:
https://github.com/docker/docker/issues/18005#issuecomment-190215862


Also, Atomic images do not seem to be updated quickly (while k8s and docker
both release quickly), so we can lack new features needed in our
development; I think Atomic has a gap there.




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steve Gordon 
To: Guz Egor , "OpenStack Development Mailing
List (not for usage questions)"

Cc: Josh Berkus 
Date:   01/03/2016 08:19 pm
Subject:    Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro



- Original Message -
> From: "Guz Egor" 
> To: "OpenStack Development Mailing List (not for usage questions)"

>
> Adrian,
> I disagree, host OS is very important for operators because of integration
> with all internal tools/repos/etc.
> I think it make sense to limit OS support in Magnum main source. But not sure
> that Fedora Atomic is right choice, first of all there is no documentation
> about it and I don't think it's used/tested a lot by Docker/Kub/Mesos
> community.

Project Atomic documentation for the most part lives here:

http://www.projectatomic.io/docs/

To help us improve it, it would be useful to know what you think is
missing. E.g. I saw recently in the IRC channel it was discussed that there
is no documentation on (re)building the image but this is the first hit in
a Google search for same and it seems to largely match what has been copied
into Magnum's docs for same:


http://www.projectatomic.io/blog/2014/08/build-your-own-atomic-centos-or-fedora/


I have no doubt that there are areas where the documentation is lacking,
but it's difficult to resolve a claim that there is no documentation at
all. I recently kicked off a thread over on the atomic list to try and
relay some of the concerns that were raised on this list and in the IRC
channel recently, it would be great if Magnum folks could chime in with
more specifics:


https://lists.projectatomic.io/projectatomic-archives/atomic/2016-February/thread.html#9


Separately I had asked about containerization of kubernetes/etcd/flannel
which remains outstanding:


https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/XICO4NJCTPI43AWG332EIM2HNFYPZ6ON/


Fedora Atomic builds do seem to be hitting their planned two weekly update
cadence now though which may alleviate this concern at least somewhat in
the interim:


https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/CW5BQS3ODAVYJGAJGAZ6UA3XQMKEISVJ/

https://fedorahosted.org/cloud/ticket/139

Thanks,

Steve

> It make sense to go with Ubuntu (I believe it's still most adopted
> platform in all three COEs and OpenStack deployments) and CoreOS (is
> highly adopted/tested in Kub community and Mesosphere DCOS uses it as well).
> We can implement CoreOS support as driver and users can use it as
> reference implementation.


> --- Egor
>   From: Adrian Otto 
>  To: OpenStack Development Mailing List (not for usage questions)
>  
>  Sent: Monday, February 29, 2016 10:36 AM
>  Subject: Re: [openstack-dev] [magnum] Discussion of supporting
>  single/multiple OS distro
>
> Consider this: Which OS runs on the bay nodes is not important to end users.
> What matters to users is the environments their containers execute in, which
> has only one thing in common with the bay node OS: the kernel. The linux
> syscall interface is stable enough that the various linux distributions can
> all run concurrently in neighboring containers sharing same kernel. There is
> really no material reason why the bay OS choice must match what distro the
> container is based on. Although I’m persuaded by Hongbin’s concern to
> mitigate risk of future changes WRT whatever OS distro is the prevailing one
> for bay nodes, there are a few items of concern about duality I’d like to
> zero in on:
> 1) Participation from Magnum contributors to support the CoreOS specific
> template features has been weak in recent months. By comparison,
> participation relating to Fedora/Atomic have been much stronger.
> 2) Properly testing multiple bay node OS 

Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-03-01 Thread D'Angelo, Scott
Matt, changing Nova to store the connector info at volume attach time does 
help. Where the gap will remain is after Nova evacuation or live migration, 
when that info will need to be updated in Cinder. We need to change the Cinder 
API to have some mechanism to allow this.
We'd also like Cinder to store the appropriate info to allow a force-detach for 
the cases where Nova cannot make the call to Cinder.
Ongoing work for this and related issues is tracked and discussed here:
https://etherpad.openstack.org/p/cinder-nova-api-changes

Scott D'Angelo (scottda)

From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
Sent: Monday, February 29, 2016 7:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching 
and force detach

On 2/22/2016 4:08 PM, Walter A. Boring IV wrote:
> On 02/22/2016 11:24 AM, John Garbutt wrote:
>> Hi,
>>
>> Just came up on IRC, when nova-compute gets killed half way through a
>> volume attach (i.e. no graceful shutdown), things get stuck in a bad
>> state, like volumes stuck in the attaching state.
>>
>> This looks like a new addition to this conversation:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
>>
>> And brings us back to this discussion:
>> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
>>
>> What if we move our attention towards automatically recovering from
>> the above issue? I am wondering if we can look at making our usually
>> recovery code deal with the above situation:
>> https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934
>>
>>
>> Did we get the Cinder APIs in place that enable the force-detach? I
>> think we did and it was this one?
>> https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api
>>
>>
>> I think diablo_rojo might be able to help dig for any bugs we have
>> related to this. I just wanted to get this idea out there before I
>> head out.
>>
>> Thanks,
>> John
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> .
>>
> The problem is a little more complicated.
>
> In order for cinder backends to be able to do a force detach correctly,
> the Cinder driver needs to have the correct 'connector' dictionary
> passed in to terminate_connection.  That connector dictionary is the
> collection of initiator side information which is gleaned here:
> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connector.py#L99-L144
>
>
> The plan was to save that connector information in the Cinder
> volume_attachment table.  When a force detach is called, Cinder has the
> existing connector saved if Nova doesn't have it.  The problem was live
> migration.  When you migrate to the destination n-cpu host, the
> connector that Cinder had is now out of date.  There is no API in Cinder
> today to allow updating an existing attachment.
>
> So, the plan at the Mitaka summit was to add this new API, but it
> required microversions to land, which we still don't have in Cinder's
> API today.
>
>
> Walt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Regarding storing off the initial connector information from the attach,
does this [1] help bridge the gap? That adds the connector dict to the
connection_info dict that is serialized and stored in the nova
block_device_mappings table, and then in that patch is used to pass it
to terminate_connection in the case that the host has changed.

[1] https://review.openstack.org/#/c/266095/
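As a rough illustration of the idea Matt describes (the connector dict saved
inside the stored connection_info, then preferred at detach time when the host
has changed), the flow can be sketched as below. This is a simplification under
assumptions: the dict shapes and helper names are made up for the example, not
the actual Nova patch code.

```python
# Illustrative sketch (not actual Nova code): persist the initiator-side
# connector inside connection_info at attach time, so a detach performed
# after the instance has moved hosts can still hand Cinder the original
# attach-time connector.

def store_connector_on_attach(bdm, connector, connection_info):
    """Save the attach-time connector in the BDM's connection_info."""
    info = dict(connection_info)
    info['connector'] = dict(connector)  # e.g. {'host': ..., 'initiator': ...}
    bdm['connection_info'] = info
    return bdm

def connector_for_detach(bdm, current_connector):
    """Prefer the saved connector when the host changed since attach."""
    saved = bdm['connection_info'].get('connector')
    if saved and saved.get('host') != current_connector.get('host'):
        return saved  # evacuated/migrated: detach with attach-time info
    return current_connector
```

The design point is simply that the connector is captured while the source
host is still reachable, so terminate_connection never has to reconstruct
initiator-side facts after the fact.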

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] FFE request for ConfigDB service

2016-03-01 Thread Oleg Gelbukh
Greetings,

As you might know, we are working on a centralised store for deployment
configuration data in Fuel. Such a store will allow external third-party
services to consume the entirety of the settings provided by Fuel to the
deployment mechanisms on target nodes. It will also allow the settings to
be managed and overridden via a simple client application.

This change is required to enable Puppet Master based LCM solution.

We request a FFE for this feature for 3 weeks, until Mar 24. By that
time, we will provide a tested solution in accordance with the following
specifications [1][2].

The feature includes 3 main components:
1. Extension to Nailgun API with separate DB structure to store serialized data
2. A backend library for Hiera that consumes the API in question to look up
values of certain parameters
3. An Astute task to download all serialized data from the nodes and upload
it to the ConfigDB API upon successful deployment of a cluster
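The override behaviour a Hiera-style lookup against such a store relies on can
be sketched as a toy model. Everything here is an assumption for illustration
(level names, merge semantics), not the actual Nailgun extension API:

```python
# Toy model of layered setting resolution: more specific levels
# (e.g. per-node overrides) win over less specific ones (cluster defaults).

def resolve_setting(key, levels, default=None):
    """Look up `key` in levels ordered from most to least specific."""
    for level in levels:
        if key in level:
            return level[key]
    return default

def effective_config(levels):
    """Merge all levels into one dict; the most specific level wins."""
    merged = {}
    for level in reversed(levels):  # apply the least specific level first
        merged.update(level)
    return merged
```

A client application overriding a setting would then simply write into a more
specific level and let resolution do the rest.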

Since the introduction of stevedore-based extensions [3], we can develop
extensions in separate code repos. This makes the Nailgun change
non-intrusive to the core code.
The backend library will be implemented in the fuel-library code tree and
packaged as a sub-package. This change also doesn't require changes to the
core code.
The Astute change will add a task to the flow. We will make this task
configurable, i.e. normally this code path won't be used at all. It also
won't touch the core code of Astute.

Overall, I consider this change low-risk for the integrity and timeline
of the release.

Please consider our request and share any concerns so we can properly
resolve them.

[1] 
https://blueprints.launchpad.net/fuel/+spec/upload-deployment-facts-to-configdb
[2] https://blueprints.launchpad.net/fuel/+spec/serialized-facts-nailgun-api
[3] https://blueprints.launchpad.net/fuel/+spec/stevedore-extensions-discovery

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Cinder - Wishlist

2016-03-01 Thread D'Angelo, Scott
I like the idea of a Cinder wishlist, perhaps titled 'Cinder Future Design
and Architecture List'. I think the Cinder community could benefit if we
continued to refine the work we did on this wishlist at the Mitaka midcycle
and spent a few minutes each cycle going over the list, prioritizing, and
commenting.
This seems useful for new contributors, such as Mohammed and his team, as well 
as others who wish to plan work.

Note: Future work for Cinder <-> Nova API changes are tracked and discussed 
here:
https://etherpad.openstack.org/p/cinder-nova-api-changes

Scott D'Angelo (scottda)

From: Michał Dulko [michal.du...@intel.com]
Sent: Tuesday, March 01, 2016 5:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Openstack Cinder - Wishlist

On 03/01/2016 11:31 AM, mohammed.asha...@wipro.com wrote:
>
> Hi,
>
>
>
> Would like to know if there’s  feature wish list/enhancement request
> for Open stack Cinder  I.e. a list of features that we would like to
> add to Cinder Block Storage ; but hasn’t been taken up for development
> yet.
>
> We have couple  developers who are interested to work on OpenStack
> Cinder... Hence would like to take a look at those wish list…
>
>
>
> Thanks ,
>
> Ashraf
>
>

Hi!

At the Cinder Midcycle Meetup in January we created a list of developers'
answers to "if you had the time, what would you want to sort out in
Cinder?". The list can be found at the bottom of the etherpad [1]. It may
seem a little vague to someone not familiar with Cinder's internals, so I
can provide some highlights:

* Quotas - Cinder has issues with quota management. Right now there are
efforts to sort this out.
* Notifications - we do not version or standardize notifications sent
over RPC. That's a problem if someone relies on them.
* A/A HA - there are ongoing efforts to make cinder-volume service
scalable in A/A manner.
* Cinder/Nova API - the way Nova talks with Cinder needs revisiting as
the limitations of current design are blocking us.
* State management - the way Cinder resource states are handled isn't
strongly defined. We may need some kind of state machine for that? (this
one is controversial ;)).
* Objectification - we've started converting Cinder to use
oslo.versionedobjects back in Kilo cycle. This still needs to be finished.
* Adding CI that tests rolling upgrades - starting from Mitaka we have
tech preview of upgrades without downtime. To get this feature out of
experimental stage we need a CI that will test it in gate.
* Tempest testing - we should increase our integration tests coverage.

If you're interested in any of these items feel free to ask me on IRC
(dulek on freenode) so I can point you to correct people for details.

Apart from that you can look through the blueprint list [2]. Note that a
lot of items there may be outdated and may not fit well into the current
state of Cinder.

[1] https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
[2] https://blueprints.launchpad.net/cinder

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Cinder - Wishlist

2016-03-01 Thread Michał Dulko
On 03/01/2016 11:31 AM, mohammed.asha...@wipro.com wrote:
>
> Hi,
>
>  
>
> Would like to know if there’s  feature wish list/enhancement request
> for Open stack Cinder  I.e. a list of features that we would like to
> add to Cinder Block Storage ; but hasn’t been taken up for development
> yet.
>
> We have couple  developers who are interested to work on OpenStack
> Cinder... Hence would like to take a look at those wish list…
>
>  
>
> Thanks ,
>
> Ashraf
>
>

Hi!

At the Cinder Midcycle Meetup in January we created a list of developers'
answers to "if you had the time, what would you want to sort out in
Cinder?". The list can be found at the bottom of the etherpad [1]. It may
seem a little vague to someone not familiar with Cinder's internals, so I
can provide some highlights:

* Quotas - Cinder has issues with quota management. Right now there are
efforts to sort this out.
* Notifications - we do not version or standardize notifications sent
over RPC. That's a problem if someone relies on them.
* A/A HA - there are ongoing efforts to make cinder-volume service
scalable in A/A manner.
* Cinder/Nova API - the way Nova talks with Cinder needs revisiting as
the limitations of current design are blocking us.
* State management - the way Cinder resource states are handled isn't
strongly defined. We may need some kind of state machine for that? (this
one is controversial ;)).
* Objectification - we've started converting Cinder to use
oslo.versionedobjects back in Kilo cycle. This still needs to be finished.
* Adding CI that tests rolling upgrades - starting from Mitaka we have
tech preview of upgrades without downtime. To get this feature out of
experimental stage we need a CI that will test it in gate.
* Tempest testing - we should increase our integration tests coverage.
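To make the "State management" item above concrete, an explicit transition
table is one shape such a state machine could take. The states and transitions
below are a deliberately simplified illustration, not Cinder's actual model:

```python
# Illustrative volume state machine: allowed transitions are explicit,
# so an invalid move fails loudly instead of leaving a resource stuck.

VALID_TRANSITIONS = {
    'available': {'attaching', 'deleting'},
    'attaching': {'in-use', 'available'},  # success, or rollback on failure
    'in-use':    {'detaching'},
    'detaching': {'in-use', 'available'},  # rollback, or success
    'deleting':  set(),                    # terminal
}

def transition(current, target):
    """Return the new state, or raise on a disallowed transition."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError('invalid transition: %s -> %s' % (current, target))
    return target
```

The appeal of the explicit table is that "stuck" states like attaching can
only be left through enumerated paths, which is exactly the gap force-detach
discussions keep running into.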

If you're interested in any of these items feel free to ask me on IRC
(dulek on freenode) so I can point you to correct people for details.

Apart from that you can look through the blueprint list [2]. Note that a
lot of items there may be outdated and may not fit well into the current
state of Cinder.

[1] https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
[2] https://blueprints.launchpad.net/cinder

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: "Guz Egor" , "OpenStack Development Mailing List (not 
> for usage questions)"
> 
> 
> - Original Message -
> > From: "Guz Egor" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > 
> > Adrian,
> > I disagree, host OS is very important for operators because of integration
> > with all internal tools/repos/etc.
> > I think it make sense to limit OS support in Magnum main source. But not
> > sure
> > that Fedora Atomic is right choice, first of all there is no documentation
> > about it and I don't think it's used/tested a lot by Docker/Kub/Mesos
> > community.
> 
> Project Atomic documentation for the most part lives here:
> 
> http://www.projectatomic.io/docs/
> 
> To help us improve it, it would be useful to know what you think is missing.
> E.g. I saw recently in the IRC channel it was discussed that there is no
> documentation on (re)building the image but this is the first hit in a
> Google search for same and it seems to largely match what has been copied
> into Magnum's docs for same:
> 
> 
> http://www.projectatomic.io/blog/2014/08/build-your-own-atomic-centos-or-fedora/
> 
> I have no doubt that there are areas where the documentation is lacking, but
> it's difficult to resolve a claim that there is no documentation at all. I
> recently kicked off a thread over on the atomic list to try and relay some
> of the concerns that were raised on this list and in the IRC channel
> recently, it would be great if Magnum folks could chime in with more
> specifics:
> 
> 
> https://lists.projectatomic.io/projectatomic-archives/atomic/2016-February/thread.html#9
> 
> Separately I had asked about containerization of kubernetes/etcd/flannel
> which remains outstanding:
> 
> 
> https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/XICO4NJCTPI43AWG332EIM2HNFYPZ6ON/
> 
> Fedora Atomic builds do seem to be hitting their planned two weekly update
> cadence now though which may alleviate this concern at least somewhat in the
> interim:
> 
> 
> https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/CW5BQS3ODAVYJGAJGAZ6UA3XQMKEISVJ/
> https://fedorahosted.org/cloud/ticket/139
> 
> Thanks,
> 
> Steve

I meant to add, I don't believe choosing a single operating system image to 
support - regardless of which it is - is the right move for users and largely 
agree with what Ton Ngo put forward in his most recent post in the thread. I'm 
simply highlighting that there are folks willing/able to work on improving 
things from the Atomic side and we are endeavoring to provide them actionable 
feedback from the Magnum community to do so.

Thanks,

Steve

> > It make sense to go with Ubuntu (I believe it's still most adopted
> > platform in all three COEs and OpenStack deployments) and CoreOS (is
> > highly adopted/tested in Kub community and Mesosphere DCOS uses it as
> > well).
> >  We can implement CoreOS support as driver and users can use it as
> >  reference
> > implementation.
> 
> 
> > --- Egor
> >   From: Adrian Otto 
> >  To: OpenStack Development Mailing List (not for usage questions)
> >  
> >  Sent: Monday, February 29, 2016 10:36 AM
> >  Subject: Re: [openstack-dev] [magnum] Discussion of supporting
> >  single/multiple OS distro
> >
> > Consider this: Which OS runs on the bay nodes is not important to end
> > users.
> > What matters to users is the environments their containers execute in,
> > which
> > has only one thing in common with the bay node OS: the kernel. The linux
> > syscall interface is stable enough that the various linux distributions can
> > all run concurrently in neighboring containers sharing same kernel. There
> > is
> > really no material reason why the bay OS choice must match what distro the
> > container is based on. Although I’m persuaded by Hongbin’s concern to
> > mitigate risk of future changes WRT whatever OS distro is the prevailing
> > one
> > for bay nodes, there are a few items of concern about duality I’d like to
> > zero in on:
> > 1) Participation from Magnum contributors to support the CoreOS specific
> > template features has been weak in recent months. By comparison,
> > participation relating to Fedora/Atomic have been much stronger.
> > 2) Properly testing multiple bay node OS distros (would) significantly
> > increase the run time and complexity of our functional tests.
> > 3) Having support for multiple bay node OS choices requires more extensive
> > documentation, and more comprehensive troubleshooting details.
> > If we proceed with just one supported disto for bay nodes, and offer
> > extensibility points to allow alternates to be used in place of it, we
> > should be able to address the risk 
