Re: [openstack-dev] [kuryr] Did kuryr need to know about Docker's cluster store?

2016-03-03 Thread Antoni Segura Puimedon
On Fri, Mar 4, 2016 at 5:01 AM, Vikas Choudhary
 wrote:
> Since libnetwork talks to the cluster store independently of the plugin, I think no
> changes are required on the Kuryr side.

That's right. Docker handles all the KV store interaction itself, using the
data it receives from Kuryr. In fact, libnetwork expressly forbids plugins
from accessing the KV store, even for resources that are in flight.
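To make Mike's switch concrete: as far as I recall the Docker 1.9+ daemon options, pointing the daemon at etcd instead of consul is purely a daemon-side change, e.g. in /etc/docker/daemon.json (the endpoint and interface below are placeholders):

```json
{
  "cluster-store": "etcd://127.0.0.1:2379",
  "cluster-advertise": "eth0:2376"
}
```

Restart the docker daemon after editing; nothing in Kuryr's own configuration should need to change.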

>
>
> Regards
> Vikas
>
> On Thu, Mar 3, 2016 at 9:54 PM, Mike Spreitzer  wrote:
>>
>> On Feb 5 I was given a tar archive of kuryr with an install script that
>> configures the docker daemon to use consul as its cluster store.  If I
>> modify the docker config to use etcd instead, do I need to change
>> anything in Kuryr?
>>
>> Thanks,
>> Mike
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ceilometer] Unable to get ceilometer events for instances running on demo project

2016-03-03 Thread Qiming Teng
This is a usage question, not really meant for this -dev list. But
anyway, you may want to check that you have the following lines in your
ceilometer.conf file:

  [notification]
  store_events = True


Regards,
  Qiming
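Since Umar also asked for the python equivalent of `ceilometer event-list`, here is a hedged sketch. It assumes python-ceilometerclient's v2 API (`get_client` and `events.list(q=...)`, as I remember them from the liberty-era client); the credentials are placeholders. Only the query-building helper runs without a live service:

```python
def event_query(event_type):
    """Build the q= filter list that the ceilometer v2 client expects."""
    return [{"field": "event_type", "op": "eq", "value": event_type}]


def list_instance_events(event_type, **auth_kwargs):
    """Rough python equivalent of `ceilometer event-list` filtered by type.

    auth_kwargs are the usual os_username/os_password/os_tenant_name/
    os_auth_url values (placeholders; adjust for your cloud).
    """
    # Imported lazily so the query helper above is usable without the client.
    from ceilometerclient import client as cc
    cclient = cc.get_client("2", **auth_kwargs)
    return cclient.events.list(q=event_query(event_type))


# e.g. list_instance_events("compute.instance.create.start",
#                           os_username="demo", os_password="secret",
#                           os_tenant_name="demo",
#                           os_auth_url="http://controller:5000/v2.0")
```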

On Wed, Mar 02, 2016 at 10:56:53PM +0500, Umar Yousaf wrote:
> I have a single-node devstack liberty configuration working, and I want
> to record all the *ceilometer events* like compute.instance.start,
> compute.instance.end, compute.instance.update, etc. that occurred recently.
> I am unable to get any events for instances running in the demo
> project, i.e. when I try *ceilometer event-list* I end up with an empty list,
> but I can fortunately get all the necessary events for instances
> running in the admin project/tenant with the same command.
> In addition, I want to get these through the python client, so if someone
> could provide me with the equivalent python call, that would be more than
> handy.
> Thanks in advance :)
> Regards,
> Umar





Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaningful for Multi-region customers?

2016-03-03 Thread Kai Qiang Wu

Hi,

From my previous experience operating 3K+ nodes across 6+ regions of
OpenStack at Yandex, I found it useless to run Cinder and Neutron services
in a cross-region manner.

BTW, nova-neutron cross-region interaction is still a legitimate use case:
you may utilize one Neutron for many Nova regions.

  >>> Sorry, I don't quite understand here. Do you mean you want this
cross-region (nova-neutron) feature? Or is it already supported in OpenStack
nova and neutron?




--
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



  On Mar 4, 2016, at 7:43 AM, Dolph Mathews 
  wrote:

  Unless someone on the operations side wants to speak up and defend
  cross-region nova-cinder or nova-neutron interactions as being a
  legitimate use case, I'd be in favor of a single region identifier.

  However, both of these configuration blocks should ultimately be used
  to configure keystoneauth, so I would be in favor of whatever
  solution simplifies configuration for keystoneauth.

  On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu 
  wrote:
Hi All,


Right now, we found that nova.conf has many places for region_name
configuration. Check below:

nova.conf

***
[cinder]
os_region_name = ***

[neutron]
region_name= ***



***


From observation of some multi-region environments, those two options
would almost always be configured with the same value.
Question 1: Does nova support configuring different regions in
nova.conf ? Like below

[cinder]

os_region_name = RegionOne

[neutron]
region_name= RegionTwo


From Keystone's point of view, I suspect those regions are accessible
from each other.


Question 2: If they all need to be configured with the same value, why
not use a single region_name in nova.conf ? (instead of creating many
region_name options in the same file)

Is it just for code maintenance, or is there another consideration ?



Could nova and keystone community members help with this question ?


Thanks


Best Wishes,



Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193



Follow your heart. You are miracle!








Re: [openstack-dev] [OpenStack-docs] Fwd: [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Stephen Balukoff
Hello Lana!

Thank you for your prompt reply--  I found it extremely helpful!  Comments
inline:


> > So in the intervening days I've been going through the openstack-manuals,
> > openstack-doc-tools, and other repositories, trying to figure out where I
> > make my edits. I found both the CLI and API reference in the
> > openstack-manuals repository... but when I went to edit these files, I
> > noticed that there's a comment at the top stating they are auto-generated
> > and shouldn't be edited? It seemed odd to me that the results of
> something
> > auto-generated should be checked into a git repository instead of the
> > configuration which creates the auto-generated output... but it's not my
> > project, right?
>
> Believe me, we know. At the moment, this is the best we can do with what
> we have to work with.
>

Yep! I get it.



> > Anyway, so then I went to try to figure out how I get this auto-generated
> > output updated, and haven't found much (ha!) documented on the process...
> > when I sought help from Sam-I-Am, I was told that these essentially get
> > generated once per release by "somebody." So...  I'm done, right?
>
> That 'somebody', to be more specific, is the speciality team in charge of
> the book in question. We also do a full refresh of the scripts before each
> release.
>
>
Cool! Is that process documented anywhere? Or better yet, is there a way
for developers to do a "check experimental" or similar operation on any
given patch so we can see what the manual will look like after our API /
CLI updates (presumably in said patch)?



> > Well... I'm not so sure. Yes, if the CLI and API documentation gets
> > auto-generated from the right sources, we should be good to go on that
> > front, but how can I be sure the automated process is pulling this
> > information from the right place? Shouldn't there be some kind of
> > continuous integration or jenkins check which tests this that I can look
> > at? (And if such a thing exists, how am I supposed to find out about it?)
> >
> > Also, the new feature I've added is somewhat involved, and it could
> > probably use another document describing its intended use beyond the CLI
> /
> > API ref. Heck, we already created one in the OpenStack wiki... but I'm
> also
> > being told that we're trying to not rely on the wiki as much, per se, and
> > that anything in the wiki really ought to be moved into the "official"
> > documentation canon.
>
> Depending on the feature and project in question, I would usually
> recommend you add it to the appropriate documentation in your project repo.
> These are then published to
> http://docs.openstack.org/developer/[yourproject] and are considered
> official OpenStack documentation.
>
> If you want it added to the broader OpenStack documentation (the top level
> on the docs.openstack.org), then I suggest you open a bug, wait for a
> docs person to triage it (we can help advise on book/chapter, etc), and
> then create a patch against the book in the same way as you do for your
> project. If you don't want to write it yourself, that's fine. Open the bug,
> give as much detail as you can, and we'll take it from there.
>
>
Aah--  so I knew that the Octavia documentation we've committed thus far
showed up that way, but I'd been given the impression that we were doing the
wrong thing: that it was generally not considered good to have your
documentation live in your own custom non-openstack-manuals space, and that
you should instead work to get all of it moved into the OpenStack manual.

In any case it's good to know how you prefer people to contribute
changes to the manual: I expected y'all to want full editorial control over
everything that goes into the manual, but I didn't know you'd write
excerpts yourself on features that developers add and open
documentation bug reports for them. That sounds like a lot of work for you!



> >
> > So I'm at a loss. I'm a big fan of documentation as a communication
> > tool, and I'm an experienced OpenStack developer, but when I look in the
> > manual for how to contribute to the OpenStack documentation, I find a
> guide
> > that wants to walk me through setting up gerrit... and very little
> targeted
> > toward someone who already knows that, but just needs to know the actual
> > process for updating the manual (and which part of the manual should be
> > updated).
>
> There's not a lot of content here to share. You commit docs in exactly the
> same way as you commit code. If you already have the skills to commit code
> to an OpenStack project, then you know everything you need to know to
> commit to docs.
>

...except for the stuff that's in there right now that's auto-generated. :)
I wonder if there's a better way to communicate that developers generally
won't have to worry about updating API / CLI references manually as this is
all auto-generated... other than having been here long enough to know
that's the case. (FWIW, I dislike relying on tribal knowledge...)

Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-03-03 Thread Qiming Teng
On Fri, Mar 04, 2016 at 01:09:26PM +0800, Qiming Teng wrote:
> Another option is to try out senlin service. What you need to do is
> something like below:
> 
> 1. Create a heat template you want to deploy as a group, say,
> node_template.yaml
> 
> 2. Create a senlin profile spec (heat_stack.yaml) which may look
> like, for example:
> 
>   type: os.heat.stack
>   version: 1.0
>   properties:
> name: node_template
> template: node_template.yaml
> environment: shared_env.yaml
> 
> 3. Register the profile to senlin:
> 
>$ senlin profile-create -s heat_stack.yaml stack_profile
> 
>After this step, you can create individual instances (nodes) out of
> this profile.
> 
> 4. Create a cluster using the profile:
> 
>   $ senlin cluster-create -p stack_profile my_cluster
> 
> 5. Create a zone placement policy spec (zone_placement.yaml), which
> may look like:
> 
>   type: senlin.policy.zone_placement
>   version: 1.0
>   properties:
> zones:
>   - name: zone1
> weight: 100
>   - name: zone2
> weight: 50
> 
> 6. Initialize a policy object, which can be attached to any cluster:
> 
>   $ senlin policy-create -s zone_placement.yaml zone_policy
> 
> 7. Attach the above policy to your cluster:
> 
>   $ senlin cluster-policy-attach -p zone_policy my_cluster

Oh, I forgot to mention: this won't work at the moment, because we are not
sure that a stack as a whole can be placed into a single Nova
availability zone; there are other kinds of availability zones as well. The
above example works if the profile is of the os.nova.server type.

Anyway, this example hopefully shows you how things are done with the
senlin service.

Regards,
  Qiming
 
> Now, you can change your cluster's size at will, and the zone placement
> policy will be enforced when new nodes are added or existing nodes are
> removed. For example:
> 
>   $ senlin cluster-scale-out -c 10 my_cluster
> 
> This will add 10 nodes to your cluster and the nodes will be spread
> across the availability zones based on the weight you specified. When
> you scale in your cluster, the zone distribution is also evaluated.
> 
> If any help is needed, please stop by the #senlin IRC channel. We are more
> than happy to provide support.
> 
> Regards,
>   Qiming
> 
 




Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaningful for Multi-region customers?

2016-03-03 Thread Vladimir Eremin
Hi,

From my previous experience operating 3K+ nodes across 6+ regions of
OpenStack at Yandex, I found it useless to run Cinder and Neutron services
in a cross-region manner.

BTW, nova-neutron cross-region interaction is still a legitimate use case: you
may utilize one Neutron for many Nova regions.

--
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



> On Mar 4, 2016, at 7:43 AM, Dolph Mathews  wrote:
> 
> Unless someone on the operations side wants to speak up and defend 
> cross-region nova-cinder or nova-neutron interactions as being a legitimate 
> use case, I'd be in favor of a single region identifier.
> 
> However, both of these configuration blocks should ultimately be used to 
> configure keystoneauth, so I would be in favor of whatever solution 
> simplifies configuration for keystoneauth.
> 
> On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu  > wrote:
> Hi All,
> 
> 
> Right now, we found that nova.conf has many places for region_name
> configuration. Check below:
> 
> nova.conf
> 
> ***
> [cinder]
> os_region_name = ***
> 
> [neutron]
> region_name= ***
> 
> 
> 
> ***
> 
> 
> From observation of some multi-region environments, those two options would
> almost always be configured with the same value.
> Question 1: Does nova support configuring different regions in nova.conf ? Like
> below
> 
> [cinder]
> 
> os_region_name = RegionOne
> 
> [neutron]
> region_name= RegionTwo
> 
> 
> From Keystone's point of view, I suspect those regions are accessible from each other.
> 
> 
> Question 2: If they all need to be configured with the same value, why not use a single
> region_name in nova.conf ? (instead of creating many region_name options in the same file)
> 
> Is it just for code maintenance, or is there another consideration ?
> 
> 
> 
> Could nova and keystone community members help with this question ?
> 
> 
> Thanks
> 
> 
> Best Wishes,
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
> 
> E-mail: wk...@cn.ibm.com 
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
> 
> Follow your heart. You are miracle!
> 
> 
> 





Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-03-03 Thread Qiming Teng
Another option is to try out senlin service. What you need to do is
something like below:

1. Create a heat template you want to deploy as a group, say,
node_template.yaml

2. Create a senlin profile spec (heat_stack.yaml) which may look
like, for example:

  type: os.heat.stack
  version: 1.0
  properties:
name: node_template
template: node_template.yaml
environment: shared_env.yaml

3. Register the profile to senlin:

   $ senlin profile-create -s heat_stack.yaml stack_profile

   After this step, you can create individual instances (nodes) out of
this profile.

4. Create a cluster using the profile:

  $ senlin cluster-create -p stack_profile my_cluster

5. Create a zone placement policy spec (zone_placement.yaml), which
may look like:

  type: senlin.policy.zone_placement
  version: 1.0
  properties:
zones:
  - name: zone1
weight: 100
  - name: zone2
weight: 50

6. Initialize a policy object, which can be attached to any cluster:

  $ senlin policy-create -s zone_placement.yaml zone_policy

7. Attach the above policy to your cluster:

  $ senlin cluster-policy-attach -p zone_policy my_cluster

Now, you can change your cluster's size at will, and the zone placement
policy will be enforced when new nodes are added or existing nodes are
removed. For example:

  $ senlin cluster-scale-out -c 10 my_cluster

This will add 10 nodes to your cluster and the nodes will be spread
across the availability zones based on the weight you specified. When
you scale in your cluster, the zone distribution is also evaluated.
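To illustrate what the weights in step 5 imply, here is a back-of-the-envelope sketch of a weight-proportional split using largest-remainder rounding; senlin's actual placement algorithm may well differ in its details:

```python
def spread_by_weight(count, zones):
    """Split `count` nodes across (name, weight) zones proportionally,
    rounding with the largest-remainder method."""
    total = float(sum(weight for _, weight in zones))
    exact = [(name, count * weight / total) for name, weight in zones]
    alloc = {name: int(share) for name, share in exact}
    leftover = count - sum(alloc.values())
    # Hand any remaining nodes to the zones with the largest fractional parts.
    for name, share in sorted(exact, key=lambda item: item[1] - int(item[1]),
                              reverse=True):
        if leftover == 0:
            break
        alloc[name] += 1
        leftover -= 1
    return alloc


# With the weights above, scaling out by 10 nodes would land roughly as:
# spread_by_weight(10, [("zone1", 100), ("zone2", 50)]) -> {"zone1": 7, "zone2": 3}
```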

If any help is needed, please stop by the #senlin IRC channel. We are more
than happy to provide support.

Regards,
  Qiming




Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaningful for Multi-region customers?

2016-03-03 Thread Kai Qiang Wu
Hi Dolph,
It seems using one configuration option could simplify things, like below:

nova.conf:

**
client_region_name = RegionOne


All clients would use that region instead of creating many different
sections/properties (what nova does now) for that.



But I'd like to hear the nova/keystone developers' opinion on that. Why
was it designed like that ? :)
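As a thought experiment, the proposed single option could fall back per service, roughly like this stdlib sketch. Note that client_region_name and the fallback behaviour are hypothetical, mirroring the proposal above, not an existing nova option:

```python
import configparser


def region_for(conf_text, service):
    """Resolve a service's region: per-service option first, then the
    hypothetical shared client_region_name."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    if cfg.has_section(service):
        for key in ("region_name", "os_region_name"):
            if cfg.has_option(service, key):
                return cfg.get(service, key)
    return cfg.get("DEFAULT", "client_region_name", fallback=None)


NOVA_CONF = """
[DEFAULT]
client_region_name = RegionOne

[neutron]
region_name = RegionTwo
"""

# region_for(NOVA_CONF, "cinder")  -> "RegionOne" (falls back to the shared option)
# region_for(NOVA_CONF, "neutron") -> "RegionTwo" (explicit per-service override)
```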




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Dolph Mathews 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   04/03/2016 12:46 pm
Subject:Re: [openstack-dev] [keystone][nova] Many same "region_name"
configuration really meaningful for Multi-region customers?



Unless someone on the operations side wants to speak up and defend
cross-region nova-cinder or nova-neutron interactions as being a legitimate
use case, I'd be in favor of a single region identifier.

However, both of these configuration blocks should ultimately be used to
configure keystoneauth, so I would be in favor of whatever solution
simplifies configuration for keystoneauth.

On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu  wrote:
  Hi All,


  Right now, we found that nova.conf has many places for region_name
  configuration. Check below:

  nova.conf

  ***
  [cinder]
  os_region_name = ***

  [neutron]
  region_name= ***



  ***


  From observation of some multi-region environments, those two options would
  almost always be configured with the same value.
  Question 1: Does nova support configuring different regions in nova.conf ?
  Like below

  [cinder]

  os_region_name = RegionOne

  [neutron]
  region_name= RegionTwo


  From Keystone's point of view, I suspect those regions are accessible from
  each other.


  Question 2: If they all need to be configured with the same value, why not
  use a single region_name in nova.conf ? (instead of creating many
  region_name options in the same file )

  Is it just for code maintenance, or is there another consideration ?



  Could nova and keystone community members help with this question ?


  Thanks


  Best Wishes,
  


  Kai Qiang Wu (吴开强 Kennan)
  IBM China System and Technology Lab, Beijing

  E-mail: wk...@cn.ibm.com
  Tel: 86-10-82451647
  Address: Building 28(Ring Building), ZhongGuanCun Software Park,
  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
  


  Follow your heart. You are miracle!






[openstack-dev] [release][requirements] global requirements update squash for milestone

2016-03-03 Thread Doug Hellmann
We have a handful of requirements changes for community releases
that we need to land this week before fully freezing the repo. We're
starting to see merge conflicts, so I've combined them all into one
commit to make it easier to land the changes quickly.

https://review.openstack.org/288249 Updates for Mitaka 3 releases

replaces:

https://review.openstack.org/288220 - zaqar client
https://review.openstack.org/287751 - glance client
https://review.openstack.org/287963 - swift client
https://review.openstack.org/288219 - ironic client
https://review.openstack.org/263598 - senlin client

If I missed any, please follow up with other suggestions. We should
review and land these before approving any other changes.

Doug



Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Tom Fifield

On 03/03/16 23:12, Jonathan Proulx wrote:


To go a little further down my wish list, I'd really like to be able
to offer a standard selection of security groups for my site, not just
'default', but that may be a bit off this topic.  Briefly, my
motivation is that 'internal' here includes a number of different
netblocks, some with pretty weird masks, so users tend to use 0.0.0.0/0
when they don't really mean to, just to save some rather tedious
typing at setup time.


+1 - I've seen some clouds do this as part of their account creation 
process. bug ?
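Jonathan's "standard selection" wish could be scripted on top of the existing API today. Here is a hedged, pure-python sketch that expands a hypothetical site alias into per-netblock rule bodies shaped like Neutron security-group-rule arguments; the alias name and netblocks are made up for illustration:

```python
import ipaddress

# Hypothetical site-specific netblocks behind one friendly name.
SITE_ALIASES = {
    "internal": ["10.16.0.0/14", "172.20.4.0/22", "192.168.100.0/25"],
}


def ingress_rules(alias, protocol="tcp", port=22):
    """Expand a site alias into one ingress rule dict per netblock, so
    users never have to fall back to 0.0.0.0/0 out of typing fatigue."""
    rules = []
    for block in SITE_ALIASES[alias]:
        net = ipaddress.ip_network(block)  # validates the odd masks
        rules.append({
            "direction": "ingress",
            "protocol": protocol,
            "port_range_min": port,
            "port_range_max": port,
            "remote_ip_prefix": str(net),
        })
    return rules
```

Each dict could then be fed to a security-group-rule-create call (or its python-neutronclient equivalent) when a tenant is provisioned.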




[openstack-dev] What's Up, Doc? 4 March 2016

2016-03-03 Thread Lana Brindley

Hi everyone,

This week I've been concentrating on getting ready for the release, and I'm 
very pleased to announce that we now have not one but two release managers for 
Mitaka. Please welcome Brian and Olga, thanks to you both for stepping up to 
the challenge! I'm looking forward to working with you both to get this out the 
door. We also completed the docs core team review, with no changes for this 
month. Install Guide testing is now well underway, but we can always use more 
hands, so please consider getting involved. If you need help getting started, 
please contact Matt Kassawara, or any docs core.

== Progress towards Mitaka ==

33 days to go!

465 bugs closed so far for this release. There is a global bug smash event next 
week to try and hit as many Mitaka bugs as possible. You can join an in-person 
group near you, or participate remotely. Details here: 
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

Docs Testing
* Volunteers required!
* https://wiki.openstack.org/wiki/Documentation/MitakaDocTesting

API Docs
I'm very pleased to announce that Anne and the API docs team have managed to 
get HTML generation working directly from Swagger now. This is a massive leap 
forward for the API docs conversion. Well done to everyone involved!

Release Managers:
We now have two release managers for Mitaka: Brian Moss, and Olga Gusarenko. 
They'll be making sure we stay on track as we hurtle towards 7 April, and will 
be backed up by Anne, Andreas, myself, and of course our lovely core team.

== The Road to Austin ==

* The final round of ATC passes have now gone out.
* I have requested workrooms and fishbowls for the Austin Design Summit. I'll 
let you know when we get our allocation.
* You should be starting to think about booking travel and accommodation soon! 
If you need a visa to travel to the United States, there's more info here: 
https://www.openstack.org/summit/austin-2016/austin-and-travel/#visa

== Core Team reviews ==

I completed the core team review for March this week, and we decided to make no 
changes. We did, however, have some discussion about the stats we use to 
determine the core team membership. I currently use the 30 and 90 day 
russellbryant stats for determining review participation, and Stackalytics for 
commit participation, with emphasis being on the top 12 participants in each 
category. Feedback on the core team selection process is, as always, welcome. 
It is documented here: 
http://docs.openstack.org/contributor-guide/docs-review.html#achieving-a-core-reviewer-status

== Speciality Teams ==

'''HA Guide - Bogdan Dobrelya'''
No update this week.

'''Installation Guide - Matt Kassawara'''
More patches and testing for Mitaka.

'''Networking Guide - Edgar Magana'''
No update this week.

'''Security Guide - Nathaniel Dillon'''
No update this week.

'''User Guides - Joseph Robinson'''
A patch for the reorganisation is under review - shifting the command line 
content in the admin user guide to the cloud admin guide. Following this patch 
merging, the next step is moving down the task list, and completing edits to 
the content in the Cloud Admin guide to improve the document.

'''Ops and Arch Guides - Shilla Saebi'''
The work items for the Architecture Design Guide have been converted to bugs: 
https://bugs.launchpad.net/openstack-manuals/+bugs?field.tag=arch-guide We 
still need people to confirm the bugs.
A call for volunteers went out to the ops and docs MLs from Devon 
Boatwright. We are still looking for help and not getting many responses. We 
are considering doing a swarm or work session at the summit in Austin for the 
Arch guide.

'''API Docs - Anne Gentle'''
Patch fixing three of remaining 7 WADL bugs here: 
https://review.openstack.org/#/c/283114/
Patch that builds HTML from Swagger files here: 
https://review.openstack.org/#/c/286659/ This gets us over a big hurdle of 
making fairy-slipper "feature complete" while also ensuring the migration is 
complete. Great week, huge shoutout and thanks to Michael Krotscheck for the 
node/npm solution for building HTML.

'''Config Ref - Gauvain Pocentek'''
No update this week.

'''Training labs - Pranav Salunke, Roger Luethi'''
No update this week.

'''Training Guides - Matjaz Pancur'''
"Getting started" module for Training guides is published 
(http://docs.openstack.org/draft/training-guides/). Added "Local trainings" 
chapter to the Upstream training archive 
(http://docs.openstack.org/upstream-training/upstream-archives.html), cleanup 
patches (unused extensions, text redundancies and cleanup). Team meetings will 
be held on 1st and 3rd week in the month.

'''Hypervisor Tuning Guide - Joe Topjian'''
No update this week.

'''UX/UI Docs Guidelines - Linette Williams'''
Reviews continue for proposed UI panels in InVision.

== Doc team meeting ==

Next meetings:

The US meeting was held this week, at the new time. You can read the minutes 
here: 

Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaningful for Multi-region customers?

2016-03-03 Thread Dolph Mathews
Unless someone on the operations side wants to speak up and defend
cross-region nova-cinder or nova-neutron interactions as being a legitimate
use case, I'd be in favor of a single region identifier.

However, both of these configuration blocks should ultimately be used to
configure keystoneauth, so I would be in favor of whatever solution
simplifies configuration for keystoneauth.
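To make the keystoneauth point concrete: both config blocks ultimately feed an endpoint filter like the one below, and with a shared region only service_type would differ. A minimal sketch, where the helper names are mine and `Session.get_endpoint` is keystoneauth1's, to the best of my knowledge:

```python
def endpoint_filter(service_type, region_name, interface="public"):
    """The endpoint-selection arguments that both [cinder] and [neutron]
    effectively build; with a single region option, only service_type
    would differ between services."""
    return {"service_type": service_type,
            "region_name": region_name,
            "interface": interface}


def service_endpoint(session, service_type, region_name):
    # `session` would be a keystoneauth1.session.Session; not exercised here.
    return session.get_endpoint(**endpoint_filter(service_type, region_name))
```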

On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu  wrote:

> Hi All,
>
>
> Right now, we found that nova.conf has many places for region_name
> configuration. Check below:
>
> nova.conf
>
> ***
> [cinder]
> os_region_name = ***
>
> [neutron]
> region_name= ***
>
>
>
> ***
>
>
> From observation of some multi-region environments, those two options would
> almost always be configured with the same value.
> *Question 1: Does nova support configuring different regions in nova.conf ?
> Like below*
>
> [cinder]
>
> os_region_name = RegionOne
>
> [neutron]
> region_name= RegionTwo
>
>
> From Keystone's point of view, I suspect those regions are accessible from each other.
>
>
> *Question 2: If they all need to be configured with the same value, why not
> use a single region_name in nova.conf ?* (instead of creating many
> region_name options in the same file )
>
> Is it just for code maintenance, or is there another consideration ?
>
>
>
> Could nova and keystone community members help with this question ?
>
>
> Thanks
>
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> 
> Follow your heart. You are miracle!
>


Re: [openstack-dev] [kuryr] Did kuryr need to know about Docker's cluster store?

2016-03-03 Thread Vikas Choudhary
Since libnetwork talks to the cluster store independently of the plugin, I think no
changes are required on the Kuryr side.


Regards
Vikas

On Thu, Mar 3, 2016 at 9:54 PM, Mike Spreitzer  wrote:

> On Feb 5 I was given a tar archive of kuryr with an install script that
> configures the docker daemon to use consul as its cluster store.  If I
> modify the docker config to use etcd instead, do I need to change
> anything in Kuryr?
>
> Thanks,
> Mike
>
>
>
>
>


Re: [openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Armando M.
On 3 March 2016 at 18:35, Stephen Balukoff  wrote:

> Hi Armando,
>
> Please rest assured that I really am a fan of requiring. I realize that
> sarcasm doesn't translate to text, so you'll have to trust me when I say
> that I am not being sarcastic by saying that.
>
> However, I am not a fan of being given nebulous requirements and then
> being accused of laziness or neglect when I ask for help. More on that
> later.
>

> Also, the intent of my e-mail wasn't to call you out and I think you are
> right to require that new features be documented. I would do the same in
> your position.
>
> To start off, my humble suggestion would be to be kind and provide a TL;DR
>> before going in such depth, otherwise there's a danger of missing the
>> opportunity to reach out the right audience. I read this far because I felt
>> I was called into question (after all I am the one 'imposing' the
>> documentation requirement on the features we are delivering in Mitaka)!
>>
>
>> That said, If you are a seasoned LBaaS developer and you don't know where
>> LBaaS doc is located or how to contribute, that tells me that LBaaS docs
>> are chronically neglected, and the links below are a proof of my fear.
>>
>
> Yes, obviously. I am not interested in shoveling out blame. I'm interested
> in solutions to the problem.
>
> Also, how is telling us "wow, your documentation sucks" in any way helpful
> in an e-mail thread where I'm asking, essentially, "How do I go about
> fixing the documentation?"  If nothing else, it should provide evidence
> that there is a problem (which I am trying to point out in this e-mail
> thread!)
>
> In a nutshell, this is rather disastrous. Other Neutron developers
>> already contribute successfully to user docs [5]. Most of it is already
>> converted to rst and the tools you're familiar with are the ones used to
>> produce content (like specs).
>>
>
> Really? Which tools? tox? Are there templates somewhere? (I know there are
> spec templates... but what about openstack manual templates?)  If there are
> templates, where are they? Also, evidence that others are making
> contributions to the manual is not necessarily evidence that they're doing
> it correctly or consistently.
>
> You're referring to what is essentially tribal knowledge. This is not a
> good way to proceed if you want things to be consistent and done the best
> way.
>
> I have been doing this for a while (obviously not as long as some), and
> I've seen it done in many different ways in different projects. Where are
> the usable best practices guides?
>
>
>> My suggestion would be to forge your own path, and identify a place in
>> the networking-guide where to locate some relevant content that describe
>> LBaaS/Octavia: deployment architecture, features, etc. This is a long
>> journey, that requires help from all parties, but I believe that the
>> initiative needs to be driven from the LBaaS team as they are the custodian
>> of the knowledge.
>>
>>
> Again, the "figure it out" approach means you are going to get
> inconsistent results (like the current poor documentation that you linked).
> What I'm asking for in this e-mail is a guide on how it *should* be done
> that is consistent across OpenStack, that is actually consumable without
> having to read the whole of the OpenStack manual cover-to-cover. This needs
> to not be tribal knowledge if we are going to hold people accountable for
> not complying with an unwritten standard.
>
> You're not seeing a lack of initiative. Heck, the Neutron-LBaaS and
> Octavia projects have some of the most productive people working on them
> that I've seen anywhere. You're seeing lack of meaningful guidance, and
> lack of standards.
>
> I fear that if none of the LBaaS core members steps up and figure this
>> out, LBaaS will continue to be something everyone would like to adopt but
>> no-one knows how to, at least not by tapping directly at the open source
>> faucet.
>>
>
> Exactly what I fear as well. Please note that it is offensive to accuse a
> team of not stepping up when what I am doing in this very e-mail should be
> pretty good evidence that we are trying to step up.
>

There's no reason to be offended.

Rest assured that I have no interest in laying blame on anyone; that's not
how one gets positive results. I commend your desire to see this done
consistently, and I agree that we lack exhaustive documentation to produce
documentation! I was simply expressing the fear that the lack of guidance
and standards as you point out may end up deterring people from covering an
area (LBaaS documentation) that is in desperate need of attention, today.
At the risk of leading to the same ill effects, I'd rather have inconsistent
documentation than no documentation at all, but that's just my opinion, with
which you don't have to agree.


>
> Stephen
>
> --
> Stephen Balukoff
> Principal Technologist
> Blue Box, An IBM Company
> www.blueboxcloud.com
> sbaluk...@blueboxcloud.com
> 

Re: [openstack-dev] [telemetry][aodh] "aodh alarm list" vs "aodh alarm search"

2016-03-03 Thread Qiming Teng
On Fri, Mar 04, 2016 at 09:57:35AM +0800, liusheng wrote:
> Hi folks,
> Currently, we support the "aodh alarm list" and "aodh alarm
> search" commands to query alarms.  They both need a mandatory "--type"
> parameter, and I want to drop that limitation[1]. If we agree on that,
> the "alarm list" command will only be used to list all alarms and won't
> support any query parameters, so it will be equal to the "alarm search"
> command without any --query parameter specified.  The "alarm search"
> command is designed to support complex queries which can perform
> almost any filtering query; complex query is already
> supported in Gnocchi.  IRC meeting discussions [3].
> 
> So we don't need two overlapping interfaces and want to drop one:
> "alarm list" or "alarm search"?
> 
> i. The "alarm search" command requires users to post an expression in
> JSON format to perform a specific query; it is not easy to use and it
> is unlike customary practice (I mean the most common usage of filtering
> queries in other OpenStack projects), compared to the "alarm list
> --filter xxx=zzz" usage.
> 
> ii. We don't have a strong requirement to support *complex* query
> scenarios for alarms; we only have alarms and alarm history records
> in aodh.
> 
> iii. I personally think it is enough to support filtering queries with
> "--filter", which is easy to implement [2], and we have a plan to
> support pagination queries in aodh.
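For illustration, the two styles contrasted above boil down to the following (a minimal sketch; the field names and operators are hypothetical, not the actual aodh schemas):

```python
import json

def complex_query(op, field, value):
    # "alarm search" style: a JSON expression posted in the request body,
    # e.g. {"=": {"type": "threshold"}}
    return json.dumps({op: {field: value}})

def simple_filter(**kwargs):
    # "alarm list --filter" style: plain key=value pairs that map directly
    # onto query-string parameters, e.g. ?type=threshold
    return "&".join("%s=%s" % (k, v) for k, v in sorted(kwargs.items()))

print(complex_query("=", "type", "threshold"))
print(simple_filter(type="threshold", enabled="true"))
```

The JSON expression can nest arbitrary and/or/not operators, which is exactly the generality the thread argues aodh does not need.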

+100
 
I am really concerned that some projects keep inventing new things without
any notion of the cross-project guidelines. There has been a consensus to
remove individual CLIs [1] and guidance on filtering options [2]. If there
is really such a need to do a 'resource search', maybe it is not
just an aodh thing.

Just two cents from a user's perspective.


[1]
http://specs.openstack.org/openstack/openstack-specs/specs/deprecate-cli.html
[2]
http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
 




Re: [openstack-dev] [kuryr] Docker cannot find the created network with kuryr driver

2016-03-03 Thread Mars Ma
Hi Liping,
I applied your method and it works. So cool, many thanks!

Thanks & Best regards !
Mars Ma


On Thu, Mar 3, 2016 at 11:45 PM, Liping Mao (limao)  wrote:

> Hi Mars,
>
> I hit a similar problem before; it was because capability_scope was global.
> After I modified it to local, it worked.
>
> But it should already be fixed in the following patch:
> https://review.openstack.org/#/c/264653/
>
> You may check your capability_scope config or use the latest code to give
> it a try.
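For reference, a minimal sketch of the setting in question (the file path and section are assumptions and may differ per deployment):

```
# /etc/kuryr/kuryr.conf (hypothetical path)
[DEFAULT]
# "local" scope: the driver manages node-local resources; a "global" scope
# led to the network-not-found symptom described above.
capability_scope = local
```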
>
> Regards,
> Liping Mao
>
>
> From: Mars Ma 
> Reply-To: OpenStack List 
> Date: Thursday, March 3, 2016, 5:49 PM
> To: OpenStack List 
> Subject: [openstack-dev] [kuryr] Docker cannot find the created network
> with kuryr driver
>
>
> Thanks & Best regards !
> Mars Ma
> 
>
> -- Forwarded message --
> From: Gal Sagie 
> Date: Thu, Mar 3, 2016 at 5:41 PM
> Subject: Re: [kuryr] Docker cannot find the created network with kuryr
> driver
> To: Mars Ma 
>
>
> Hello Mars Ma,
>
> Please email this question to the OpenStack mailing list; I am sure
> someone will be able to help you there, and that way it will also be
> visible to others.
>
>
> On Thu, Mar 3, 2016 at 11:22 AM, Mars Ma  wrote:
>
>> hi Gal Sagie,
>>
>> Found your blog about kuryr and ovn integration :
>> http://galsagie.github.io/2015/10/10/kuryr-ovn/
>>
>> It's very useful; I really appreciate your sharing it.
>>
>> I still have a problem, can you share me any helpful debug info ?
>>
>> After using devstack to install kuryr, the kuryr and docker services are
>> OK, and I succeeded in creating a network with the docker command:
>> $ docker network create --driver=kuryr --ipam-driver=kuryr --subnet
>> 10.10.1.0/24 --gateway 10.10.1.1 --ip-range 10.10.1.0/24 foo
>> 4c15b6cf31bf55a1c17e6cbf30040a40cc5326b9a5b0c8ba696930dfbdf91c7c
>> $ docker network inspect
>> 4c15b6cf31bf55a1c17e6cbf30040a40cc5326b9a5b0c8ba696930dfbdf91c7c
>> []
>> Error: No such network:
>> 4c15b6cf31bf55a1c17e6cbf30040a40cc5326b9a5b0c8ba696930dfbdf91c7c
>>
>> So why can't docker find the network? It seems neutron created the
>> network successfully.
>>
>> Thanks in advance !
>>
>> Thanks & Best regards !
>> Mars Ma
>> 
>>
>
>
>
> --
> Best Regards ,
>
> The G.
>
>
>
>


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
Hm, I found out the reason:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1139-L1145
here we filter out parameters like "deleted", and that's why the API
behaves as described above.

So should we simply add "deleted" to the tuple, or is a microversion needed?
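The code referenced above is essentially whitelist-based filtering of query parameters; a simplified sketch of the behavior (illustrative only, not the actual nova source):

```python
# Options a (non-admin) request is allowed to filter on. "deleted" is not
# in the whitelist, so it is silently dropped rather than rejected -- which
# is why ?deleted=True can return servers that are not deleted.
ALLOWED_FILTERS = ('status', 'name', 'changes-since')

def remove_invalid_options(search_opts, allowed):
    """Return only the query options present in the whitelist."""
    return {k: v for k, v in search_opts.items() if k in allowed}

opts = {'status': 'ACTIVE', 'deleted': 'True'}
print(remove_invalid_options(opts, ALLOWED_FILTERS))
# {'status': 'ACTIVE'}
```

Adding "deleted" to the tuple would make the option reach the DB layer; whether that also needs a microversion is the API-change question raised above.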

On Fri, Mar 4, 2016 at 10:27 AM, Zhenyu Zheng 
wrote:

> Anyway, I updated the bug report:
> https://bugs.launchpad.net/nova/+bug/1552071
>
> and I will start to working on the bug first.
>
> On Fri, Mar 4, 2016 at 9:29 AM, Zhenyu Zheng 
> wrote:
>
>> Yes, so you are suggesting fixing the data returned when a non-admin user
>> uses 'nova list --deleted' but leaving non-admin 'nova list
>> --status=deleted' as is. Or would it be better to also submit a BP for next
>> cycle to add support for non-admins using '--status=deleted' with
>> microversions? Because in my opinion, if we allow non-admins to use "nova
>> list --deleted", there will be no reason for us to limit the use of
>> "--status=deleted".
>>
>> On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann <
>> mrie...@linux.vnet.ibm.com> wrote:
>>
>>>
>>>
>>> On 3/3/2016 10:02 AM, Matt Riedemann wrote:
>>>


 On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:

> Yes, I agree with you guys, I'm also OK for non-admin users to list
> their own instances no matter what status they are.
>
> My question is this:
> I have done some tests, yet we have 2 different ways to list deleted
> instances (not counting using changes-since):
>
> 1.
> "GET
> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
> HTTP/1.1"
> (nova list --status deleted in CLI)
> 2. REQ: curl -g -i -X GET
>
> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
> (nova
> list --deleted in CLI)
>
> for admin user, we can all get deleted instances(after the fix of
> Matt's
> patch).
>
> But for non-admin users, #1 is restricted here:
>
> https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350
>
> and it will return 403 error:
> RESP BODY: {"forbidden": {"message": "Only administrators may list
> deleted instances", "code": 403}}
>

 This is part of the API so if we were going to allow non-admins to query
 for deleted servers using status=deleted, it would have to be a
 microversion change. [1] I could also see that being policy-driven.

 It does seem odd and inconsistent though that non-admins can't query
 with status=deleted but they can query with deleted=True in the query
 options.


> and for #2 it will strangely return servers that are not in deleted
> status:
>

 This seems like a bug. I tried looking for something obvious in the code
 but I'm not seeing the issue, I'd suspect something down in the DB API
 code that's doing the filtering.


> DEBUG (connectionpool:387) "GET
> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
> HTTP/1.1" 200 3361
> DEBUG (session:235) RESP: [200] Content-Length: 3361
> X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
> X-OpenStack-Nova-API-Version Connection: keep-alive
> X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
> Content-Type: application/json
> RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
> "2016-02-29T06:24:16Z", "hostId":
> "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7",
> "addresses":
> {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32",
> "version":
> 4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
> {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
> "fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
> "links": [{"href":
> "
> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
> ",
>
> "rel": "self"}, {"href":
> "
> http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
> ",
>
> "rel": "bookmark"}], "key_name": null, "image": {"id":
> "6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
> "
> http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4
> ",
>
> "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
> "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
> "2016-02-29T06:24:16.00", "flavor": {"id": "1", "links": [{"href":
> "http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1;,
> "rel": "bookmark"}]}, "id": "ee8907c7-0730-4051-8426-64be44300e70",
> "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
> null, 

[openstack-dev] [trove][release] Releases for Trove and python-troveclient

2016-03-03 Thread Amrith Kumar
Members of the Trove team,

Earlier today I tagged Trove release 5.0.0.0b3[1] and python-troveclient 
version 2.1.0[2]. A request has also been submitted to update constraints to 
reflect the new client version[3].

Thanks to all who submitted code for features in this release. I know that 
there are a small number of FFE's for features that we would like to have in 
the Mitaka release. Let's continue to focus on those and get them done quickly 
so we can have them reviewed and merged soon.

I will be speaking with Craig when he is back online, and we will be working 
with the release team to get another version of the client with some of the 
client side changes that are required as part of these FFE's.

Thanks,

-amrith

[1] https://review.openstack.org/288199
[2] https://review.openstack.org/288200
[3] https://review.openstack.org/288201




Re: [openstack-dev] [OpenStack-docs] Fwd: [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 04/03/16 11:41, Stephen Balukoff wrote:
> Hah! Didn't realize that 'docs' had its own mailing list. XD Anyway, please
> see the e-mail below:

Thanks for drawing this to our attention.

> 
> -- Forwarded message --
> From: Stephen Balukoff 
> Date: Thu, Mar 3, 2016 at 4:56 PM
> Subject: [Neutron][LBaaS][Octavia][Docs] Need experienced contributor
> documentation best-practices and how-tos
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> 
> 
> Hello!
> 
> I have a problem I'm hoping someone can help with: I have gone through the
> task of completing a shiny new feature for an openstack project, and now
> I'm trying to figure out how to get that last all-important documentation
> step done so that people will know about this new feature and use it. But
> I'm having no luck figuring out how I actually go about doing this...
> 
> This started when I was told that in order to consider the feature
> "complete," I needed to make sure that it was documented in the openstack
> official documentation. I wholeheartedly agree with this: If it's not
> documented, very few people will know about it, let alone use it. And few
> things make an open-source contributor more sad than the idea that the work
> they've spent months or years completing isn't getting used.
> 
> So... No problem! I'm an experienced OpenStack developer, and I just spent
> months getting this major new feature through my project's gauntlet of an
> approval process. How hard could documenting it be, right?
> 
> So in the intervening days I've been going through the openstack-manuals,
> openstack-doc-tools, and other repositories, trying to figure out where I
> make my edits. I found both the CLI and API reference in the
> openstack-manuals repository... but when I went to edit these files, I
> noticed that there's a comment at the top stating they are auto-generated
> and shouldn't be edited? It seemed odd to me that the results of something
> auto-generated should be checked into a git repository instead of the
> configuration which creates the auto-generated output... but it's not my
> project, right?

Believe me, we know. At the moment, this is the best we can do with what we 
have to work with.

> 
> Anyway, so then I went to try to figure out how I get this auto-generated
> output updated, and haven't found much (ha!) documented on the process...
> when I sought help from Sam-I-Am, I was told that these essentially get
> generated once per release by "somebody." So...  I'm done, right?

That 'somebody', to be more specific, is the speciality team in charge of the 
book in question. We also do a full refresh of the scripts before each release.

> 
> Well... I'm not so sure. Yes, if the CLI and API documentation gets
> auto-generated from the right sources, we should be good to go on that
> front, but how can I be sure the automated process is pulling this
> information from the right place? Shouldn't there be some kind of
> continuous integration or jenkins check which tests this that I can look
> at? (And if such a thing exists, how am I supposed to find out about it?)
> 
> Also, the new feature I've added is somewhat involved, and it could
> probably use another document describing its intended use beyond the CLI /
> API ref. Heck, we already created on in the OpenStack wiki... but I'm also
> being told that we're trying to not rely on the wiki as much, per se, and
> that anything in the wiki really ought to be moved into the "official"
> documentation canon.

Depending on the feature and project in question, I would usually recommend you 
add it to the appropriate documentation in your project repo. These are then 
published to http://docs.openstack.org/developer/[yourproject] and are 
considered official OpenStack documentation.

If you want it added to the broader OpenStack documentation (the top level on 
the docs.openstack.org), then I suggest you open a bug, wait for a docs person 
to triage it (we can help advise on book/chapter, etc), and then create a patch 
against the book in the same way as you do for your project. If you don't want 
to write it yourself, that's fine. Open the bug, give as much detail as you 
can, and we'll take it from there.

> 
> So I'm at a loss. I'm a big fan of documentation as a communication
> tool, and I'm an experienced OpenStack developer, but when I look in the
> manual for how to contribute to the OpenStack documentation, I find a guide
> that wants to walk me through setting up gerrit... and very little targeted
> toward someone who already knows that, but just needs to know the actual
> process for updating the manual (and which part of the manual should be
> updated).

There's not a lot of content here to share. You commit docs in exactly the same 
way as you commit code. If you already have the skills to commit code to an 
OpenStack project, 

[openstack-dev] [TripleO] IPv4 network isolation testing update

2016-03-03 Thread Dan Prince
Some progress today:

After rebuilding the test-env workers nodes in our CI rack to support
multiple-nics we got a stack to go into CREATE_COMPLETE with network
isolation enabled in CI.

https://review.openstack.org/#/c/288163/

Have a look at the Ceph result here:

http://logs.openstack.org/63/288163/1/check-tripleo/gate-tripleo-ci-f22-ceph/a284105/console.html

After the stack goes to CREATE_COMPLETE it then failed promptly with a
HTTP 503 error. I think this might have been from python-keystoneclient's
init code (perhaps trying to get keystone endpoints or something). I haven't
quite tracked this down yet.

Anyways, this is progress. The stack completed, validations passed, but
it failed during post deployment configuration somewhere. If anyone has
ideas in the meantime please feel free to comment here or on the
patches.

Dan



Re: [openstack-dev] [ironic] Midcycle summary part 3/6

2016-03-03 Thread Fujita, Daisuke
Hi, Lucas,

Thank you for your correspondence.

> I think we meant UEFI >= 2.4 there. 
That makes sense!

> Apparently in the UEFI version 2.4 some feature was introduced that is 
> required for the boot from volume
> case.
>
Which feature of UEFI 2.4 are you referring to?
It would be very helpful if you could point me to the part of the
specification[1] it is in.

Is it section 10.11/10.12 (or others) of the specification[1]?

[1] http://www.uefi.org/sites/default/files/resources/UEFI_2.4.pdf

Best regards
Daisuke Fujita


> -Original Message-
> From: Lucas Alvares Gomes [mailto:lucasago...@gmail.com]
> Sent: Thursday, March 03, 2016 7:59 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [ironic] Midcycle summary part 3/6
> 
> Hi Daisuke,
> 
> On Thu, Mar 3, 2016 at 10:53 AM, Fujita, Daisuke
>  wrote:
> > Hi, Jim, Julia, and Ironicers,
> >
> > I have some questions about the BFV.
> >
> >> > * Hardware supports the UEFI 2.4 spec
> >
> > Could you please explain the reason you chose version 2.4 rather than
> > 2.5/2.6/2.3.1?
> > In the etherpad[1], I was not able to read the reason why ironicers chose 
> > version 2.4.
> >  [1] https://etherpad.openstack.org/p/ironic-mitaka-midcycle
> >
> 
> I think we meant UEFI >= 2.4 there. Apparently in the UEFI version 2.4
> some feature was introduced that is required for the boot from volume
> case.
> 
> Hope that helps,
> Lucas
> 



Re: [openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Stephen Balukoff
Hi Armando,

Please rest assured that I really am a fan of requiring. I realize that
sarcasm doesn't translate to text, so you'll have to trust me when I say
that I am not being sarcastic by saying that.

However, I am not a fan of being given nebulous requirements and then being
accused of laziness or neglect when I ask for help. More on that later.

Also, the intent of my e-mail wasn't to call you out and I think you are
right to require that new features be documented. I would do the same in
your position.

To start off, my humble suggestion would be to be kind and provide a TL;DR
> before going in such depth, otherwise there's a danger of missing the
> opportunity to reach out the right audience. I read this far because I felt
> I was called into question (after all I am the one 'imposing' the
> documentation requirement on the features we are delivering in Mitaka)!
>

> That said, If you are a seasoned LBaaS developer and you don't know where
> LBaaS doc is located or how to contribute, that tells me that LBaaS docs
> are chronically neglected, and the links below are a proof of my fear.
>

Yes, obviously. I am not interested in shoveling out blame. I'm interested
in solutions to the problem.

Also, how is telling us "wow, your documentation sucks" in any way helpful
in an e-mail thread where I'm asking, essentially, "How do I go about
fixing the documentation?"  If nothing else, it should provide evidence
that there is a problem (which I am trying to point out in this e-mail
thread!)

In a nutshell, this is rather disastrous. Other Neutron developers
> already contribute successfully to user docs [5]. Most of it is already
> converted to rst and the tools you're familiar with are the ones used to
> produce content (like specs).
>

Really? Which tools? tox? Are there templates somewhere? (I know there are
spec templates... but what about openstack manual templates?)  If there are
templates, where are they? Also, evidence that others are making
contributions to the manual is not necessarily evidence that they're doing
it correctly or consistently.

You're referring to what is essentially tribal knowledge. This is not a
good way to proceed if you want things to be consistent and done the best
way.

I have been doing this for a while (obviously not as long as some), and
I've seen it done in many different ways in different projects. Where are
the usable best practices guides?


> My suggestion would be to forge your own path, and identify a place in the
> networking-guide where to locate some relevant content that describe
> LBaaS/Octavia: deployment architecture, features, etc. This is a long
> journey, that requires help from all parties, but I believe that the
> initiative needs to be driven from the LBaaS team as they are the custodian
> of the knowledge.
>
>
Again, the "figure it out" approach means you are going to get inconsistent
results (like the current poor documentation that you linked). What I'm
asking for in this e-mail is a guide on how it *should* be done that is
consistent across OpenStack, that is actually consumable without having to
read the whole of the OpenStack manual cover-to-cover. This needs to not be
tribal knowledge if we are going to hold people accountable for not
complying with an unwritten standard.

You're not seeing a lack of initiative. Heck, the Neutron-LBaaS and Octavia
projects have some of the most productive people working on them that I've
seen anywhere. You're seeing lack of meaningful guidance, and lack of
standards.

I fear that if none of the LBaaS core members steps up and figure this out,
> LBaaS will continue to be something everyone would like to adopt but no-one
> knows how to, at least not by tapping directly at the open source faucet.
>

Exactly what I fear as well. Please note that it is offensive to accuse a
team of not stepping up when what I am doing in this very e-mail should be
pretty good evidence that we are trying to step up.

Stephen

-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
Anyway, I updated the bug report:
https://bugs.launchpad.net/nova/+bug/1552071

and I will start to working on the bug first.

On Fri, Mar 4, 2016 at 9:29 AM, Zhenyu Zheng 
wrote:

> Yes, so you are suggesting fixing the data returned when a non-admin user
> uses 'nova list --deleted' but leaving non-admin 'nova list --status=deleted'
> as is. Or would it be better to also submit a BP for next cycle to add
> support for non-admins using '--status=deleted' with microversions? Because
> in my opinion, if we allow non-admins to use "nova list --deleted", there
> will be no reason for us to limit the use of "--status=deleted".
>
> On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>>
>>
>> On 3/3/2016 10:02 AM, Matt Riedemann wrote:
>>
>>>
>>>
>>> On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:
>>>
 Yes, I agree with you guys, I'm also OK for non-admin users to list
 their own instances no matter what status they are.

 My question is this:
 I have done some tests, yet we have 2 different ways to list deleted
 instances (not counting using changes-since):

 1.
 "GET
 /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
 HTTP/1.1"
 (nova list --status deleted in CLI)
 2. REQ: curl -g -i -X GET

 http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
 (nova
 list --deleted in CLI)

 for admin user, we can all get deleted instances(after the fix of Matt's
 patch).

 But for non-admin users, #1 is restricted here:

 https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350

 and it will return 403 error:
 RESP BODY: {"forbidden": {"message": "Only administrators may list
 deleted instances", "code": 403}}

>>>
>>> This is part of the API so if we were going to allow non-admins to query
>>> for deleted servers using status=deleted, it would have to be a
>>> microversion change. [1] I could also see that being policy-driven.
>>>
>>> It does seem odd and inconsistent though that non-admins can't query
>>> with status=deleted but they can query with deleted=True in the query
>>> options.
>>>
>>>
 and for #2 it will strangely return servers that are not in deleted
 status:

>>>
>>> This seems like a bug. I tried looking for something obvious in the code
>>> but I'm not seeing the issue, I'd suspect something down in the DB API
>>> code that's doing the filtering.
>>>
>>>
 DEBUG (connectionpool:387) "GET
 /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
 HTTP/1.1" 200 3361
 DEBUG (session:235) RESP: [200] Content-Length: 3361
 X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
 X-OpenStack-Nova-API-Version Connection: keep-alive
 X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
 Content-Type: application/json
 RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
 "2016-02-29T06:24:16Z", "hostId":
 "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
 {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version":
 4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
 {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
 "fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
 "links": [{"href":
 "
 http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
 ",

 "rel": "self"}, {"href":
 "
 http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
 ",

 "rel": "bookmark"}], "key_name": null, "image": {"id":
 "6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
 "
 http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4
 ",

 "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
 "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
 "2016-02-29T06:24:16.00", "flavor": {"id": "1", "links": [{"href":
 "http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1",
 "rel": "bookmark"}]}, "id": "ee8907c7-0730-4051-8426-64be44300e70",
 "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
 null, "OS-EXT-AZ:availability_zone": "nova", "user_id":
 "da935c024dc1444abb7b32390eac4e0b", "name": "test_inject", "created":
 "2016-02-29T06:24:08Z", "tenant_id": "62bfb653eb0d4d5cabdf635dd8181313",
 "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached":
 [], "accessIPv4": "", "accessIPv6": "", "progress": 0,
 "OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": {}},
 {"status": "ACTIVE", "updated": "2016-02-29T06:21:22Z", "hostId":
 

[openstack-dev] [telemetry][aodh] "aodh alarm list" vs "aodh alarm search"

2016-03-03 Thread liusheng

Hi folks,
Currently, we support both "aodh alarm list" and "aodh alarm search" 
commands to query alarms. They both require a mandatory "--type" 
parameter, and I want to drop that limitation[1]. If we agree on that, 
"alarm list" will only be used to list all alarms and won't support any 
query parameters, making it equal to the "alarm search" command without 
any --query parameter specified. The "alarm search" command is designed 
to support complex queries which can perform almost all filtering; this 
complex query support already exists in Gnocchi. See the IRC meeting 
discussions [3].


So we don't need two overlapping interfaces and want to drop one. Which 
should it be: "alarm list" or "alarm search"?


i. The "alarm search" command needs users to post an expression in JSON 
format to perform a specific query. It is not easy to use and differs from 
customary practice (I mean the most common filtering-query usages of other 
OpenStack projects), compared to the "alarm list --filter xxx=zzz" usage.


ii. We don't have a strong requirement to support *complex* query 
scenarios for alarms; we only have alarms and alarm history records in aodh.


iii. I personally think it is enough to support filtering queries with 
"--filter", which is easy to implement [2], and we plan to support 
pagination queries in aodh.


Any thoughts?

[1] https://review.openstack.org/#/c/283958/
[2] https://review.openstack.org/#/c/283959/
[3] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-03-03.log.html#t2016-03-03T15:21:14
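To make the contrast concrete, here is a small sketch in plain Python (illustrative only; the payload shapes mimic the two styles being compared, not aodh's exact client API):

```python
import json

# "alarm list --filter k=v" style: a flat AND of field=value pairs,
# the customary practice in other OpenStack CLIs.
def filter_style(**filters):
    return filters

# "alarm search" style: a complex-query expression posted as JSON,
# e.g. aodh alarm search --query '{"and": [...]}'.
def search_style(filters):
    expr = {"and": [{"=": {k: v}} for k, v in filters.items()]}
    return json.dumps(expr)

print(filter_style(state="alarm"))       # {'state': 'alarm'}
print(search_style({"state": "alarm"}))  # {"and": [{"=": {"state": "alarm"}}]}
```

The second form can express arbitrary boolean trees, but for the simple equality filters we actually need, the first is clearly easier to type and to read.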




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Armando M.
On 3 March 2016 at 16:56, Stephen Balukoff  wrote:

> Hello!
>
> I have a problem I'm hoping someone can help with: I have gone through the
> task of completing a shiny new feature for an openstack project, and now
> I'm trying to figure out how to get that last all-important documentation
> step done so that people will know about this new feature and use it. But
> I'm having no luck figuring out how I actually go about doing this...
>
> This started when I was told that in order to consider the feature
> "complete," I needed to make sure that it was documented in the openstack
> official documentation. I wholeheartedly agree with this: If it's not
> documented, very few people will know about it, let alone use it. And few
> things make an open-source contributor more sad than the idea that the work
> they've spent months or years completing isn't getting used.
>
> So... No problem! I'm an experienced OpenStack developer, and I just spent
> months getting this major new feature through my project's gauntlet of an
> approval process. How hard could documenting it be, right?
>
> So in the intervening days I've been going through the openstack-manuals,
> openstack-doc-tools, and other repositories, trying to figure out where I
> make my edits. I found both the CLI and API reference in the
> openstack-manuals repository... but when I went to edit these files, I
> noticed that there's a comment at the top stating they are auto-generated
> and shouldn't be edited? It seemed odd to me that the results of something
> auto-generated should be checked into a git repository instead of the
> configuration which creates the auto-generated output... but it's not my
> project, right?
>
> Anyway, so then I went to try to figure out how I get this auto-generated
> output updated, and haven't found much (ha!) documented on the process...
> when I sought help from Sam-I-Am, I was told that these essentially get
> generated once per release by "somebody." So...  I'm done, right?
>
> Well... I'm not so sure. Yes, if the CLI and API documentation gets
> auto-generated from the right sources, we should be good to go on that
> front, but how can I be sure the automated process is pulling this
> information from the right place? Shouldn't there be some kind of
> continuous integration or jenkins check which tests this that I can look
> at? (And if such a thing exists, how am I supposed to find out about it?)
>
> Also, the new feature I've added is somewhat involved, and it could
> probably use another document describing its intended use beyond the CLI /
> API ref. Heck, we already created one in the OpenStack wiki... but I'm also
> being told that we're trying to not rely on the wiki as much, per se, and
> that anything in the wiki really ought to be moved into the "official"
> documentation canon.
>
> So I'm at a loss. I'm a big fan of documentation as a communication
> tool, and I'm an experienced OpenStack developer, but when I look in the
> manual for how to contribute to the OpenStack documentation, I find a guide
> that wants to walk me through setting up gerrit... and very little targeted
> toward someone who already knows that, but just needs to know the actual
> process for updating the manual (and which part of the manual should be
> updated).
>
> When I went back to Sam-I-Am about this, this spawned a much larger
> discussion and he suggested I bring this up on the mailing list because
> there might be some "big picture" issues at play that should get a larger
> discussion. So... here I am.
>
> Here's what I think the problem is:
>
> * We want developers to document the features they add or modify
> * We want developers to provide good user, operator, etc. documentation
> that actual users, operators, etc. can use to understand and use the
> software we're writing.
> * We even go so far as to say that a feature is not complete unless it has
> this documentation (which I agree with)
>

If you agree with this, why do you bring it up twice? :)


> * With a rather small openstack-docs contributor team, we want to automate
> as much as possible, and rely on the docs team to *edit* documentation
> written by developers instead of writing the docs themselves (which is more
> time consuming for the docs team to do, and may miss important things only
> the developers know about.)
>
> But:
>
> * We don't actually provide much help to the developers to know how to do
> this. We have plenty for people who are new to OpenStack to get started
> with gerrit--  but there doesn't seem to be much practical help on where to
> get started, as an experienced contributor to other projects, on the actual
> task of updating the manual.
>
> And I would wager:
>
> * We don't seem to have many automated tools that tie into the jenkins
> gate checks to make sure that new features are properly documented.
> * We need something better than the 'APIImpact' and 'DocImpact' flags you
> can add to a commit message which 

Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
Yes, so you are suggesting that we fix the return data when a non-admin user
runs 'nova list --deleted', but leave non-admin use of 'nova list
--status=deleted' as is. Or would it be better to also submit a BP for the
next cycle to add support for non-admins using '--status=deleted' via a
microversion? Because in my opinion, if we allow non-admins to use "nova list
--deleted", there is no reason for us to limit the use of "--status=deleted".
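For reference, the two variants boil down to the following query strings (a hedged sketch; the endpoint and tenant id are copied from the paste quoted below, and token handling is omitted):

```python
from urllib.parse import urlencode

# Values copied from the pasted requests in this thread, for illustration.
BASE = "http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313"

def servers_detail_url(**params):
    # Build the GET /servers/detail URL with the given filter parameters.
    return BASE + "/servers/detail?" + urlencode(params)

# 1. nova list --status deleted  -> 403 for non-admins today
url_status = servers_detail_url(status="deleted")

# 2. nova list --deleted  -> allowed for non-admins, but returns
#    non-deleted servers, which looks like a bug
url_deleted = servers_detail_url(deleted="True")
```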

On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann 
wrote:

>
>
> On 3/3/2016 10:02 AM, Matt Riedemann wrote:
>
>>
>>
>> On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:
>>
>>> Yes, I agree with you guys, I'm also OK for non-admin users to list
>>> their own instances no matter what status they are.
>>>
>>> My question is this:
>>> I have done some tests, yet we have 2 different ways to list deleted
>>> instances (not counting using changes-since):
>>>
>>> 1.
>>> "GET
>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
>>> HTTP/1.1"
>>> (nova list --status deleted in CLI)
>>> 2. REQ: curl -g -i -X GET
>>>
>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>> (nova
>>> list --deleted in CLI)
>>>
>>> for admin user, we can all get deleted instances(after the fix of Matt's
>>> patch).
>>>
>>> But for non-admin users, #1 is restricted here:
>>>
>>> https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350
>>>
>>> and it will return 403 error:
>>> RESP BODY: {"forbidden": {"message": "Only administrators may list
>>> deleted instances", "code": 403}}
>>>
>>
>> This is part of the API so if we were going to allow non-admins to query
>> for deleted servers using status=deleted, it would have to be a
>> microversion change. [1] I could also see that being policy-driven.
>>
>> It does seem odd and inconsistent though that non-admins can't query
>> with status=deleted but they can query with deleted=True in the query
>> options.
>>
>>
>>> and for #2 it will strangely return servers that are not in deleted
>>> status:
>>>
>>
>> This seems like a bug. I tried looking for something obvious in the code
>> but I'm not seeing the issue, I'd suspect something down in the DB API
>> code that's doing the filtering.
>>
>>
>>> DEBUG (connectionpool:387) "GET
>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>> HTTP/1.1" 200 3361
>>> DEBUG (session:235) RESP: [200] Content-Length: 3361
>>> X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
>>> X-OpenStack-Nova-API-Version Connection: keep-alive
>>> X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
>>> Content-Type: application/json
>>> RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
>>> "2016-02-29T06:24:16Z", "hostId":
>>> "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
>>> {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version":
>>> 4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
>>> {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
>>> "fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
>>> "links": [{"href":
>>> "
>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
>>> ",
>>>
>>> "rel": "self"}, {"href":
>>> "
>>> http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
>>> ",
>>>
>>> "rel": "bookmark"}], "key_name": null, "image": {"id":
>>> "6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
>>> "
>>> http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4
>>> ",
>>>
>>> "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
>>> "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
>>> "2016-02-29T06:24:16.00", "flavor": {"id": "1", "links": [{"href":
>>> "http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1",
>>> "rel": "bookmark"}]}, "id": "ee8907c7-0730-4051-8426-64be44300e70",
>>> "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
>>> null, "OS-EXT-AZ:availability_zone": "nova", "user_id":
>>> "da935c024dc1444abb7b32390eac4e0b", "name": "test_inject", "created":
>>> "2016-02-29T06:24:08Z", "tenant_id": "62bfb653eb0d4d5cabdf635dd8181313",
>>> "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached":
>>> [], "accessIPv4": "", "accessIPv6": "", "progress": 0,
>>> "OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": {}},
>>> {"status": "ACTIVE", "updated": "2016-02-29T06:21:22Z", "hostId":
>>> "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
>>> {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:63:b0:12", "version":
>>> 4, "addr": "10.0.0.13", "OS-EXT-IPS:type": "fixed"},
>>> {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:63:b0:12", "version": 6, "addr":
>>> "fdb7:5d7b:6dcd:0:f816:3eff:fe63:b012", "OS-EXT-IPS:type": "fixed"}]},
>>> "links": 

Re: [openstack-dev] [Fuel] [murano] [yaql] yaql.js

2016-03-03 Thread Kirill Zaitsev
(and thus port YAQL to JS)

FYI, you’re not the first one to have that idea. =)

We have https://review.openstack.org/#/c/159905/3 an initial draft of how YAQL 
may look on JS. It’s outdated, but most certainly can be revived and finished 
if you have interest in helping us make it happen. =)

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 2 March 2016 at 14:01:57, Vitaly Kramskikh (vkramsk...@mirantis.com) wrote:

Oh, so there is a spec. I was worried because this patch has a 
"WIP-no-bprint-assigned-yet" string in the commit message, so I thought there 
was no spec for it. The commit message should be updated to avoid such 
confusion.

It's really good that I've seen this spec. There are plans to overhaul the UI 
data format description which we use for cluster and node settings, to solve 
some issues and implement long-awaited features like nested structures, so we 
might also want to deprecate our expression language and switch to YAQL (and 
thus port YAQL to JS).

2016-03-02 17:17 GMT+07:00 Vladimir Kuklin :
Vitaly

Thanks for bringing this up. Actually, the spec has been on review for almost 2 
weeks: https://review.openstack.org/#/c/282695/. Essentially, this does not 
introduce a new DSL but replaces the existing one with a more powerful, 
extendable language which is being actively developed within OpenStack, is 
already part of other projects (Murano, Mistral), has many more contributors, 
and can return not only booleans but arbitrary collections. So we want to 
deprecate the current Expression language that you wrote and replace it with 
YAQL for those reasons. You are not going to extend this Expression-based 
language within 3 weeks to support extensions, method overloading, and 
returning arbitrary collections (e.g. we also want to calculate cross-depends 
and requires fields on the fly, which requires returning a list of dicts), and 
then maintain all of that on your own, are you?

On Wed, Mar 2, 2016 at 10:09 AM, Vitaly Kramskikh  
wrote:
I think it's not a part of best practices to introduce changes like 
https://review.openstack.org/#/c/279714/ (adding yet another DSL to the 
project) without a blueprint and review and discussion of the spec.

2016-03-02 2:19 GMT+07:00 Alexey Shtokolov :
Fuelers,

I would like to request a feature freeze exception for "Unlock settings tab" 
feature [0]

This feature, combined with Task-based deployment [1] and LCM-readiness for 
Fuel deployment tasks [2], unlocks basic LCM in Fuel. We conducted a thorough 
redesign of this feature and split it into several granular changes [3]-[6] 
to allow users to change settings on deployed, partially deployed, stopped or 
erred clusters and then run redeployment using a particular graph (custom or 
calculated based on expected changes stored in the DB) and with new 
parameters.

We need 3 weeks after FF to finish this feature.
The risk of not delivering it within 3 weeks is low.

Patches on review or in progress:
https://review.openstack.org/#/c/284139/
https://review.openstack.org/#/c/279714/
https://review.openstack.org/#/c/286754/
https://review.openstack.org/#/c/286783/

Specs:
https://review.openstack.org/#/c/286713/
https://review.openstack.org/#/c/284797/
https://review.openstack.org/#/c/282695/
https://review.openstack.org/#/c/284250/


[0] https://blueprints.launchpad.net/fuel/+spec/unlock-settings-tab
[1] https://blueprints.launchpad.net/fuel/+spec/enable-task-based-deployment
[2] https://blueprints.launchpad.net/fuel/+spec/granular-task-lcm-readiness
[3] https://blueprints.launchpad.net/fuel/+spec/computable-task-fields-yaql
[4] https://blueprints.launchpad.net/fuel/+spec/store-deployment-tasks-history
[5] https://blueprints.launchpad.net/fuel/+spec/custom-graph-execution
[6] https://blueprints.launchpad.net/fuel/+spec/save-deployment-info-in-database

--
---
WBR, Alexey Shtokolov





--
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.





--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com


[openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Stephen Balukoff
Hello!

I have a problem I'm hoping someone can help with: I have gone through the
task of completing a shiny new feature for an openstack project, and now
I'm trying to figure out how to get that last all-important documentation
step done so that people will know about this new feature and use it. But
I'm having no luck figuring out how I actually go about doing this...

This started when I was told that in order to consider the feature
"complete," I needed to make sure that it was documented in the openstack
official documentation. I wholeheartedly agree with this: If it's not
documented, very few people will know about it, let alone use it. And few
things make an open-source contributor more sad than the idea that the work
they've spent months or years completing isn't getting used.

So... No problem! I'm an experienced OpenStack developer, and I just spent
months getting this major new feature through my project's gauntlet of an
approval process. How hard could documenting it be, right?

So in the intervening days I've been going through the openstack-manuals,
openstack-doc-tools, and other repositories, trying to figure out where I
make my edits. I found both the CLI and API reference in the
openstack-manuals repository... but when I went to edit these files, I
noticed that there's a comment at the top stating they are auto-generated
and shouldn't be edited? It seemed odd to me that the results of something
auto-generated should be checked into a git repository instead of the
configuration which creates the auto-generated output... but it's not my
project, right?

Anyway, so then I went to try to figure out how I get this auto-generated
output updated, and haven't found much (ha!) documented on the process...
when I sought help from Sam-I-Am, I was told that these essentially get
generated once per release by "somebody." So...  I'm done, right?

Well... I'm not so sure. Yes, if the CLI and API documentation gets
auto-generated from the right sources, we should be good to go on that
front, but how can I be sure the automated process is pulling this
information from the right place? Shouldn't there be some kind of
continuous integration or jenkins check which tests this that I can look
at? (And if such a thing exists, how am I supposed to find out about it?)

Also, the new feature I've added is somewhat involved, and it could
probably use another document describing its intended use beyond the CLI /
API ref. Heck, we already created one in the OpenStack wiki... but I'm also
being told that we're trying to not rely on the wiki as much, per se, and
that anything in the wiki really ought to be moved into the "official"
documentation canon.

So I'm at a loss. I'm a big fan of documentation as a communication
tool, and I'm an experienced OpenStack developer, but when I look in the
manual for how to contribute to the OpenStack documentation, I find a guide
that wants to walk me through setting up gerrit... and very little targeted
toward someone who already knows that, but just needs to know the actual
process for updating the manual (and which part of the manual should be
updated).

When I went back to Sam-I-Am about this, this spawned a much larger
discussion and he suggested I bring this up on the mailing list because
there might be some "big picture" issues at play that should get a larger
discussion. So... here I am.

Here's what I think the problem is:

* We want developers to document the features they add or modify
* We want developers to provide good user, operator, etc. documentation
that actual users, operators, etc. can use to understand and use the
software we're writing.
* We even go so far as to say that a feature is not complete unless it has
this documentation (which I agree with)
* With a rather small openstack-docs contributor team, we want to automate
as much as possible, and rely on the docs team to *edit* documentation
written by developers instead of writing the docs themselves (which is more
time consuming for the docs team to do, and may miss important things only
the developers know about.)

But:

* We don't actually provide much help to the developers to know how to do
this. We have plenty for people who are new to OpenStack to get started
with gerrit--  but there doesn't seem to be much practical help on where to
get started, as an experienced contributor to other projects, on the actual
task of updating the manual.

And I would wager:

* We don't seem to have many automated tools that tie into the jenkins gate
checks to make sure that new features are properly documented.
* We need something better than the 'APIImpact' and 'DocImpact' flags you
can add to a commit message, which generate docs project bug reports. These
are post-hoc back-filling at best, and as I understand it, they often mean
that some poor schmuck on the docs team will be the one who ends up writing
the docs for the feature the developer added, probably without the
developer's help.

Please understand: I 

Re: [openstack-dev] [ceilometer] Unable to get IPMI meter readings

2016-03-03 Thread Lu, Lianhao
Hi Kapil,

Currently, the ipmi pollsters can only get the ipmi data from the system bus 
due to security concerns. So you have to make sure the ceilometer-agent-ipmi 
is running on the same machine you want to get the hardware.ipmi.node.power 
metric from. Also, you should make sure your machine has the NodeManager 
feature and that it is enabled in your BIOS settings; otherwise the 
hardware.ipmi.node.power pollster won't be loaded, because it checks whether 
your machine supports NodeManager at load time.

-Lianhao Lu

> -Original Message-
> From: Kapil [mailto:kapil6...@gmail.com]
> Sent: Friday, March 04, 2016 2:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ceilometer] Unable to get IPMI meter
> readings
> 
> So, we upgraded our openstack install from Juno to Kilo 2015.1.1
> 
> Not sure if this fixed some stuff, but I can now get samples for
> hardware.ipmi.(fan|temperature). However, I want to get
> hardware.ipmi.node.power samples and I get the following error in
> the ceilometer log-
> 
> ERROR ceilometer.agent.base [-] Skip loading extension for
> hardware.ipmi.node.power
> 
> 
> I edited pipeline.yaml as follows-
> sources:
> - name: meter_ipmi
>   interval: 10
>   resources:
>   - "ipmi://"
>   meters:
>   - "hardware.ipmi.node.power"
>   sinks:
>   - ipmi_sink
> sinks:
>  - name: ipmi_sink
>   transformers:
>   publishers:
>   - notifier://?per_meter_topic=1
> 
> 
> I also checked "rabbitmqctl list_queues | grep metering" and all the
> queues are empty.
> 
> 
> Do I need to change anything in ceilometer.conf or on the controller
> nodes ? Currently, I am working only with the compute node and only
> running ceilometer queries from controller node.
> 
> 
> Thanks
> 
> 
> Regards,
> Kapil Agarwal
> 
> On Thu, Feb 25, 2016 at 12:20 PM, gordon chung  wrote:
> 
> 
>   at quick glance, it seems like data is being generated[1]. if you
> check
>   your queues (rabbitmqctl list_queues for rabbit), do you see
> any items
>   sitting on notification.sample queue or metering.sample
> queue? do you
>   receive other meters fine? maybe you can query db directly to
> verify
>   it's not a permission issue.
> 
>   [1] see: 2016-02-25 13:36:58.909 21226 DEBUG
> ceilometer.pipeline [-]
>   Pipeline meter_sink: Transform sample
>at 0x7f6b3630ae50> from 0 transformer _publish_samples
>   /usr/lib/python2.7/dist-packages/ceilometer/pipeline.py:296
> 
>   On 25/02/2016 8:43 AM, Kapil wrote:
>   > Below is the output of ceilometer-agent-ipmi in debug mode
>   >
>   > http://paste.openstack.org/show/488180/
>   > ᐧ
>   >
>   > Regards,
>   > Kapil Agarwal
>   >
>   > On Wed, Feb 24, 2016 at 8:18 PM, Lu, Lianhao
>  
>   > > wrote:
>   >
>   > On Feb 25, 2016 06:18, Kapil wrote:
>   >  > Hi
>   >  >
>   >  >
>   >  > I discussed this problem with gordc on the telemetry IRC
> channel
>   > but I
>   >  > am still facing issues.
>   >  >
>   >  > I am running the ceilometer-agent-ipmi on the compute
> nodes, I
>   > changed
>   >  > pipeline.yaml of the compute node to include the ipmi
> meters and
>   >  > resource as "ipmi://localhost".
>   >  >
>   >  > - name: meter_ipmi
>   >  >   interval: 60
>   >  >   resources:
>   >  >   - ipmi://localhost meters: - "hardware.ipmi.node*"
> -
>   >  >   "hardware.ipmi*" - "hardware.degree*" sinks: -
> meter_sink I
>   >  > have ipmitool installed on the compute nodes and
> restarted the
>   >  > ceilometer services on compute and controller nodes.
> Yet, I am not
>   >  > receiving any ipmi meters when I run "ceilometer meter-
> list". I also
>   >  > tried passing the hypervisor IP address and the ipmi
> address I get
>   >  > when I run "ipmitool lan print" to resources but to no
> avail.
>   >  >
>   >  >
>   >  > Please help in this regard.
>   >  >
>   >  >
>   >  > Thanks
>   >  > Kapil Agarwal
>   >
>   > Hi Kapil,
>   >
>   > Would you please turn on debug/verbose configurations
> and paste the
>   > log of ceilometer-agent-ipmi on
> http://paste.openstack.org ?
>   >
>   > -Lianhao Lu
>   >
> __
> 
>   > OpenStack Development Mailing List (not for usage
> questions)
>   > Unsubscribe:
>   > OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> 
>   >  requ...@lists.openstack.org?subject:unsubscribe>
>   > http://lists.openstack.org/cgi-
> 

Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-03-03 Thread Jay Pipes
Hi again, Yingxin, sorry for the delayed response... been traveling. 
Comments inline :)


On 03/01/2016 12:34 AM, Cheng, Yingxin wrote:

Hi,

I have simulated the distributed resource management with the incremental update model based on Jay's 
benchmarking framework: https://github.com/cyx1231st/placement-bench/tree/shared-state-demonstration. The 
complete result lies at http://paste.openstack.org/show/488677/. It's ran by a VM with 4 cores and 4GB RAM, 
and the mysql service is using the default settings with the "innodb_buffer_pool_size" setting to 
"2G". The number of simulated compute nodes are set to "300".


A few things.

First, in order to make any predictions or statements about a potential 
implementation's scaling characteristics, you need to run the benchmarks 
with increasing levels of compute nodes. The results you show give us 
only a single dimension of scaling (300 compute nodes). What you want to 
do is run the benchmarks at 100, 200, 400, 800 and 1600 compute node 
scales. You don't need to run *all* of the different permutations of 
placement/partition/workers scenarios, of course. I'd suggest just 
running the none partition strategy and the pack placement strategy at 8 
worker processes. Those results will give you (and us!) the data points 
that will indicate the scaling behaviour of the shared-state-scheduler 
implementation proposal as the number of compute nodes in the deployment 
increases. The "none" partitioning strategy represents the reality of 
the existing scheduler implementation, which does not shard the 
deployment into partitions but retrieves all compute nodes for the 
entire deployment on every request to the scheduler's 
select_destinations() method.


Secondly, and I'm not sure if you intended this, the code in your 
compute_node.py file in the placement-bench project is not thread-safe. 
In other words, your code assumes that only a single process on each 
compute node could ever actually run the database transaction that 
inserts allocation records at any time. If you want more than a single 
process on the compute node to be able to handle claims of resources, 
you will need to modify that code to use a compare-and-update strategy, 
checking a "generation" attribute on the inventory record to ensure that 
another process on the compute node hasn't simultaneously updated the 
allocations information for that compute node.
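A minimal sketch of that compare-and-update pattern, using an in-memory 
SQLite table (table and column names here are my own, for illustration, 
not Nova's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE inventory (node TEXT PRIMARY KEY, used INT, generation INT)")
conn.execute("INSERT INTO inventory VALUES ('node1', 0, 0)")

def claim(conn, node, amount):
    # Read the current usage and the generation it was read at.
    used, gen = conn.execute(
        "SELECT used, generation FROM inventory WHERE node = ?",
        (node,)).fetchone()
    # Write back only if no concurrent claim bumped the generation
    # in the meantime.
    cur = conn.execute(
        "UPDATE inventory SET used = ?, generation = ? "
        "WHERE node = ? AND generation = ?",
        (used + amount, gen + 1, node, gen))
    conn.commit()
    # rowcount == 0 means we lost the race and must re-read and retry.
    return cur.rowcount == 1

assert claim(conn, "node1", 4)  # first claim succeeds, generation -> 1
```

A process that loses the race gets False back and simply re-reads the 
inventory row before retrying, instead of holding locks across the claim.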


Third, you have your scheduler workers consuming messages off the 
request queue using get_nowait(), while you left the original placement 
scheduler using the blocking get() call. :) Probably best to compare 
apples to apples and have them both using the blocking get() call.
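For anyone following along, the behavioural difference between the two calls 
is just this (plain stdlib illustration, not the benchmark code itself):

```python
import queue

q = queue.Queue()

# get_nowait() returns immediately, raising Empty if there is no work yet,
# so a worker loop built on it busy-polls instead of sleeping.
try:
    q.get_nowait()
except queue.Empty:
    pass  # no request available right now

# The blocking get() parks the worker until a request arrives.
q.put("schedule-request")
item = q.get(timeout=1)
```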




First, the conclusions from the result of the eventually consistent scheduler state 
simulation(i.e. rows that "do claim in compute?" = Yes):
#accuracy
1. The final decision accuracy is 100%: No resource usage will exceed the real 
capacity by examining the rationality of db records at the end of each run.


Again, with your simulation, this assumes only a single thread will ever 
attempt a claim on each compute node at any given time.



2. The schedule decision accuracy is 100% if there's only one scheduler: The successful scheduler 
decisions are all succeeded in compute nodes, thus no retries recorded, i.e. "Count of 
requests processed" = "Placement query count". See 
http://paste.openstack.org/show/488696/


Yep, no disagreement here :)


3. The schedule decision accuracy is 100% if "Partition strategy" is set to 
"modulo", no matter how many scheduler processes. See 
http://paste.openstack.org/show/488697/
#racing


Yep, modulo partitioning eliminates the race conditions when the number 
of partitions == the number of worker processes. However, this isn't 
representative of the existing scheduler system which processes every 
compute node in the deployment on every call to select_destinations().
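Modulo partitioning itself is simple to sketch (illustrative code, not the placement-bench implementation):

```python
def partition(node_ids, num_workers):
    """Assign each compute node to exactly one scheduler worker.

    Because the shards are disjoint, two workers never race to claim
    resources on the same node -- but each worker can then place
    requests only on its own slice of the deployment, unlike the
    existing scheduler, which considers every node on each call.
    """
    shards = [[] for _ in range(num_workers)]
    for node_id in node_ids:
        # The worker that owns a node is fixed by the node id alone.
        shards[node_id % num_workers].append(node_id)
    return shards
```

For example, `partition(range(8), 3)` yields `[[0, 3, 6], [1, 4, 7], [2, 5]]`: full coverage, no overlap, and therefore no contention as long as workers stay inside their own shard.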


What happens in the shared-state-scheduler approach when you want to 
scale the scheduler process out with more scheduler processes handling 
more load? What about having two scheduler processes handling the 
scheduling to the same partition (i.e. highly-available scheduling)? 
Both of these situations will introduce contention into the scheduling 
process and introduce races that will manifest themselves on the compute 
nodes instead of in the scheduler processes themselves where the total 
deadlock and retry time can be limited.



4. No races occur if there is only one scheduler process or the "Partition 
strategy" is set to "modulo", as explained by 2. and 3.


Yes, no disagreement.


5. The racing rate with multiple schedulers is extremely low using the "spread" or "random" placement strategy of the legacy filter 
scheduler: the rate is 3.0% with the "spread" strategy and 0.15% with "random"; note that this is 8 workers 
processing about 12000 requests within 20 seconds. The result is even better than the resource-provider scheduler (rows that "do claim in 

Re: [openstack-dev] [Cinder] Status of cinder-list bug delay with 1000's of volumes

2016-03-03 Thread Tom Barron


On 03/03/2016 06:38 PM, Walter A. Boring IV wrote:
> Adam,
>   As the bug shows, it was fixed in the Juno release.  The icehouse
> release is no longer supported.  I would recommend upgrading your
> deployment if possible or looking at the patch and see if it can work
> against your Icehouse codebase.
> 
> https://review.openstack.org/#/c/96548/
> 
> Walt

Actually, it looks like we also backported this one to icehouse [1] [2].

You should be able to pull it directly from upstream, or if you are
using an openstack distribution there is a good chance they will have it
in a maintenance release based on icehouse.

-- Tom

[1] https://review.openstack.org/102476
[2]
https://git.openstack.org/cgit/openstack/cinder/commit/?id=fe37a6ee1d85ce07e672f94e395edee81fac80db


> 
> On 03/03/2016 03:12 PM, Adam Lawson wrote:
>> Hey all (hi John),
>>
>> What's the status of this [1]? We're experiencing this behavior in
>> Icehouse - wondering where it was addressed and if so, when. I always
>> get confused when I look at the launchpad/review portals.
>>
>> [1] https://bugs.launchpad.net/cinder/+bug/1317606
>>
>> */
>> Adam Lawson/*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 



Re: [openstack-dev] [fuel] Fuel 9.0/Mitaka is now in Feature Freeze

2016-03-03 Thread Dmitry Borodaenko
Following feature freeze exceptions were granted, ordered by their merge
deadline. See linked emails for additional conditions attached to some of
these exceptions.

UCA, 3/10:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088309.html

Multipath disks, 3/10:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088282.html

LCM readiness for all deployment tasks, 3/15:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088310.html

HugePages, 3/16:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088292.html

Numa, 3/16:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088292.html

SR-IOV, 3/16:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088307.html

Decouple Fuel and OpenStack tasks, 3/20:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088297.html

Remove conflicting openstack module parts, 3/20:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088298.html

DPDK, 3/24:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088291.html

Unlock "Settings" Tab, 3/24:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088305.html

ConfigDB, 3/24:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088279.html

Osnailyfacter refactoring for Puppet Master compatibility, 3/24:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088308.html

All other feature changes will have to wait until Soft Code Freeze.

See IRC meeting minutes and log from #fuel-dev for more details:
http://eavesdrop.openstack.org/meetings/fuel/2016/fuel.2016-03-03-16.00.html
http://irclog.perlgeek.de/fuel-dev/2016-03-03#i_12133112

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 10:31:09PM -0800, Dmitry Borodaenko wrote:
> Feature Freeze [0] for Fuel 9.0/Mitaka is now in effect. From this
> moment and until stable/mitaka branch is created at Soft Code Freeze,
> please do not merge feature related changes that have not received a
> feature freeze exception.
> 
> [0] https://wiki.openstack.org/wiki/FeatureFreeze
> 
> We will discuss all outstanding feature freeze exception requests in our
> weekly IRC meeting tomorrow [1]. If that discussion takes longer than
> the 1 hour time slot we have booked on #openstack-meeting-alt, we'll
> move the discussion to #fuel-dev and finish it there.
> 
> [1] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
> 
> The list of exceptions requested so far is exceedingly long and it is
> likely that most of these exceptions will be rejected. If you want your
> exception to be approved, please have the following information ready
> for the meeting:
> 
> 1) Link to design spec in fuel-specs, spec review status;
> 
> 2) Links to all outstanding commits for the feature;
> 
> 3) Dependencies between your change and other features: what will be
> broken or useless if your change is not merged, what else has to be
> merged for your change to work;
> 
> 4) Analysis of impact and risks mitigation plan: which components are
> affected by the change, what can break, how can impact be verified, how
> can the change be isolated;
> 
> 5) Status of test coverage: what can be tested, what's covered by
> automated tests, what's been tested so far (with links to test results).
> 
> -- 
> Dmitry Borodaenko



Re: [openstack-dev] [Fuel] [FFE] LCM readiness for all deployment tasks

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 15.

-- 
Dmitry Borodaenko


On Tue, Mar 01, 2016 at 05:03:11PM +0100, Szymon Banka wrote:
> Hi All,
> 
> I’d like to request a Feature Freeze Exception for "LCM readiness for all 
> deployment tasks” [1] until Mar 11.
> 
> We need an additional 1.5 weeks to finish and merge the necessary changes that 
> will make tasks in Fuel idempotent. That will be the foundation that enables 
> further development of LCM features.
> 
> More details about work being done: [2]
> 
> [1] https://blueprints.launchpad.net/fuel/+spec/granular-task-lcm-readiness
> [2] https://review.openstack.org/#/q/topic:bp/granular-task-idempotency,n,z
> 
> --
> Thanks,
> Szymon Bańka
> Mirantis
> http://www.mirantis.com





Re: [openstack-dev] [Fuel][FFE] Enable UCA repositories for deployment

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 10.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 06:27:32PM +0300, Matthew Mosesohn wrote:
> Hi all,
> 
> I would like to request a feature freeze exception for "Deploy with
> UCA packages" feature.
> 
> I anticipate 2 more days to get tests green and add some depth to the
> existing test.
> 
> https://blueprints.launchpad.net/fuel/+spec/deploy-with-uca-packages
> 
> The impact to BVT stability is quite small because it only touches 1
> task in OpenStack deployment, and by default it is not enabled.
> 
> Open reviews:
> https://review.openstack.org/#/c/281762/
> https://review.openstack.org/#/c/279556/
> https://review.openstack.org/#/c/279542/
> https://review.openstack.org/#/c/284584/
> 
> Best Regards,
> Matthew Mosesohn
> 



Re: [openstack-dev] [Fuel] FFE request for osnailyfacter refactoring for Puppet Master compatibility

2016-03-03 Thread Dmitry Borodaenko
Granted conditionally, design consensus deadline March 10, merge
deadline March 16 for patches that do not conflict with
fuel-openstack-tasks and fuel-remove-conflict-openstack, March 24 for
remaining patches.

If design consensus is not reached by March 10, the exception will be
revoked.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 07:01:02AM -0700, Scott Brimhall wrote:
> This is not possible to move to 10. This is a critical feature that
> our 2 largest customers are dependent on for deployment at the end of
> May. Puppet Master flat out will not work with a Fuel deployed
> environment without doing this unless we were to create our own
> composition layer, which would leave us with two separate code bases
> to maintain.  That isn't an option and this pretty much has to happen
> in 9.
> 
> ---
> Scott Brimhall, Cloud Architect
> Mirantis - Pure Play Openstack
> 
> > On Mar 2, 2016, at 02:01, Aleksandr Didenko  wrote:
> > 
> > Hi,
> > 
> > > Merging this code is relatively non-intrusive to core Fuel Library code
> > > as it is merely re-organizing the file structure of the osnailyfacter
> > > module to be compatible with Puppet Master. 
> > 
> > It looks like super-intrusive to me. Modular manifests are,
> > actually, the core of Fuel Library. And the majority of changes we
> > introduce in Fuel Library are proposed for those manifests. So if
> > you're going to move those manifests into "osnailyfacter::*" classes
> > then it will basically conflict with the 90% of other patches for
> > Fuel Library. This may slow down development of other features as
> > well as bug fixing.
> > 
> > Also I see no patches on review and spec is not yet accepted. I
> > think starting such an intrusive feature after FF is too risky,
> > let's move it to 10.0.
> > 
> > Regards,
> > Alex
> > 
> >> On Wed, Mar 2, 2016 at 1:21 AM, Scott Brimhall  
> >> wrote:
> >> Greetings,
> >> 
> >> As you might know, we are working on integrating a 3rd party
> >> configuration management platform (Puppet Master) with Fuel.
> >> This integration will provide the capability for state enforcement
> >> and will further enable "day 2" operations of a Fuel-deployed site.
> >> We must refactor the 'osnailyfacter' module in Fuel Library to be
> >> compatible with both a masterful and masterless Puppet approach.
> >> 
> >> This change is required to enable a Puppet Master based LCM
> >> solution.
> >> 
> >> We request a FFE for this feature for 3 weeks, until Mar 24.  By that
> >> time, we will provide a tested solution in accordance with the following
> >> specifications [1].
> >> 
> >> The feature includes the following components:
> >> 
> >> 1. Refactor 'osnailyfacter' Fuel Library module to be compatible with
> >> Puppet Master by becoming a valid and compliant Puppet module.
> >> This involves moving manifests into the proper manifests directory
> >> and moving the contents into classes that can be included by Puppet
> >> Master.
> >> 2. Update deployment tasks to update their manifest path to the new
> >> location.
> >> 
> >> Merging this code is relatively non-intrusive to core Fuel Library code
> >> as it is merely re-organizing the file structure of the osnailyfacter
> >> module to be compatible with Puppet Master.  Upon updating the
> >> deployment tasks to reflect the new location of manifests, this feature
> >> remains compatible with the masterless puppet apply approach that
> >> Fuel uses while providing the ability to integrate a Puppet Master
> >> based LCM solution.
> >> 
> >> Overall, I consider this change as low risk for integrity and timeline of
> >> the release and it is a critical feature for the ability to integrate an 
> >> LCM
> >> solution using Puppet Master.
> >> 
> >> Please consider our request and share concerns so we can properly
> >> resolve them.
> >> 
> >> [1] 
> >> https://blueprints.launchpad.net/fuel/+spec/fuel-refactor-osnailyfacter-for-puppet-master-compatibility
> >> 
> >> ---
> >> Best Regards,
> >> 
> >> Scott Brimhall
> >> Systems Architect
> >> Mirantis Inc
> > 




Re: [openstack-dev] [Fuel] [FFE] Unlock Settings Tab

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 24, task history part of the feature is to
be excluded from this exception grant unless a consensus is reached by
March 10.

Relevant part of the meeting log starts at:
http://eavesdrop.openstack.org/meetings/fuel/2016/fuel.2016-03-03-16.00.log.html#l-198

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 06:00:40PM +0700, Vitaly Kramskikh wrote:
> Oh, so there is a spec. I was worried that this patch has
> "WIP-no-bprint-assigned-yet" string in the commit message, so I thought
> there was no spec for it. So the commit message should be updated to avoid
> such confusion.
> 
> It's really good I've seen this spec. There are plans to overhaul UI data
> format description which we use for cluster and node settings to solve some
> issues and implement long-awaited features like nested structures, so we
> might also want to deprecate our expression language and also switch to
> YAQL (and thus port YAQL to JS).
> 
> 2016-03-02 17:17 GMT+07:00 Vladimir Kuklin :
> 
> > Vitaly
> >
> > Thanks for bringing this up. Actually the spec has been on review for
> > almost 2 weeks: https://review.openstack.org/#/c/282695/. Essentially,
> > this is not introducing new DSL but replacing the existing one with more
> > powerful extendable language which is being actively developed within
> > OpenStack and is already a part of other projects (Murano, Mistral), which
> > has much more contributors, can return not only boolean but any arbitrary
> > collections. So it means that we want to deprecate current Expression
> > language that you wrote and replace it with YAQL due to those reasons. You
> > are not going to extend this Expression-based language in 3 weeks up to
> > level of support of extensions, method overloading, return of arbitrary
> > collections (e.g. we also want to calculate cross-depends and requires
> > fields on the fly which require for it to return list of dicts) and support
> > of this stuff on your own, are you?
> >
> > On Wed, Mar 2, 2016 at 10:09 AM, Vitaly Kramskikh  > > wrote:
> >
> >> I think it's not a part of best practices to introduce changes like
> >> https://review.openstack.org/#/c/279714/ (adding yet another DSL to the
> >> project) without a blueprint and review and discussion of the spec.
> >>
> >> 2016-03-02 2:19 GMT+07:00 Alexey Shtokolov :
> >>
> >>> Fuelers,
> >>>
> >>> I would like to request a feature freeze exception for "Unlock settings
> >>> tab" feature [0]
> >>>
> >>> This feature being combined with Task-based deployment [1] and
> >>> LCM-readiness for Fuel deployment tasks [2] unlocks Basic LCM in Fuel. We
> >>> conducted a thorough redesign of this feature and split it into several
> >>> granular changes [3]-[6] to allow users to change settings on deployed,
> >>> partially deployed, stopped or erred clusters and further run redeployment
> >>> using a particular graph (custom or calculated based on expected changes
> >>> stored in DB) and with new parameters.
> >>>
> >>> We need 3 weeks after FF to finish this feature.
> >>> Risk of not delivering it after 3 weeks is low.
> >>>
> >>> Patches on review or in progress:
> >>> 
> >>> https://review.openstack.org/#/c/284139/
> >>> https://review.openstack.org/#/c/279714/
> >>> https://review.openstack.org/#/c/286754/
> >>> https://review.openstack.org/#/c/286783/
> >>>
> >>> Specs:
> >>> https://review.openstack.org/#/c/286713/
> >>> https://review.openstack.org/#/c/284797/
> >>> https://review.openstack.org/#/c/282695/
> >>> https://review.openstack.org/#/c/284250/
> >>>
> >>>
> >>> [0] https://blueprints.launchpad.net/fuel/+spec/unlock-settings-tab
> >>> [1]
> >>> https://blueprints.launchpad.net/fuel/+spec/enable-task-based-deployment
> >>> [2]
> >>> https://blueprints.launchpad.net/fuel/+spec/granular-task-lcm-readiness
> >>> [3]
> >>> https://blueprints.launchpad.net/fuel/+spec/computable-task-fields-yaql
> >>> [4]
> >>> https://blueprints.launchpad.net/fuel/+spec/store-deployment-tasks-history
> >>> [5] https://blueprints.launchpad.net/fuel/+spec/custom-graph-execution
> >>> [6]
> >>> https://blueprints.launchpad.net/fuel/+spec/save-deployment-info-in-database
> >>>
> >>> --
> >>> ---
> >>> WBR, Alexey Shtokolov
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> Vitaly Kramskikh,
> >> Fuel UI Tech Lead,
> >> Mirantis, Inc.
> >>

Re: [openstack-dev] [kolla][security][release] Obtaining the vulnerability:managed tag

2016-03-03 Thread Jeremy Stanley
On 2016-03-03 23:57:04 + (+), Steven Dake (stdake) wrote:
[...]
> If anything in this email is wrong, feel free to correct me and
> get us on the right track.
[...]

Sounds on track to me. The goal of having some guidelines for this
was mainly just to try and avoid the VMT taking responsibility for a
project and then immediately having it become a huge burden due to
obvious latent vulnerabilities, lack of subject matter expert
developers available to triage those which do get reported, et
cetera. It's an attempt to ensure some up-front due diligence so
that we're not taking on more than we can reasonably handle, since
the VMT is by design a constrained team centrally assigning
identifiers, tracking the state of outstanding embargoes and
privately curating impact descriptions for later inclusion in public
advisories.
-- 
Jeremy Stanley




Re: [openstack-dev] [Fuel] [FFE] FF exception request for SR-IOV

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 16, feature to be marked experimental
until QA has signed off that it's fully tested and stable. 

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 01:40:26PM +0200, Aleksey Kasatkin wrote:
> > And we need to write a new patch for:
> https://blueprints.launchpad.net/fuel/+spec/nailgun-should-serialize-sriov
> 
> It is on review now: https://review.openstack.org/286704
> 
> Complete list of patches on review:
> https://review.openstack.org/#/q/status:open++branch:master+topic:bp/support-sriov
> 
> 
> 
> Aleksey Kasatkin
> 
> 
> On Tue, Mar 1, 2016 at 6:27 PM, Aleksandr Didenko 
> wrote:
> 
> > Hi,
> >
> > I'd like to to request a feature freeze exception for "Support for SR-IOV
> > for improved networking performance" feature [0].
> >
> > Part of this feature is already merged [1]. We have the following patches
> > in work / on review:
> >
> > https://review.openstack.org/280782
> > https://review.openstack.org/284603
> > https://review.openstack.org/286633
> >
> > And we need to write a new patch for:
> > https://blueprints.launchpad.net/fuel/+spec/nailgun-should-serialize-sriov
> >
> > We need 2 weeks at most after FF to accomplish this.
> > Risk of not delivering it after 2 weeks is very low.
> >
> > Regards,
> > Alex
> >
> > [0] https://blueprints.launchpad.net/fuel/+spec/support-sriov
> > [1]
> > https://review.openstack.org/#/q/status:merged+branch:master+topic:bp/support-sriov
> >
> >
> >
> >





Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?

2016-03-03 Thread Fox, Kevin M
Ok. Thanks for the info.

Kevin

From: Brandon Logan [brandon.lo...@rackspace.com]
Sent: Thursday, March 03, 2016 2:42 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?

Just for clarity, V2 did not reuse tables, all the tables it uses are
only for it.  The main problem is that v1 and v2 both have a pools
resource, but v1 and v2's pool resource have different attributes.  With
the way neutron wsgi works, if both v1 and v2 are enabled, it will
combine both sets of attributes into the same validation schema.
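The collision can be sketched with toy attribute maps (the names below are illustrative stand-ins, not the real neutron extension definitions):

```python
# Hypothetical, simplified attribute maps for two extensions that both
# define a "pools" resource -- not the actual LBaaS v1/v2 definitions.
V1_POOL_ATTRS = {"lb_method": {}, "subnet_id": {}}
V2_POOL_ATTRS = {"lb_algorithm": {}, "listener_id": {}}

def combined_schema(extension_attr_maps):
    """Mimic the merge: same-named resources from every enabled
    extension contribute attributes to one validation schema."""
    schema = {}
    for attrs in extension_attr_maps:
        schema.update(attrs)
    return schema

merged = combined_schema([V1_POOL_ATTRS, V2_POOL_ATTRS])
# A v2 pool request is now validated against a schema that also
# contains v1-only attributes (and vice versa) -- the conflict
# described above when both API versions are enabled at once.
```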

The other problem with v1 and v2 running together was only occurring
when the v1 agent driver and v2 agent driver were both in use at the
same time.  This may actually have been fixed with some agent updates in
neutron, since that is common code.  It needs to be tested out though.

Thanks,
Brandon

On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
> Just because you had thought no one was using it outside of a PoC doesn't 
> mean folks aren't using it in production.
>
> We would be happy to migrate to Octavia. We were planning on doing just that 
> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
> an unfortunate decision that blocked that activity.
>
> We're still trying to figure out a path forward.
>
> We have an outage window next month. after that, it could be about 6 months 
> before we could try a migration due to production load picking up for a 
> while. I may just have to burn out all the lb's switch to v2, then rebuild 
> them by hand in a marathon outage :/
>
> And then there's this thingy that also critically needs fixing:
> https://bugs.launchpad.net/neutron/+bug/1457556
>
> Thanks,
> Kevin
> 
> From: Eichberger, German [german.eichber...@hpe.com]
> Sent: Thursday, March 03, 2016 12:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>
> Kevin,
>
>  If we are offering a migration tool it would be namespace -> namespace (or 
> maybe Octavia since [1]) — given the limitations nobody should be using the 
> namespace driver outside a PoC so I am a bit confused why customers can’t 
> self-migrate. With 3rd party LBs I would assume vendors providing those scripts 
> to make sure their particular hardware works with those. If you indeed need a 
> migration from LBaaS V1 namespace -> LBaaS V2 namespace/Octavia please file 
> an RfE with your use case so we can discuss it further…
>
> Thanks,
> German
>
> [1] https://review.openstack.org/#/c/286380
>
> From: "Fox, Kevin M" >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Date: Wednesday, March 2, 2016 at 5:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>
> no removal without an upgrade path. I've got v1 LB's and there still isn't a 
> migration script to go from v1 to v2.
>
> Thanks,
> Kevin
>
>
> 
> From: Stephen Balukoff [sbaluk...@bluebox.net]
> Sent: Wednesday, March 02, 2016 4:49 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>
> I am also on-board with removing LBaaS v1 as early as possible in the Newton 
> cycle.
>
> On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
> > wrote:
> Thank you all for your response.
>
> In my opinion given that UI/HEAT will make Mitaka and will have one cycle to 
> mature, it makes sense to remove LBaaS v1 in Newton.
> Do we want do discuss an upgrade process in the summit?
>
> -Sam.
>
>
> From: Bryan Jones [mailto:jone...@us.ibm.com]
> Sent: Wednesday, March 02, 2016 5:54 PM
> To: 
> openstack-dev@lists.openstack.org
>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?
>
> And as for the Heat support, the resources have made Mitaka, with additional 
> functional tests on the way soon.
>
> blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
> gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
> BRYAN M. JONES
> Software Engineer - OpenStack Development
> Phone: 1-507-253-2620
> E-mail: jone...@us.ibm.com
>
>
> - Original message -
> From: Justin Pomeroy 
> >
> To: 
> 

Re: [openstack-dev] [Fuel] [FFE] Component registry improvements

2016-03-03 Thread Dmitry Borodaenko
Denied.

A fairly large patch with potentially intrusive refactoring that is not
required for any other features. We can safely postpone this until
Newton.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 11:05:52AM +0200, Andriy Popovych wrote:
> Hi,
> 
> I would like to request a feature freeze exception for "Component
> registry improvements" feature [0]
> 
> It's related only with components and hasn't any impact on other Fuel
> parts. We have 2 patches [1], [2] which currently on review but
> blocked with CI issue due separation of fuel-web and fuel-ui parts.
> 
> We need no more than 1 week to finish review.
> 
> [0] 
> https://blueprints.launchpad.net/fuel/+spec/component-registry-improvements
> [1] https://review.openstack.org/#/c/282911/
> [2] https://review.openstack.org/#/c/286547/
> 



Re: [openstack-dev] [Fuel][Upgrade][FFE] Reassigning Nodes without Re-Installation

2016-03-03 Thread Dmitry Borodaenko
Denied.

This came in very late (patch remained in WIP until 1 day before FF),
covers a corner case, there was not enough risk analysis, it wasn't
represented in the IRC meeting earlier today, and the spec for the
high-level feature is sitting with a -1 from fuel-python component lead
since 1.5 weeks ago.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 12:02:17AM -0600, Ilya Kharin wrote:
> I'd like to request a feature freeze exception for Reassigning Nodes
> without Re-Installation [1].
> 
> This feature is very important to several upgrade strategies that re-deploy
> control plane nodes, alongside of re-using some already deployed nodes,
> such as computes nodes or storage nodes. These changes affect only the
> upgrade part of Nailgun that mostly implemented in the cluster_upgrade
> extension and do not affect both the provisioning and the deployment.
> 
> I need one week to finish implementation and testing.
> 
> [1] https://review.openstack.org/#/c/280067/ (review in progress)
> 
> Best regards,
> Ilya Kharin.
> Mirantis, Inc.





Re: [openstack-dev] [kolla][security][release] Obtaining the vulnerability:managed tag

2016-03-03 Thread Steven Dake (stdake)
Tristan,

Flying a bit by the seat of my pants here.  I can't find a simple
check-list of how exactly you get a project managed by the VMT :)  If
anything in this email is wrong, feel free to correct me and get us on the
right track.

The kolla-coresec team consists of the following folks:
Martin Andre
Steven Dake

Ryan Hallisey
Michal Jastrzebski
Michal Rostecki
Sam Yaple

That is one more person than the guidelines recommend, but they are
guidelines not hard and fast rules.  I was not able to include everyone
that asked to be included.  I'd ask for these folks to be active on the
bug triage for security for Kolla.

The next step is for us to locate a security expert to do a security audit
of the codebase including potential security issues with how we use
dependencies.  I'll be reaching out to the security team for guidance, but
have someone in mind (Dave Mccowan) who is a security expert and knows a
bit about containers and Kolla as well :)  If the security team would find
this acceptable and Dave would as well, we can proceed down that path, or
we could take recommendations from the security team instead.  Also Red
Hat has a great infosec team that audits every bit of code that goes into
Red Hat products, so perhaps Ryan or Mandre can reach out to them to audit
our code base in their copious spare time. :)

If the security audit turns up anything existing in the code base, we will
have to fix the bugs and attach them to the bug triage tool as * PRIVATE *
bugs and attachments.  I'll be seeking more guidance from the security
team as to how to proceed prior to, during, and after ODS.  The long-term
goal is to obtain the vulnerability:managed tag in the governance repo.
After that is achieved, this kolla-coresec team will still be responsible
for fixing problems found in the codebase and working with the OpenStack
VMT (vulnerability management team) to release the changes in a
synchronized fashion.

Regards,
-steve

On 3/1/16, 12:11 PM, "Steven Dake (stdake)"  wrote:

>
>
>On 3/1/16, 10:47 AM, "Tristan Cacqueray"  wrote:
>
>>On 03/01/2016 05:12 PM, Ryan Hallisey wrote:
>>> Hello,
>>> 
>>> I have experience writing selinux policy. My plan was to write the
>>>selinux policy for Kolla in the next cycle.  I'd be interested in
>>>joining if that fits the criteria here.
>>> 
>>
>>Hello Ryan,
>>
>>While knowing how to write SELinux policy is a great asset for a coresec
>>team member, it's not a requirement. Such a team's purpose isn't to
>>implement core security features, but rather to be responsive to private
>>security bugs: confirming the issue and discussing the scope of any
>>vulnerability along with potential solutions.
>>
>>
>>
>>> Thanks,
>>> -Ryan
>>> 
>>> - Original Message -
>>> From: "Steven Dake (stdake)" 
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>>
>>> Sent: Tuesday, March 1, 2016 11:55:55 AM
>>> Subject: [openstack-dev] [kolla][security] Obtaining
>>>the  vulnerability:managed tag
>>> 
>>> Core reviewers,
>>> 
>>> Please review this document:
>>> 
>>>https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
>>> 
>>> It describes how vulnerability management is handled at a high level
>>>for Kolla. When we are ready, I want the kolla delivery repos
>>>vulnerabilities to be managed by the VMT team. By doing this, we
>>>standardize with other OpenStack processes for handling security
>>>vulnerabilities.
>>> 
>>For reference, the full process is described here:
>>https://security.openstack.org/vmt-process.html
>>
>>> The first step is to form a kolla-coresec team, and create a separate
>>>kolla-coresec tracker. I have already created the tracker for
>>>kolla-coresec and the kolla-coresec team in launchpad:
>>> 
>>> https://launchpad.net/~kolla-coresec
>>> 
>>> https://launchpad.net/kolla-coresec
>>> 
>>> I have a history of security expertise, and the PTL needs to be on the
>>>team as an escalation point as described in the VMT tagging document
>>>above. I also need 2-3 more volunteers to join the team. You can read
>>>the requirements of the job duties in the vulnerability:managed tag.
>>> 
>>> If you're interested in joining the VMT team, please respond on this
>>>thread. If there are more than 4 individuals interested in joining this
>>>team, I will form the team from the most active members based upon
>>>liberty + mitaka commits, reviews, and PDE spent.
>>> 
>>Note that the VMT team is global to OpenStack; I guess you are referring
>>to the Kolla VMT team (now known as kolla-coresec).
>
>Yes that is correct.  Thanks Tristan for clarifying.
>>
>>
>>Regards,
>>-Tristan
>>
>>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Cinder] Status of cinder-list bug delay with 1000's of volumes

2016-03-03 Thread Walter A. Boring IV

Adam,
  As the bug shows, this was fixed in the Juno release.  The Icehouse
release is no longer supported.  I would recommend upgrading your
deployment if possible, or looking at the patch to see whether it can be
backported to your Icehouse codebase.


https://review.openstack.org/#/c/96548/

Walt

On 03/03/2016 03:12 PM, Adam Lawson wrote:

Hey all (hi John),

What's the status of this [1]? We're experiencing this behavior in 
Icehouse - wondering where it was addressed and if so, when. I always 
get confused when I look at the launchpad/review portals.


[1] https://bugs.launchpad.net/cinder/+bug/1317606

*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072




[openstack-dev] [cinder] Nested quota -1 limit race condition

2016-03-03 Thread Ryan McNair
Hey all,

Nested quotas have officially started fighting back - while writing Tempest
tests for the nested quota support [1], I hit a race condition related to
the nested quota -1 support that has me stumped. I opened a bug for it [2]
with more details, but basically the issue occurs if you change a
limit to or from -1 for a project while at the same time creating volumes
in the project (or in its subtree, if a child is also using -1).

I'm trying to figure out how best to handle this. Suggestions for a fix,
thoughts on how critical this situation is, whether we should disable -1
support for now, etc. - all input is welcome.
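For concreteness, the check-then-act interleaving described above can be sketched as follows. This is a hypothetical illustration with invented names (`QuotaStore` and friends), not Cinder's actual quota code:

```python
# Hypothetical sketch of the race: a limit update lands between another
# transaction's read of the limit and its commit of a reservation.

class QuotaStore:
    def __init__(self, limit, in_use=0):
        self.limit = limit          # -1 means "unlimited"
        self.in_use = in_use

    def read_limit(self):
        return self.limit

    def set_limit(self, new_limit):
        self.limit = new_limit

    def commit_reservation(self, n):
        self.in_use += n


store = QuotaStore(limit=-1)

# Transaction 1 (a volume create) reads the limit first and sees -1,
# so it decides no enforcement is needed...
seen_limit = store.read_limit()

# ...then transaction 2 changes the limit from -1 to a hard cap of 0.
store.set_limit(0)

# Transaction 1 continues with the stale value it read, so the
# reservation commits even though the current limit forbids it.
if seen_limit == -1 or store.in_use + 1 <= seen_limit:
    store.commit_reservation(1)

print(store.limit, store.in_use)    # 0 1 -> usage now exceeds the limit
```

The same window exists in the other direction (a limit moving from a finite value to -1 while reservations are in flight on the subtree), which is why one candidate fix is serializing limit updates against in-flight reservations.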

Thanks,
Ryan McNair (mc_nair)

[1] https://review.openstack.org/#/c/285640/2
[2] https://bugs.launchpad.net/cinder/+bug/1552944


Re: [openstack-dev] [Fuel][FFE] Remove conflicting openstack module parts

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline 3/20.

-- 
Dmitry Borodaenko

On Tue, Mar 01, 2016 at 11:26:57PM +, Andrew Woodward wrote:
> I'd like to request a feature freeze exception for the Remove conflicting
> openstack module parts feature [0]
> 
> This is necessary to make the feature Decouple Fuel and OpenStack tasks
> feature useful [1] , some of the patches are ready for review and some
> still need to be written [2]
> 
> We need 2 - 3 weeks after FF to finish this feature. Risk of not delivering
> it after 3 weeks is low.
> 
> [0]
> https://blueprints.launchpad.net/fuel/+spec/fuel-remove-conflict-openstack
> [1] https://blueprints.launchpad.net/fuel/+spec/fuel-openstack-tasks
> [2]
> https://review.openstack.org/#/q/topic:bp/fuel-remove-conflict-openstack,n,z
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community



Re: [openstack-dev] [Fuel][FFE] Decouple Fuel and OpenStack tasks

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline 3/20.

-- 
Dmitry Borodaenko


On Tue, Mar 01, 2016 at 10:07:55PM +, Andrew Woodward wrote:
> I'd like to request a feature freeze exception for Decouple Fuel and
> OpenStack tasks feature [0].
> 
> While the code change [1] is ready and usually passing CI we have too much
> churn in the tasks currently which puts the patch set in conflict
> constantly so it has to be rebased multiple times a day.
> 
> We need more review and feedback on the change, and a quiet period to merge
> it
> 
> [0] https://blueprints.launchpad.net/fuel/+spec/fuel-openstack-tasks
> [1] https://review.openstack.org/#/c/283332/
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community



Re: [openstack-dev] [Fuel][FFE] FF exception request for non-root accounts on slave nodes

2016-03-03 Thread Dmitry Borodaenko
Denied.

Most of this feature has landed before FF as expected, the rest can wait
until Newton. At least, operators who want to disable root access to
target nodes are now able to do so, with some exceptions and some
additional manual tweaking that we should clean up in the next release.
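The remaining step of disabling root SSH access could look roughly like the helper below; this is purely illustrative (the actual patches referenced in the quoted list, e.g. [9], may do this differently, for instance via Puppet):

```python
# Illustrative sketch: force PermitRootLogin to "no" in an
# sshd_config-style text. Paths and mechanism are assumptions.

def disable_root_ssh(config_text):
    """Return config_text with PermitRootLogin forced to 'no'."""
    lines, found = [], False
    for line in config_text.splitlines():
        if line.strip().lower().startswith("permitrootlogin"):
            lines.append("PermitRootLogin no")   # rewrite the directive
            found = True
        else:
            lines.append(line)
    if not found:                                # directive absent: append it
        lines.append("PermitRootLogin no")
    return "\n".join(lines) + "\n"

sample = "Port 22\nPermitRootLogin yes\nUseDNS no\n"
print(disable_root_ssh(sample))
```

An operator would apply the rewritten text to /etc/ssh/sshd_config and reload sshd; access to target nodes then goes through the provisioned non-root account plus sudo.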

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 12:28:31AM +0300, Dmitry Nikishov wrote:
> Hello,
> 
> I'd like to request a FF exception for "Run Fuel slave nodes as non-root"
> feature[1].
> 
> Current status:
> larger part of the feature is already merged[2] and some more
> patches[3][4][5][6] are expected to land before the FF.
> 
> When these patches are in the master, Fuel 9.0 will be able to create
> non-root accounts on target nodes, however, root SSH will still be enabled.
> To change that we'll need actually to
> - Fix fuel-qa to be able to use non-root accounts [7].
> - Fix ceph deployment by either merging [8] or waiting for community ceph
> module support.
> - Disable root SSH [9].
> 
> For that, we need 2.5 weeks after the FF to finish the feature. Risk of not
> delivering the feature after 2.5 weeks is low.
> 
> Thanks.
> 
> [1] https://blueprints.launchpad.net/fuel/+spec/fuel-nonroot-openstack-nodes
> [2]
> https://review.openstack.org/#/q/status:merged+topic:bp-fuel-nonsuperuser
> [3] https://review.openstack.org/258200
> [4] https://review.openstack.org/284682
> [5] https://review.openstack.org/285299
> [6] https://review.openstack.org/258671
> [7] https://review.openstack.org/281776
> [8] https://review.openstack.org/278953
> [9] https://review.openstack.org/278954
> -- 
> Dmitry Nikishov,
> Deployment Engineer,
> Mirantis, Inc.



[openstack-dev] [Cinder] Status of cinder-list bug delay with 1000's of volumes

2016-03-03 Thread Adam Lawson
Hey all (hi John),

What's the status of this [1]? We're experiencing this behavior in Icehouse
- wondering where it was addressed and if so, when. I always get confused
when I look at the launchpad/review portals.

[1] https://bugs.launchpad.net/cinder/+bug/1317606


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


Re: [openstack-dev] [Fuel][FFE] Multi-release packages

2016-03-03 Thread Dmitry Borodaenko
Denied.

This change is likely to require a Nailgun DB schema change, and can
benefit from more design discussions.

-- 
Dmitry Borodaenko

On Tue, Mar 01, 2016 at 11:00:24PM +0300, Alexey Shtokolov wrote:
> Fuelers,
> 
> I would like to request a feature freeze exception for "Multi-release
> packages" feature [0][1]
> 
> This feature extends already existing multi-release support in Fuel Plugins.
> We would like to allow plugin developers to specify all plugin components
> per release and distribute different deployment graphs, partitioning
> schemas, network and node roles for each release in one package.
> 
> This feature is not blocker for us, but it provides very important
> improvement of users and plugin developers experience.
> 
> We need 3 weeks after FF to finish this feature.
> Risk of not delivering it after 3 weeks is low.
> 
> [0] https://blueprints.launchpad.net/fuel/+spec/plugins-v5
> [1] https://review.openstack.org/#/c/271417
> ---
> WBR, Alexey Shtokolov



Re: [openstack-dev] [Fuel][FFE] FF exception request for Numa and CPU pinning

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 16, feature to be marked experimental
until QA has signed off that it's fully tested and stable.

-- 
Dmitry Borodaenko


On Tue, Mar 01, 2016 at 10:23:08PM +0300, Dmitry Klenov wrote:
> Hi,
> 
> I'd like to to request a feature freeze exception for "Add support for
> NUMA/CPU pinning features" feature [0].
> 
> Part of this feature is already merged [1]. We have the following patches
> in work / on review:
> 
> https://review.openstack.org/#/c/281802/
> https://review.openstack.org/#/c/285282/
> https://review.openstack.org/#/c/284171/
> https://review.openstack.org/#/c/280624/
> https://review.openstack.org/#/c/280115/
> https://review.openstack.org/#/c/285309/
> 
> No new patches are expected.
> 
> We need 2 weeks after FF to finish this feature.
> Risk of not delivering it after 2 weeks is low.
> 
> Regards,
> Dmitry
> 
> [0] https://blueprints.launchpad.net/fuel/+spec/support-numa-cpu-pinning
> [1]
> https://review.openstack.org/#/q/status:merged+topic:bp/support-numa-cpu-pinning



Re: [openstack-dev] [Fuel][FFE] FF exception request for HugePages

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 16, feature to be marked experimental
until QA has signed off that it's fully tested and stable.

-- 
Dmitry Borodaenko


On Tue, Mar 01, 2016 at 10:23:06PM +0300, Dmitry Klenov wrote:
> Hi,
> 
> I'd like to to request a feature freeze exception for "Support for Huge
> pages for improved performance" feature [0].
> 
> Part of this feature is already merged [1]. We have the following patches
> in work / on review:
> 
> https://review.openstack.org/#/c/286628/
> https://review.openstack.org/#/c/282367/
> https://review.openstack.org/#/c/286495/
> 
> And we need to write new patches for the following parts of this feature:
> https://blueprints.launchpad.net/fuel/+spec/support-hugepages
> 
> We need 1.5 weeks after FF to finish this feature.
> Risk of not delivering it after 1.5 weeks is low.
> 
> Regards,
> Dmitry
> 
> [0] https://blueprints.launchpad.net/fuel/+spec/support-hugepages
> [1]
> https://review.openstack.org/#/q/status:merged+topic:bp/support-hugepages



Re: [openstack-dev] [Fuel] [FFE] FF exception request for DPDK

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 24, feature to be marked experimental
until QA has signed off that it's fully tested and stable.

-- 
Dmitry Borodaenko


On Thu, Mar 03, 2016 at 05:27:11PM +0300, Vladimir Eremin wrote:
> Hi,
> 
> All patches for DPDK feature [1] are on review, new patches are not expected:
> * https://review.openstack.org/286611 Support for DPDK enablement on node 
> interfaces
> * https://review.openstack.org/284283 Add DPDK support for node interfaces
> 
> For DPDK bonding [2]:
> * https://review.openstack.org/287410 Added dpdkovs provider for bond
> 
> For network verifications [3]:
> * https://review.openstack.org/287806 Network verification for DPDK enabled 
> interfaces
> 
> [1]: https://blueprints.launchpad.net/fuel/+spec/support-dpdk
> [2]: https://blueprints.launchpad.net/fuel/+spec/support-dpdk-bond
> [3]: https://blueprints.launchpad.net/fuel/+spec/network-verification-dpdk
> 
> -- 
> With best regards,
> Vladimir Eremin,
> Fuel Deployment Engineer,
> Mirantis, Inc.
> 
> 
> 
> > On Mar 1, 2016, at 7:27 PM, Aleksandr Didenko  wrote:
> > 
> > Hi,
> > 
> > I'd like to to request a feature freeze exception for "Support for DPDK for 
> > improved networking performance" feature [0].
> > 
> > Part of this feature is already merged [1]. We have the following patches 
> > in work / on review:
> > 
> > https://review.openstack.org/281827
> > https://review.openstack.org/283044
> > https://review.openstack.org/286595
> > https://review.openstack.org/284285
> > https://review.openstack.org/284283
> > https://review.openstack.org/286611
> > 
> > And we need to write new patches for the following parts of this feature:
> > https://blueprints.launchpad.net/fuel/+spec/network-verification-dpdk
> > https://blueprints.launchpad.net/fuel/+spec/support-dpdk-bond
> > 
> > We need 3 weeks after FF to finish this feature.
> > Risk of not delivering it after 3 weeks is low.
> > 
> > Regards,
> > Alex
> > 
> > [0] https://blueprints.launchpad.net/fuel/+spec/support-dpdk
> > [1] 
> > https://review.openstack.org/#/q/status:merged+branch:master+topic:bp/support-dpdk


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-03 Thread Brandon Logan
Just for clarity, v2 did not reuse tables; all the tables it uses are
its own.  The main problem is that v1 and v2 both have a pools
resource, but the v1 and v2 pool resources have different attributes.  With
the way the neutron wsgi layer works, if both v1 and v2 are enabled, it
combines both sets of attributes into the same validation schema.
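A toy illustration of that collision (invented attribute names, not the real Neutron extension maps): when both extensions register a "pools" collection, a dict-style merge folds their different schemas into one.

```python
# Illustrative sketch (not actual Neutron code) of why enabling LBaaS v1
# and v2 together is a problem: both register a "pools" collection, and a
# naive merge combines their different attribute schemas.

V1_RESOURCE_ATTRIBUTE_MAP = {
    "pools": {
        "lb_method": {"allow_post": True, "is_visible": True},
        "vip_id": {"allow_post": False, "is_visible": True},
    }
}

V2_RESOURCE_ATTRIBUTE_MAP = {
    "pools": {
        "lb_algorithm": {"allow_post": True, "is_visible": True},
        "listener_id": {"allow_post": True, "is_visible": True},
    }
}

RESOURCE_ATTRIBUTE_MAP = {}
for ext_map in (V1_RESOURCE_ATTRIBUTE_MAP, V2_RESOURCE_ATTRIBUTE_MAP):
    for collection, attrs in ext_map.items():
        # the second extension silently merges into the same collection
        RESOURCE_ATTRIBUTE_MAP.setdefault(collection, {}).update(attrs)

# Validation for "pools" now mixes v1 and v2 attributes, so a v1 request
# is checked against v2 fields and vice versa.
print(sorted(RESOURCE_ATTRIBUTE_MAP["pools"]))
# ['lb_algorithm', 'lb_method', 'listener_id', 'vip_id']
```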

The other problem with v1 and v2 running together was only occurring
when the v1 agent driver and v2 agent driver were both in use at the
same time.  This may actually have been fixed with some agent updates in
neutron, since that is common code.  It needs to be tested out though.

Thanks,
Brandon

On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
> Just because you had thought no one was using it outside of a PoC doesn't 
> mean folks aren't using it in production.
> 
> We would be happy to migrate to Octavia. We were planning on doing just that 
> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
> an unfortunate decision that blocked that activity.
> 
> We're still trying to figure out a path forward.
> 
> We have an outage window next month. After that, it could be about 6 months 
> before we could try a migration due to production load picking up for a 
> while. I may just have to burn down all the LBs, switch to v2, then rebuild 
> them by hand in a marathon outage :/
> 
> And then there's this thingy that also critically needs fixing:
> https://bugs.launchpad.net/neutron/+bug/1457556
> 
> Thanks,
> Kevin
> 
> From: Eichberger, German [german.eichber...@hpe.com]
> Sent: Thursday, March 03, 2016 12:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
> 
> Kevin,
> 
>  If we are offering a migration tool it would be namespace -> namespace (or 
> maybe Octavia since [1]) — given the limitations nobody should be using the 
> namespace driver outside a PoC so I am a bit confused why customers can’t 
> self migrate. With 3rd party Lbs I would assume vendors proving those scripts 
> to make sure their particular hardware works with those. If you indeed need a 
> migration from LBaaS V1 namespace -> LBaaS V2 namespace/Octavia please file 
> an RfE with your use case so we can discuss it further…
> 
> Thanks,
> German
> 
> [1] https://review.openstack.org/#/c/286380
> 
> From: "Fox, Kevin M" >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Date: Wednesday, March 2, 2016 at 5:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
> 
> no removal without an upgrade path. I've got v1 LB's and there still isn't a 
> migration script to go from v1 to v2.
> 
> Thanks,
> Kevin
> 
> 
> 
> From: Stephen Balukoff [sbaluk...@bluebox.net]
> Sent: Wednesday, March 02, 2016 4:49 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
> 
> I am also on-board with removing LBaaS v1 as early as possible in the Newton 
> cycle.
> 
> On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
> > wrote:
> Thank you all for your response.
> 
> In my opinion given that UI/HEAT will make Mitaka and will have one cycle to 
> mature, it makes sense to remove LBaaS v1 in Newton.
> Do we want do discuss an upgrade process in the summit?
> 
> -Sam.
> 
> 
> From: Bryan Jones [mailto:jone...@us.ibm.com]
> Sent: Wednesday, March 02, 2016 5:54 PM
> To: 
> openstack-dev@lists.openstack.org
> 
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
> 
> And as for the Heat support, the resources have made Mitaka, with additional 
> functional tests on the way soon.
> 
> blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
> gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
> BRYAN M. JONES
> Software Engineer - OpenStack Development
> Phone: 1-507-253-2620
> E-mail: jone...@us.ibm.com
> 
> 
> - Original message -
> From: Justin Pomeroy 
> >
> To: 
> openstack-dev@lists.openstack.org
> Cc:
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
> Date: Wed, Mar 2, 2016 9:36 AM
> 
> As for the horizon support, much of it will make Mitaka.  See the 

Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-03 Thread Aleksandra Fedorova
As we agreed, we have switched ISO builds to latest CentOS 7.2 snapshots.

You can see now that each ISO build (see for ex. [1]) produces several
*_id.txt artifacts.
Note that centos_mirror_id.txt points to CentOS snapshot at
http://mirror.fuel-infra.org/pkgs/

BVT test is stable, see [2], and nightly system tests results are
basically the same as they were before.


[1] https://ci.fuel-infra.org/job/9.0-community.all/3868/
[2]
https://ci.fuel-infra.org/view/ISO/job/9.0.fuel_community.ubuntu.bvt_2/

On Thu, Mar 3, 2016 at 3:46 AM, Dmitry Borodaenko
 wrote:
> Thanks for the detailed explanation, very helpful!
>
> Considering that this change is atomic and easily revertable, lets
> proceed with the change, the sooner we do that the more time we'll have
> to confirm that there is no impact and revert if necessary.
>
> --
> Dmitry Borodaenko
>
> On Thu, Mar 03, 2016 at 03:40:22AM +0300, Aleksandra Fedorova wrote:
>> Hi,
>>
>> let me add some details about the change:
>>
>> 1) There are two repositories used to build Fuel ISO: base OS
>> repository [1], and mos repository [2], where we put Fuel dependencies
>> and packages we rebuild due to certain version requirements.
>>
>> The CentOS 7.2 feature is related to the upstream repo only. Packages
>> like RabbitMQ, MCollective, Puppet, MySQL and PostgreSQL live in mos
>> repository, which has higher priority then upstream.
>>
>> I think we need to setup a separate discussion about our policy
>> regarding these packages, but for now they are fixed and won't be
>> updated by CentOS 7.2 switch.
>>
>> 2) This change doesn't affect Fuel codebase.
>>
>> The upstream mirror we use for ISO build is controlled by environment
>> variable which is set via Jenkins [3] and can be changed anytime.
>>
>> As we have daily snapshots of CentOS repository available at [4], in
>> case of regression in upstream we can pin our builds to stable
>> snapshot and work on the issue without blocking the main development
>> flow.
>
> Please make sure that the current snapshot of CentOS 7.1 is not rotated
> away so that we don't lose the point we can revert to.
>
>> 3) The "improve snapshotting" work item which is at the moment in
>> progress, will prevent any possibility to "accidentally" migrate to
>> CentOS 7.3, when it becomes available.
>> Thus the only changes which we can fetch from upstream are changes
>> which are published to updates/ component of CentOS 7.2 repo.
>>
>>
>> As latest BVT on master is green
>>https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/69/
>> I think we should proceed with Jenkins reconfiguration [5] and switch
>> to latest snapshots by default.
>>
>> [1] currently http://vault.centos.org/7.1.1503/
>> [2] 
>> http://mirror.fuel-infra.org/mos-repos/centos/mos9.0-centos7-fuel/os/x86_64/
>> [3] 
>> https://github.com/fuel-infra/jenkins-jobs/blob/76b5cdf1828b7db1957f7967180d20be099b0c63/common/scripts/all.sh#L84
>> [4] http://mirror.fuel-infra.org/pkgs/
>> [5] https://review.fuel-infra.org/#/c/17712/
>>
>> On Wed, Mar 2, 2016 at 9:22 PM, Mike Scherbakov
>>  wrote:
>> > It is not just about BVT. I'd suggest to monitor situation overall,
>> > including failures of system tests [1]. If we see regressions there, or
>> > some test cases start flapping (which is even worse), then we'd have to
>> > revert back to CentOS 7.1.
>> >
>> > [1] https://github.com/openstack/fuel-qa
>> >
>> > On Wed, Mar 2, 2016 at 10:16 AM Dmitry Borodaenko 
>> > 
>> > wrote:
>> >>
>> >> I agree with Mike's concerns, and propose to make these limitations (4
>> >> weeks before FF for OS upgrades, 2 weeks for upgrades of key
>> >> dependencies -- RabbitMQ, MCollective, Puppet, MySQL, PostgreSQL,
>> >> anything else?) official for 10.0/Newton.
>> >>
>> >> For 9.0/Mitaka, it is too late to impose them, so we just have to be
>> >> very careful and conservative with this upgrade. First of all, we need
>> >> to have a green BVT before and after switching to the CentOS 7.2 repo
>> >> snapshot, so while I approved the spec, we can't move forward with this
>> >> until BVT is green again, and right now it's red:
>> >>
>> >> https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/
>> >>
>> >> If we get it back to green but it becomes red after the upgrade, you
>> >> must switch back to CentOS 7.1 *immediately*. If you are able to stick
>> >> to this plan, there is still time to complete the transition today
>> >> without requiring an FFE.
>> >>
>> >> --
>> >> Dmitry Borodaenko
>> >>
>> >>
>> >> On Wed, Mar 02, 2016 at 05:53:53PM +, Mike Scherbakov wrote:
>> >> > Formally, we can merge it today. Historically, every update of OS caused
>> >> > us
>> >> > instability for some time: from days to a couple of months.
>> >> > Taking this into account and number of other exceptions requested,
>> >> > overall
>> >> > stability of code, my opinion would be to postpone this to 10.0.
>> >> >
>> >> > Also, 

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-03 Thread Fox, Kevin M
Just because you had thought no one was using it outside of a PoC doesn't mean 
folks aren't using it in production.

We would be happy to migrate to Octavia. We were planning on doing just that by 
running both v1 with haproxy namespace, and v2 with Octavia and then pick off 
upgrading lb's one at a time, but the reuse of the v1 tables really was an 
unfortunate decision that blocked that activity.

We're still trying to figure out a path forward.

We have an outage window next month. After that, it could be about 6 months 
before we could try a migration due to production load picking up for a while. 
I may just have to burn down all the LBs, switch to v2, then rebuild them by 
hand in a marathon outage :/

And then there's this thingy that also critically needs fixing:
https://bugs.launchpad.net/neutron/+bug/1457556

Thanks,
Kevin

From: Eichberger, German [german.eichber...@hpe.com]
Sent: Thursday, March 03, 2016 12:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Kevin,

 If we are offering a migration tool it would be namespace -> namespace (or 
maybe Octavia since [1]) — given the limitations nobody should be using the 
namespace driver outside a PoC so I am a bit confused why customers can’t self 
migrate. With 3rd-party LBs I would assume vendors providing those scripts to 
make sure their particular hardware works with those. If you indeed need a 
migration from LBaaS V1 namespace -> LBaaS V2 namespace/Octavia please file an 
RfE with your use case so we can discuss it further…

Thanks,
German

[1] https://review.openstack.org/#/c/286380

From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, March 2, 2016 at 5:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

no removal without an upgrade path. I've got v1 LB's and there still isn't a 
migration script to go from v1 to v2.

Thanks,
Kevin



From: Stephen Balukoff [sbaluk...@bluebox.net]
Sent: Wednesday, March 02, 2016 4:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

I am also on-board with removing LBaaS v1 as early as possible in the Newton 
cycle.

On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
> wrote:
Thank you all for your response.

In my opinion given that UI/HEAT will make Mitaka and will have one cycle to 
mature, it makes sense to remove LBaaS v1 in Newton.
Do we want do discuss an upgrade process in the summit?

-Sam.


From: Bryan Jones [mailto:jone...@us.ibm.com]
Sent: Wednesday, March 02, 2016 5:54 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

And as for the Heat support, the resources have made Mitaka, with additional 
functional tests on the way soon.

blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
BRYAN M. JONES
Software Engineer - OpenStack Development
Phone: 1-507-253-2620
E-mail: jone...@us.ibm.com


- Original message -
From: Justin Pomeroy 
>
To: openstack-dev@lists.openstack.org
Cc:
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
Date: Wed, Mar 2, 2016 9:36 AM

As for the horizon support, much of it will make Mitaka.  See the blueprint and 
gerrit topic:

https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin

On 3/2/16 9:22 AM, Doug Wiegley wrote:
Hi,

A few things:

- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for the 
latter.)
- I don’t view this as a “keep or delete” question. If sufficient folks are 
interested in maintaining it, there is a third option, which is that the code 
can be maintained in a separate repo, by a separate team (with or without the 
current core team’s blessing.)

No decisions have been made yet, but we are on the cusp of some major 
maintenance changes, and two deprecation cycles have passed. Which path forward 
is being discussed at today’s Octavia meeting, or feedback is of course 
welcomed here, in IRC, or 

Re: [openstack-dev] [app-catalog] Nominating Kirill Zaitsev to App Catalog Core

2016-03-03 Thread Fox, Kevin M
+1! :)

From: Christopher Aedo [d...@aedo.net]
Sent: Thursday, March 03, 2016 1:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [app-catalog] Nominating Kirill Zaitsev to App Catalog 
Core

I'd like to propose Kirill Zaitsev to the core team for app-catalog
and app-catalog-ui.
Kirill has been actively involved with the Community App Catalog since
nearly the beginning of the project, and more recently has been doing
the heavy lifting around implementing GLARE for a new backend.

I think he would be an excellent addition - please vote, thanks!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL for Newton and beyond

2016-03-03 Thread Sean McGinnis
On Thu, Mar 03, 2016 at 06:32:42AM -0500, Davanum Srinivas wrote:
> Team,
> 
> It has been great working with you all as PTL for Oslo. Looks like the
> nominations open up next week for elections and am hoping more than
> one of you will step up for the next cycle(s). I can show you the
> ropes and help smoothen the transition process if you let me know
> about your interest in being the next PTL. With the move to more
> automated testing in our CI (periodic jobs running against oslo.*
> master) and the adoption of the release process (logging reviews in
> /releases repo) the load should be considerably less on you.
> I'm especially proud of all the new people joining as both oslo cores and
> project cores and hitting the ground running. Big shout out to Doug
> Hellmann for his help and guidance when I transitioned into the PTL
> role.
> 
> Main challenges will be to win back the confidence of all the projects
> that use the oslo libraries, NOT be the first thing they look for when
> things break (Better backward compat, better test matrix) and
> evangelizing that Oslo is still the common play ground for *all*
> projects and not just the headache of some nut jobs who are willing to
> take up the impossible task of defining and nurturing these libraries.
> There's a lot of great work ahead of us and I am looking forward to
> continue to work with you all.
> 
> Thanks,
> Dims

Thanks for all your hard work Dims. It's appreciated and it definitely
has been a positive impact across the board.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Jeremy Stanley
On 2016-03-03 09:35:54 -0500 (-0500), sridhar basam wrote:
[...]
> As other have said elsewhere in this thread, you already have the
> ability to not use the firewall driver in neutron.

I'm still struggling to find documentation on how a tenant^Wproject
can disable the firewall driver in Neutron when connecting to a
shared provider network. (Rhetorical question: i.e., I'm pretty sure
this is not a one-size-fits-all fix to the problem.)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Use RGW as a default object store instead of Swift

2016-03-03 Thread Dmitry Borodaenko
Denied.

This is a major functional change and it needs a lot more discussion
than a single review posted 2 days before Feature Freeze. Let's start
this discussion now, summarize its result in a spec, and see if it's
something we want to target for Newton.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 03:50:45PM +0200, Konstantin Danilov wrote:
> Igor,
> 
> You are right. I have updated a request -
> https://review.openstack.org/287195
> 
> Thanks.
> 
> On Tue, Mar 1, 2016 at 6:01 PM, Igor Kalnitsky 
> wrote:
> 
> > Hey Konstantin,
> >
> > I see that provided patch [1] is for stable/8.0. Fuel 8.0 is recently
> > released and we usually do not accept any features to stable branch.
> >
> > Or your meant that patch for master branch?
> >
> > Thanks,
> > Igor
> >
> > [1]: https://review.openstack.org/#/c/286100/
> >
> > On Tue, Mar 1, 2016 at 4:44 PM, Konstantin Danilov
> >  wrote:
> > > Colleagues,
> > > I would like to request a feature freeze exception for
> > > 'Use RGW as a default object store instead of Swift' [1].
> > >
> > > To merge the changes we need at most one week of time.
> > >
> > > [1]: https://review.openstack.org/#/c/286100/
> > >
> > > Thanks
> > > --
> > >
> > > Kostiantyn Danilov aka koder.ua
> > > Principal software engineer, Mirantis
> > >
> > > skype:koder.ua
> > > http://koder-ua.blogspot.com/
> > > http://mirantis.com
> > >
> > >
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Kostiantyn Danilov aka koder.ua
> Principal software engineer, Mirantis
> 
> skype:koder.ua
> http://koder-ua.blogspot.com/
> http://mirantis.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Feature Freeze request for CouchDB backup & restore

2016-03-03 Thread Mariam John


Hello,

   I would like to request a feature freeze exception for the following
feature: Add support for CouchDB backup & restore [1]. The code for this
feature is complete and up for review [2]. The integration tests require a
new library, which was added to global-requirements [3]. Unit tests and
integration tests have been added to cover all the features implemented.
This feature implements full backup and restore capabilities for CouchDB,
uses the existing Trove APIs, and does not affect or alter the base
Trove code. Hence the risk of regression is very minimal.

Thank you.

Regards,
Mariam
[1] https://blueprints.launchpad.net/trove/+spec/couchdb-backup-restore
[2] https://review.openstack.org/#/c/270443/
[3] https://review.openstack.org/#/c/285191/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [FFE] Add multipath disks support

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 10.

On Tue, Mar 01, 2016 at 04:27:50PM +0100, Szymon Banka wrote:
> Hi All,
> 
> I’d like to request a Feature Freeze Exception for "Add multipath disks 
> support” until Mar 9.
> This feature allows the use of FC multipath disks in Fuel nodes – BP [1], spec 
> [2].
> 
> Development is already done and we need the following patches to be merged:
> review and merge multipath support in fuel-agent [3]
> review and merge multipath support in railgun-agent [4]
> [1] https://blueprints.launchpad.net/fuel/+spec/multipath-disks-support
> [2] https://review.openstack.org/#/c/276745/
> [3] https://review.openstack.org/#/c/285340/
> [4] https://review.openstack.org/#/c/282552/
> 
> --
> Thanks,
> Szymon Bańka
> Mirantis
> http://www.mirantis.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [FFE] Use packetary for building ISO

2016-03-03 Thread Dmitry Borodaenko
On Thu, Mar 03, 2016 at 11:21:58PM +0800, Thomas Goirand wrote:
> On 03/03/2016 07:14 PM, Aleksey Zvyagintsev wrote:
> > Hello team's ,
> > Please take in mind my HUGE +1
> > We really need to remove package building process from iso build flow.
> Same point of view over here. Let's get rid of the "make world" approach
> ASAP.

Denied.

Apologies, but the risk of build process changes impacting development
and bugfixing in other components is too great. I realize that it's
almost done and it's a transition we've all been eagerly anticipating
for a long while, we'll just have to wait a little bit longer. Right
after SCF is a much better time and place for these kinds of changes.

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara-tests] Sahara-tests release and launchpad project

2016-03-03 Thread Evgeny Sikachev
Hi, sahara folks!

I would like to propose a release of sahara-tests. All steps from the spec [0]
are implemented except releases and packaging.

Release criteria: framework ready for testing a new release of Sahara.

Next step: build a package and publish to PyPI.

Also, I think we need to create a separate Launchpad project (like
python-saharaclient [1]) for a correct bug tracking process. This adds the
ability to nominate bugs to releases and avoids confusion with Sahara
bugs.


[0]
https://github.com/openstack/sahara-specs/blob/master/specs/mitaka/move-scenario-tests-to-separate-repository.rst
[1] https://bugs.launchpad.net/python-saharaclient
-
Best Regards,

Evgeny Sikachev
QA Engineer
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] FFE request for ConfigDB service

2016-03-03 Thread Dmitry Borodaenko
Granted, merge deadline March 24, no impact expected in core components
(fuel-library, fuel-web, fuel-ui).

-- 
Dmitry Borodaenko


On Tue, Mar 01, 2016 at 04:22:05PM +0300, Oleg Gelbukh wrote:
> Greetings,
> 
> As you might know, we are working on centralised storage for
> deployment configuration data in Fuel. Such a store will allow external
> 3rd-party services to consume the entirety of the settings provided by
> Fuel to deployment mechanisms on target nodes. It will also allow
> managing and overriding the settings via a simple client application.
> 
> This change is required to enable Puppet Master based LCM solution.
> 
> We request a FFE for this feature for 3 weeks, until Mar 24. By that
> time, we will provide tested solution in accordance with the following
> specifications [1] [2]
> 
> The feature includes 3 main components:
> 1. Extension to Nailgun API with separate DB structure to store serialized 
> data
> 2. Backend library for Hiera to consume the API in question to lookup
> values of the certain parameters
> 3. Astute task to download all serialized data from nodes and upload
> them to ConfigDB API upon successful deployment of cluster
> 
> Since introduction of stevedore-based extensions [3], we could develop
> extensions in separate code repos. This makes change to Nailgun
> non-intrusive to core code.
> Backend library will be implemented in fuel-library code tree and
> packaged as a sub-package. This change also doesn't require changes in
> the core code.
> Astute task will add a task in the flow. We will make this task
> configurable, i.e. normally this code path won't be used at all. It
> also won't touch core code of Astute.
> 
> Overall, I consider this change as low risk for integrity and timeline
> of the release.
> 
> Please, consider our request and share concerns so we could properly
> resolve them.
> 
> [1] 
> https://blueprints.launchpad.net/fuel/+spec/upload-deployment-facts-to-configdb
> [2] https://blueprints.launchpad.net/fuel/+spec/serialized-facts-nailgun-api
> [3] https://blueprints.launchpad.net/fuel/+spec/stevedore-extensions-discovery
> 
> --
> Best regards,
> Oleg Gelbukh
> Mirantis Inc.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][i18n] Liaisons for I18n

2016-03-03 Thread Tony Breeds
On Mon, Feb 29, 2016 at 05:26:44PM +0800, Ying Chun Guo wrote:

> If you are interested to be a liaison and help translators,
> input your information here:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

So https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n

says the CPL needs to be a core.  That reduces the potential pool of people to
those that are already busy.  Is there a good reason for that?

I'd suspect all that's required is a good working relationship with the project
cores.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][congress] python-congressclient 1.2.3 release (mitaka)

2016-03-03 Thread no-reply
We are tickled pink to announce the release of:

python-congressclient 1.2.3: Client for Congress

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/python-congressclient

Please report issues through launchpad:

http://bugs.launchpad.net/python-congressclient

For more details, please see below.

Changes in python-congressclient 1.2.2..1.2.3
-

6ab5893 Updated from global requirements
99f2080 Update help message for datasource CLI
f44bb5f Updated from global requirements
1e030f0 Updated from global requirements
15acd8c Updated from global requirements

Diffstat (except docs and test files)
-

congressclient/osc/v1/datasource.py | 36 ++--
requirements.txt|  6 +++---
2 files changed, 21 insertions(+), 21 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 4beb194..6fce840 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6,2 +6,2 @@ Babel>=1.3 # BSD
-cliff>=1.15.0 # Apache-2.0
-oslo.i18n>=1.5.0 # Apache-2.0
+cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
@@ -10 +10 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.4.0 # Apache-2.0
+oslo.utils>=3.5.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] Nominating Kirill Zaitsev to App Catalog Core

2016-03-03 Thread Christopher Aedo
I'd like to propose Kirill Zaitsev to the core team for app-catalog
and app-catalog-ui.
Kirill has been actively involved with the Community App Catalog since
nearly the beginning of the project, and more recently has been doing
the heavy lifting around implementing GLARE for a new backend.

I think he would be an excellent addition - please vote, thanks!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] deprecating pluggable managers

2016-03-03 Thread Jay Pipes

++ Long time coming :)

On 03/03/2016 11:31 AM, Sean Dague wrote:

As we were evaluating bits of configuration that we really don't expect
people to use, one of the issues was pluggable managers. From the early
days of Nova, the classes that ran the service managers were defined in
the config file. However, this is something we largely haven't supported in
a real way in a while (there is no contract on these interfaces).

The following patch deprecates these -
https://review.openstack.org/#/c/287867/

A number of them will be removed in Newton. Compute Manager will live
for a bit longer, as the Ironic team uses it for some HA scenarios, but
there is a plan forward to stop needing this plug point. Either way,
deprecating these helps signal that even if not immediately removed,
these are not things people should be building off of.
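[Editor's note: the pluggable-manager mechanism boils down to importing a class from a dotted path stored in configuration. The sketch below is illustrative, not Nova's actual code; the helper name is invented, and a stdlib class stands in for a real manager class.]

```python
import importlib


def load_class(dotted_path):
    """Import a class from a dotted path such as 'json.JSONDecoder'.

    This is the general pattern behind config-driven manager loading:
    the class named in a config option is imported at runtime, so there
    is no static contract between the service and the plugged-in class.
    """
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# Loading a stdlib class here stands in for resolving a config option
# like compute_manager; any importable class satisfies the loader, which
# is why nothing enforces the interface the service actually expects.
decoder_cls = load_class("json.JSONDecoder")
```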

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] [FFE] DB2 Backup & Restore and CouchDB functions

2016-03-03 Thread Ishita Mandhan
I'd like to request a feature freeze exception for "Implement backup and restore functionality for db2 express c" [1] and "Implement database and user functions for CouchDB" [2]
The patch implementing DB2 backup and restore is up for review [3] and includes unit test coverage.
CouchDB user and database functions is up for review as well [4]. Integration tests for the same will be added as a separate patch in the next 3-4 days. 
[1] https://blueprints.launchpad.net/trove/+spec/db2-backup-restore
[2] https://blueprints.launchpad.net/trove/+spec/couchdb-database-user-functions
[3] https://review.openstack.org/#/c/246709/
[4] https://review.openstack.org/#/c/273204/
Regards,
ISHITA MANDHAN
E-mail: iman...@us.ibm.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][FFE] CouchDB Configuration Groups

2016-03-03 Thread Victoria Martínez de la Cruz
I'd like to request a feature freeze exception for "CouchDB Configuration
Groups". [1]

The code/tests for this feature are complete and have been in review [2]
since Feb 17. The FFE is being requested so that Sonali (or myself) address
the nits reviewers left and to have more time to review/test etc. I expect
the review process to be done within a few days.

Best,

Victoria

[1] https://blueprints.launchpad.net/trove/+spec/couchdb-configuration-groups
[2] https://review.openstack.org/#/c/271781/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-03 Thread Eichberger, German
Kevin,

 If we are offering a migration tool it would be namespace -> namespace (or 
maybe Octavia since [1]) — given the limitations, nobody should be using the 
namespace driver outside a PoC, so I am a bit confused why customers can’t 
self-migrate. With 3rd-party LBs I would assume vendors provide those scripts to 
make sure their particular hardware works with them. If you indeed need a 
migration from LBaaS v1 namespace -> LBaaS v2 namespace/Octavia, please file an 
RFE with your use case so we can discuss it further…

Thanks,
German

[1] https://review.openstack.org/#/c/286380

From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, March 2, 2016 at 5:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

no removal without an upgrade path. I've got v1 LB's and there still isn't a 
migration script to go from v1 to v2.

Thanks,
Kevin



From: Stephen Balukoff [sbaluk...@bluebox.net]
Sent: Wednesday, March 02, 2016 4:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

I am also on-board with removing LBaaS v1 as early as possible in the Newton 
cycle.

On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
> wrote:
Thank you all for your response.

In my opinion given that UI/HEAT will make Mitaka and will have one cycle to 
mature, it makes sense to remove LBaaS v1 in Newton.
Do we want to discuss an upgrade process at the summit?

-Sam.


From: Bryan Jones [mailto:jone...@us.ibm.com]
Sent: Wednesday, March 02, 2016 5:54 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

And as for the Heat support, the resources have made Mitaka, with additional 
functional tests on the way soon.

blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
BRYAN M. JONES
Software Engineer - OpenStack Development
Phone: 1-507-253-2620
E-mail: jone...@us.ibm.com


- Original message -
From: Justin Pomeroy 
>
To: openstack-dev@lists.openstack.org
Cc:
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
Date: Wed, Mar 2, 2016 9:36 AM

As for the horizon support, much of it will make Mitaka.  See the blueprint and 
gerrit topic:

https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin

On 3/2/16 9:22 AM, Doug Wiegley wrote:
Hi,

A few things:

- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for the 
latter.)
- I don’t view this as a “keep or delete” question. If sufficient folks are 
interested in maintaining it, there is a third option, which is that the code 
can be maintained in a separate repo, by a separate team (with or without the 
current core team’s blessing.)

No decisions have been made yet, but we are on the cusp of some major 
maintenance changes, and two deprecation cycles have passed. Which path forward 
is being discussed at today’s Octavia meeting, or feedback is of course 
welcomed here, in IRC, or anywhere.

Thanks,
doug

On Mar 2, 2016, at 7:06 AM, Samuel Bercovici 
> wrote:

Hi,

I have just notices the following change: 
https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.
Is this planned for Mitaka or for Newton?

While LBaaS v2 is becoming the default, I think that we should have the 
following before we replace LBaaS v1:
1.  Horizon Support – was not able to find any real activity on it
2.  HEAT Support – will it be ready in Mitaka?

Do you have any other items that are needed before we get rid of LBaaS v1?

-Sam.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [murano][stable] New blocking stable/liberty bug 1552887

2016-03-03 Thread Victor Ryzhenkin
Hi Matt!

FYI, it looks like a change made to the stable/liberty branch for murano 
on 3/1 broke the unit tests

Thank you for your notification.
This issue was fixed a few hours ago in this patch [0]

Cheers!

[0] - https://review.openstack.org/#/c/286856/
-- 
Victor Ryzhenkin
Quality Assurance Engineer
freerunner on #freenode

On 3 March 2016 at 22:55:42, Matt Riedemann 
(mrie...@linux.vnet.ibm.com) wrote:

FYI, it looks like a change made to the stable/liberty branch for murano  
on 3/1 broke the unit tests, details are in the bug:  

https://bugs.launchpad.net/murano/+bug/1552887  

--  

Thanks,  

Matt Riedemann  


__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][horizon] django_openstack_auth 2.2.0 release (mitaka)

2016-03-03 Thread no-reply
We are tickled pink to announce the release of:

django_openstack_auth 2.2.0: Django authentication backend for use
with OpenStack Identity

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/django_openstack_auth/

With package available at:

https://pypi.python.org/pypi/django_openstack_auth

Please report issues through launchpad:

https://bugs.launchpad.net/django-openstack-auth

For more details, please see below.

Changes in django_openstack_auth 2.1.1..2.2.0
-

9de5d87 Updated from global requirements
d3bca7a Update URLs to Django 1.8+ style
d0e3856 Add app_label
0be06b4 Change log.error to log.warning
e008112 Fix "Add API version to identity endpoint URLs"
d779eb6 Add convenient method to get admin roles and permissions
7f26e7d Update translation setup
d252117 Drop supporting python3.3
25ece10 Remove openstack-common.conf
c04a75b Updated from global requirements
b525627 Updated from global requirements
d8a9ad9 Fix the py27dj19 tests
658d955 Add py27dj19 tox env
1252df5 Updated from global requirements
474c503 Fix WebSSO when Keystone server hostname contains 'auth'
8011e9c Update url_for parameter for domain policy check
5ab3908 Unscoped PKI token should no longer be hashed multiple times.
2d88515 Python 3 deprecated the logger.warn method in favor of warning
c57b380 Use consistent region during login
ad98c9d Imported Translations from Zanata
27fcdc4 Imported Translations from Zanata
58ce9d7 Add API version to identity endpoint URLs
447bab5 Deprecated tox -downloadcache option removed

Diffstat (except docs and test files)
-

babel-django.cfg  |  1 +
babel.cfg |  1 -
openstack-common.conf |  2 -
openstack_auth/backend.py | 30 ++--
openstack_auth/forms.py   | 10 +--
openstack_auth/locale/django.pot  | 88 ++
openstack_auth/locale/openstack_auth.pot  | 88 --
openstack_auth/locale/pa_IN/LC_MESSAGES/django.po | 31 +---
openstack_auth/locale/ru/LC_MESSAGES/django.po| 14 ++--
openstack_auth/policy.py  |  8 +-
openstack_auth/urls.py| 22 +++---
openstack_auth/user.py| 26 ---
openstack_auth/utils.py   | 72 +-
openstack_auth/views.py   | 16 +---
requirements.txt  | 12 +--
setup.cfg |  6 +-
test-requirements.txt | 12 +--
tox.ini   | 15 ++--
21 files changed, 360 insertions(+), 200 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index bc5ff28..3d62f8b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,3 +4,3 @@
-pbr>=1.6
-Django<1.9,>=1.8
-oslo.config>=2.7.0 # Apache-2.0
+pbr>=1.6 # Apache-2.0
+Django<1.9,>=1.8 # BSD
+oslo.config>=3.7.0 # Apache-2.0
@@ -8,3 +8,3 @@ oslo.policy>=0.5.0 # Apache-2.0
-python-keystoneclient!=1.8.0,>=1.6.0
-keystoneauth1>=2.1.0
-six>=1.9.0
+python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
+keystoneauth1>=2.1.0 # Apache-2.0
+six>=1.9.0 # MIT
diff --git a/test-requirements.txt b/test-requirements.txt
index 80cbd10..133922c 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5,5 +5,5 @@ hacking<0.11,>=0.10.0
-Babel>=1.3
-coverage>=3.6
-mock>=1.2
-mox3>=0.7.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+Babel>=1.3 # BSD
+coverage>=3.6 # Apache-2.0
+mock>=1.2 # BSD
+mox3>=0.7.0 # Apache-2.0
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
@@ -11 +11 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-testscenarios>=0.4
+testscenarios>=0.4 # Apache-2.0/BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][barbican] python-barbicanclient 4.0.0 release (mitaka)

2016-03-03 Thread no-reply
We are tickled pink to announce the release of:

python-barbicanclient 4.0.0: Client Library for OpenStack Barbican Key
Management API

This release is part of the mitaka release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-barbicanclient/

Please report issues through launchpad:

https://bugs.launchpad.net/python-barbicanclient/

For more details, please see below.

Changes in python-barbicanclient 3.3.0..4.0.0
-

8eab42d Replace assertEqual(None, *) with assertIsNone in tests
3aecbde Updated from global requirements
84fc9dc Use six.moves.urllib.parse to replace urlparse
e53f7cb Updated from global requirements
7228d26 Remove argparse from requirements
3626860 Updated from global requirements
bfa7e51 Updated from global requirements
fe73ce8 Replace deprecated keystoneclient...exceptions
33d0eb5 Update typos
e58819d Removes MANIFEST.in as it is not needed explicitely by PBR
58491d5 Deprecated tox -downloadcache option removed
a80f7ea Remove py26 support
2105777 Updated from global requirements
654d3b8 Updated from global requirements
00ba97c Updated from global requirements
2efd085 Updated from global requirements
b132160 Update Readme to include new/updated CLI commands
37bd68c Allow tox to be able to run independent functional tests
098a0c8 Make CLI Order's type field a required argument
1da42ea Remove invalid skipping of tests
0b97754 Fix Secrets Filter
6d43432 Updated from global requirements
a2a27ee Updated from global requirements
631c8a9 README.rst devstack link not properly displayed
e65611f improve readme contents
bf187ba Updated from global requirements
7af1c13 Add to_dict method to EntityFormatter.
6e99579 Updated from global requirements
af771fe Part 3: Adding ACL functional tests.
8d8e973 Updated from global requirements
020e240 Initialize optional attributes in CA object
424f2ac Fix keystone version selection
d083e1e Part 2: Adding ACL support for CLI commands and docs
4a2007c Fix incorrect error when performing Barbican Secret Update
5455c28 Part 1: Adding ACL support for Client API.
776187d Fix error where barbican order create returns invalid error
85f5ec2 Create Openstack CLI plugin for Barbican
0306b2b Fix OS_PROJECT_ID not getting read by client
28cc338 Remove Client V1 Behaviors
89c2f28 enable barbican help without authentication
3c2ff2a Fix barbican-client README.rst
d60c7a4 Add functional test for updating a Secret
e2d1f4e Remove test behaviors abstraction for containers smoke tests
a4e224e Remove test behaviors abstraction for orders smoke tests
2415011 Remove test behaviors abstraction for secrets smoke tests
bee632f Remove test behaviors abstraction for containers
3b6a12e Remove test behaviors abstraction for orders
8c2050f Remove test behaviors abstraction for secrets
0771bb8 Create Common functions used for cleaning up items used for testing
4572293 Add epilog to parser
17ed50a Add Unit Tests for Store and Update Payload when Payload is zero
34256de Allow Barbican Client Secret Update Functionality

Diffstat (except docs and test files)
-

MANIFEST.in|   4 -
README.rst |  89 +++-
barbicanclient/acls.py | 475 +
barbicanclient/barbican.py |  55 +-
barbicanclient/barbican_cli/acls.py| 261 ++
barbicanclient/barbican_cli/cas.py |   5 +-
barbicanclient/barbican_cli/containers.py  |  19 +-
barbicanclient/barbican_cli/orders.py  |  15 +-
barbicanclient/barbican_cli/secrets.py |  33 +-
barbicanclient/cas.py  |   2 +
barbicanclient/client.py   |  10 +-
barbicanclient/containers.py   |  11 +
barbicanclient/formatter.py|   4 +
barbicanclient/orders.py   |   8 +-
barbicanclient/osc_plugin.py   |  43 ++
barbicanclient/secrets.py  |  75 ++-
.../cli/v1/behaviors/container_behaviors.py|   8 +-
.../cli/v1/behaviors/secret_behaviors.py   |  18 +-
.../client/v1/behaviors/base_behaviors.py  |  52 --
.../client/v1/behaviors/container_behaviors.py | 110 
.../client/v1/behaviors/order_behaviors.py |  95 
.../client/v1/behaviors/secret_behaviors.py|  96 
.../client/v1/functional/test_containers.py| 141 ++---
.../client/v1/functional/test_orders.py| 216 
.../client/v1/functional/test_secrets.py   | 402 ---
requirements.txt   |  17 +-
setup.cfg  |  29 +-
setup.py   |   2 +-
test-requirements.txt  |  21 +-
tox.ini|   7 +-

[openstack-dev] [Trove][FFE] Vertica Cluster Grow and Shrink

2016-03-03 Thread Alex Tomic
I'd like to request a feature freeze exception for "Vertica Cluster Grow 
and Shrink". [1]


The code/tests for this feature are complete and have been in review [2] 
since Feb 17. The FFE is being requested so that the reviewers have more 
time to review/test etc. Hopefully the review process can be wrapped up 
within a few days.


Regards,
Alex Tomic
Tesora Inc.

[1] https://blueprints.launchpad.net/trove/+spec/vertica-grow-shrink-cluster
[2] https://review.openstack.org/#/c/281439/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][FFE] Vertica configuration groups

2016-03-03 Thread Alex Tomic
I'd like to request a feature freeze exception for "Vertica 
configuration groups". [1]


The code/tests for this feature are complete and have been in review [2] 
since Feb 23. The FFE is being requested so that the reviewers have more 
time to review/test etc. Hopefully the review process can be wrapped up 
within a few days.


Regards,
Alex Tomic
Tesora Inc.

[1] 
https://blueprints.launchpad.net/trove/+spec/vertica-configuration-groups

[2] https://review.openstack.org/#/c/283785/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Mitaka release planning

2016-03-03 Thread Armando M.
Hi Neutrinos,

Mitaka-3 is out [1] and we should be focussing on rc1. This is the time
where we switch gear:

   - Test M-3, find issues and target them for RC1 [2];
   - Apply/agree for FFE status for pending features on the postmortem [3];
   - For features that are granted an FFE, I'll be re-targeting them to
   RC1. Those that are denied will be pushed back to N as soon as it opens up.
   - Be mindful of the gate. Watch its state and make sure that you
   approve/merge only changes that are targeted for RC1. Anything else should
   be pushed back, especially trivial and cosmetic fixes.
   - RC1 is for critical and high bug fixes (release blockers or gate
   failures) and FFE changes only. Anything else should be left untargeted.
   - Remember to tie up loose ends; testing and docs are paramount to allowing
   our users to enjoy our new features reliably.

When in doubt, reach out!

Cheers,
Armando

[1] https://launchpad.net/neutron/+milestone/mitaka-3
[2] https://launchpad.net/neutron/+milestone/mitaka-rc1
[3] https://review.openstack.org/#/c/286413/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Octavia configuration options were deleted but needed for Mitaka

2016-03-03 Thread Michael Johnson
This is in reference to bug:

https://bugs.launchpad.net/openstack-manuals/+bug/1552797

The liberty documentation set has the octavia.conf section:
http://docs.openstack.org/liberty/config-reference/content/networking-plugin-lbaas.html

The current Mitaka documentation does not have an octavia.conf section:
http://docs.openstack.org/draft/config-reference/networking/networking_options_reference.html

It appears that the octavia.conf settings documentation was deleted
from the Mitaka docs here:
https://review.openstack.org/#/c/259889

It has been pointed out that this was due to this e-mail chain:
http://lists.openstack.org/pipermail/openstack-docs/2015-December/008026.html

That said, Octavia is the reference driver implementation for
neutron-lbaas and it is important that we continue to provide
documentation for users of neutron-lbaas.  Armando is asking us to
update this documentation for the Mitaka release, but we are unable to
do so.

Can someone help me with instructions on what I need to do to restore
this documentation for the Mitaka release?

I am not familiar with the xml->rst changes that have occurred so I
need some help.

In the docs channel and on the bug I have had advice to use pandoc to
convert the liberty xml to rst and check it in.  This does not seem
right as the other files say they are auto-generated.
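For what it's worth, the pandoc route suggested there can be sketched roughly as below. The source filename is an assumption based on the liberty page name, and `-f docbook` assumes the liberty sources are DocBook XML (check `pandoc --help` for your version):

```shell
# Rough sketch only -- the filename and input format are assumptions,
# not confirmed by the thread.
SRC=networking-plugin-lbaas.xml
if command -v pandoc >/dev/null 2>&1 && [ -f "$SRC" ]; then
    # DocBook XML in, reStructuredText out
    pandoc -f docbook -t rst "$SRC" -o "${SRC%.xml}.rst"
fi
```

Whether a hand-converted rst file is acceptable alongside the auto-generated ones is still the open question above.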

I'm just looking for a way to get this restored so we can update them
for the Mitaka release.

Thanks,
Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano][stable] New blocking stable/liberty bug 1552887

2016-03-03 Thread Matt Riedemann
FYI, it looks like a change made to the stable/liberty branch for murano 
on 3/1 broke the unit tests, details are in the bug:


https://bugs.launchpad.net/murano/+bug/1552887

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] deprecating pluggable managers

2016-03-03 Thread Sean Dague
As we were evaluating bits of configuration that we really don't expect
people to use, one of the issues is pluggable managers. From the early
days of Nova, the classes that ran the service managers were defined in
the config file. However, this is something we largely haven't supported in
a real way in a while (there is no contract on these interfaces).

The following patch deprecates these -
https://review.openstack.org/#/c/287867/

A number of them will be removed in Newton. Compute Manager will live
for a bit longer as the Ironic team uses that for some HA scenarios.
However, there is a plan forward to stop needing this plug point.
Deprecating these helps signal that, even if not immediately
removed, these are not things people should be building on.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-03 Thread Chris Friesen

On 03/03/2016 01:13 PM, Preston L. Bannister wrote:


> Scanning the same volume from within the instance still gets the same
> ~450MB/s that I saw before.

Hmmm, with iSCSI inbetween that could be the TCP memcpy limitation.


Measuring iSCSI in isolation is next on my list. Both on the physical host, and
in the instance. (Now to find that link to the iSCSI test, again...)


Based on earlier comments it appears that you're using the qemu built-in iSCSI 
initiator.


Assuming that's the case, maybe it would make sense to do a test run with the 
in-kernel iSCSI code and take qemu out of the picture?
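A rough sketch of that test, assuming the open-iscsi tools. The portal address is a placeholder for the cinder-volume host's iSCSI portal, and the commands are guarded since they need root and a reachable target:

```shell
# Sketch: use the in-kernel initiator (open-iscsi) instead of qemu's
# built-in one. TARGET_IP is a placeholder, not taken from the thread.
TARGET_IP=192.168.0.10
# Guarded so it only runs when explicitly enabled (needs root + open-iscsi).
if [ "${RUN_ISCSI:-0}" = 1 ] && command -v iscsiadm >/dev/null 2>&1; then
    # discover targets exported by the portal
    iscsiadm -m discovery -t sendtargets -p "$TARGET_IP"
    # then log in and benchmark the resulting /dev/sdX from the host:
    # iscsiadm -m node -T <iqn> -p "$TARGET_IP" --login
fi
```

Comparing a host-side sequential read of the resulting block device against the in-instance numbers would isolate the qemu initiator's contribution.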


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] FFE Request for Improving anti-affinity in Sahara

2016-03-03 Thread Akanksha Agrawal
Hi,


On Fri, Mar 4, 2016 at 12:43 AM, Sergey Lukjanov 
wrote:

> Hi,
>
> correct spec link is https://review.openstack.org/#/c/269202/
>

Sorry for the incorrect links

Specification Review: https://review.openstack.org/#/c/269202/

Code Review: https://review.openstack.org/#/c/282506/

>
> Spec isn't approved and so FFE isn't granted. It's very important to not
> spread the team's efforts onto reviewing specs and additional features in
> the pre-RC time frame, so, I'm considering non-merged spec as a blocker for
> FFE.
>

Noted Sergey.


>
> Thanks.
>
> On Thu, Mar 3, 2016 at 11:08 AM, Akanksha Agrawal 
> wrote:
>
>> Hi folks,
>>
>> I would like to request a FFE for the feature “Improving anti-affinity in
>> Sahara”:
>>
>>
>> BP: https://blueprints.launchpad.net/sahara/+spec/improving-anti-affinity
>>
>>
>> 
>>
>> Specification Review: https://review.openstack.org/#/c/269202/
>> 
>>
>>
>> 
>>
>> Code Review: https://review.openstack.org/#/c/282506/
>> 
>>
>>
>> Estimated Completion Time: The specification is complete and should not
>> take more than 1-2 days to be approved.
>>
>> The code is under review and needs few more changes before it could be
>> completed. Mostly pep8 related errors have to be fixed and the error for
>> backward compatibility has to be thrown. This would not require more than a
>> week.
>>
>>
>> The Benefits for this change: Improved anti-affinity behaviour while
>> cluster creation
>>
>>
>> The Risk: The risk would be low for this patch, since the code would be
>> completed in time.
>>
>>
>>
>> Thanks,
>>
>> Akanksha Agrawal
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Principal Software Engineer
> Mirantis Inc.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-03 Thread Preston L. Bannister
Note that my end goal is to benchmark an application that runs in an
instance that does primarily large sequential full-volume-reads.

On this path I ran into unexpectedly poor performance within the instance.
If this is a common characteristic of OpenStack, then this becomes a
question of concern to OpenStack developers.

Until recently, ~450MB/s would be (and still is for many cases)
*outstanding* throughput. Most similar questions on the web are happy with
saturating a couple of gigabit links, or a few spinning disks. So that
few(?) folk (to now) have asked questions at this level of performance ...
is not a surprise.

But with flash displacing spinning disks, much higher throughput is
possible. If there is an unnecessary bottleneck, this might be a good time
to call attention.


From general notions to current specifics... :)


On Wed, Mar 2, 2016 at 10:10 PM, Philipp Marek 
wrote:

> > The benchmark scripts are in:
> >   https://github.com/pbannister/openstack-bootstrap



> in case that might help, here are a few notes and hints about doing
> benchmarks for the DRDB block device driver:
>
> http://blogs.linbit.com/p/897/benchmarking-drbd/
>
> Perhaps there's something interesting for you.
>

Found this earlier. :)


> > Found that if I repeatedly scanned the same 8GB volume from the physical
> > host (with 1/4TB of memory), the entire volume was cached in (host)
> > memory (very fast scan times).
>


> If the iSCSI target (or QEMU, for direct access) is set up to use buffer
> cache, yes.
> Whether you really want that is up to discussion - it might be much more
> beneficial to move that RAM from the Hypervisor to the VM, which should
> then be able to do more efficient caching of the filesystem contents that
> it should operate on.
>

You are right, but my aim was a bit different. Doing a bit of
divide-and-conquer.

In essence, this test was to see if reducing the host-side volume-read time
to (practically) zero would have *any* impact on performance. Given the
*huge* introduced latency (somewhere), I did not expect a notable
difference - and that is what the measure shows. This further supports the
theory that host-side Linux is *not* the issue.



> > Scanning the same volume from within the instance still gets the same
> > ~450MB/s that I saw before.
>


> Hmmm, with iSCSI inbetween that could be the TCP memcpy limitation.


Measuring iSCSI in isolation is next on my list. Both on the physical host,
and in the instance. (Now to find that link to the iSCSI test, again...)




> > The "iostat" numbers from the instance show ~44 %iowait, and ~50 %idle.
> > (Which to my reading might explain the ~50% loss of performance.) Why so
> > much idle/latency?
> >
> > The in-instance "dd" CPU use is ~12%. (Not very interesting.)
>


> Because your "dd" testcase will be single-threaded, io-depth 1.
> And that means synchronous access, each IO has to wait for the preceeding
> one to finish...
>

Given that the Linux kernel read-ahead parameter has a noticeable impact on
performance, I believe that "dd" does not need to wait (much?) for I/O. Note
also the large difference between host and instance with "dd".
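As a sketch of separating read-ahead and cache effects from the data path (the device name and sizes are examples, not taken from the thread):

```shell
# Sketch: sequential-read comparison inside the instance; /dev/vdb is an
# example device name for the attached Cinder volume.
DEV=/dev/vdb
if [ -b "$DEV" ]; then
    # current kernel read-ahead for the device, in KiB
    cat "/sys/block/$(basename "$DEV")/queue/read_ahead_kb"
    # buffered sequential read (repeat runs may be served from page cache)
    dd if="$DEV" of=/dev/null bs=1M count=1024
    # direct read, bypassing the guest page cache (O_DIRECT)
    dd if="$DEV" of=/dev/null bs=1M count=1024 iflag=direct
fi
```

The gap between the buffered and direct numbers gives a rough bound on how much of the in-instance throughput is cache rather than data path.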



> > Not sure from where the (apparent) latency comes. The host iSCSI target?
> > The QEMU iSCSI initiator? Onwards...
>


> Thread scheduling, inter-CPU cache thrashing (if the iSCSI target is on
> a different physical CPU package/socket than the VM), ...
>
> Benchmarking is a dark art.
>

This physical host has an absurd number of CPUs (at 40), so what you
mention is possible. At these high rates, if only losing 10-20% of the
throughput, I might consider such causes. But losing 60% ... my guess ...
the cause is much less esoteric.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] FFE Request for Improving anti-affinity in Sahara

2016-03-03 Thread Sergey Lukjanov
Hi,

correct spec link is https://review.openstack.org/#/c/269202/

Spec isn't approved and so FFE isn't granted. It's very important to not
spread the team's efforts onto reviewing specs and additional features in
the pre-RC time frame, so, I'm considering non-merged spec as a blocker for
FFE.

Thanks.

On Thu, Mar 3, 2016 at 11:08 AM, Akanksha Agrawal 
wrote:

> Hi folks,
>
> I would like to request a FFE for the feature “Improving anti-affinity in
> Sahara”:
>
>
> BP: https://blueprints.launchpad.net/sahara/+spec/improving-anti-affinity
>
>
> 
>
> Specification Review: https://review.openstack.org/#/c/269202/
> 
>
>
> 
>
> Code Review: https://review.openstack.org/#/c/282506/
> 
>
>
> Estimated Completion Time: The specification is complete and should not
> take more than 1-2 days to be approved.
>
> The code is under review and needs few more changes before it could be
> completed. Mostly pep8 related errors have to be fixed and the error for
> backward compatibility has to be thrown. This would not require more than a
> week.
>
>
> The Benefits for this change: Improved anti-affinity behaviour while
> cluster creation
>
>
> The Risk: The risk would be low for this patch, since the code would be
> completed in time.
>
>
>
> Thanks,
>
> Akanksha Agrawal
>



-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-03-03 Thread Armando M.
Hi folks,

Status update on this matter:

Russell, Kyle and I had a number of patches out [1], to try and converge on
how to better organize Neutron-related efforts. As a result, a number of
patches merged and a number of patches are still pending. Because of Mitaka
feature freeze, other initiatives took priority.

That said, some people rightly wonder what's the latest outcome of the
discussion. Bottom line: we are still figuring this out. For now the
marching order is unchanged: as far as Mitaka is concerned, things stay as
they were, and new submissions for inclusion are still frozen. I aim (with
or without the help of the new PTL) to get to a final resolution by or
shortly after the Mitaka release [2].

Please be patient and stay focussed on delivering a great Mitaka experience!

Cheers,
Armando

[1] https://review.openstack.org/#/q/branch:master+topic:stadium-implosion
[2] http://releases.openstack.org/mitaka/schedule.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] FFE Request for Improving anti-affinity in Sahara

2016-03-03 Thread Akanksha Agrawal
Hi folks,

I would like to request a FFE for the feature “Improving anti-affinity in
Sahara”:


BP: https://blueprints.launchpad.net/sahara/+spec/improving-anti-affinity




Specification Review: https://review.openstack.org/#/c/269202/





Code Review: https://review.openstack.org/#/c/282506/



Estimated Completion Time: The specification is complete and should not
take more than 1-2 days to be approved.

The code is under review and needs few more changes before it could be
completed. Mostly pep8 related errors have to be fixed and the error for
backward compatibility has to be thrown. This would not require more than a
week.


The Benefits for this change: Improved anti-affinity behaviour while
cluster creation


The Risk: The risk would be low for this patch, since the code would be
completed in time.



Thanks,

Akanksha Agrawal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Sean M. Collins
Mathieu Gagné wrote:
> We had an issue with GRE, but unrelated to the one mentioned above.
> 
> Although the security group is configured to allow GRE, the
> nf_conntrack_proto_gre module is not loaded by iptables/Neutron, and
> traffic is dropped. We had to load the module manually.
> 

Let's put together a bug and tackle this in Newton?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting minutes - 3/3/2016

2016-03-03 Thread Christopher Aedo
During our meeting this morning we touched on some of the excellent
stuff markvan is doing with Horizon integration testing (thanks!)  We
also talked some more about transition/implementation details around
using GLARE as the backend for the app catalog.  kzaitsev_mb is making
some excellent progress with help from a few others, but the next big
challenge we're likely to hit will be creating the auth middleware to
allow catalog contributors to authenticate with their OpenStack ID.
Once we have that sorted out, there won't be too much left to do in
order to put GLARE into production as the backend/API for the
Community App Catalog.

Thanks as always to everyone working with us to continually improve
the App Catalog!

=
#openstack-meeting-3: app-catalog
=
Meeting started by docaedo at 17:00:24 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2016/app_catalog.2016-03-03-17.00.log.html
.

Meeting summary
---
* LINK: https://wiki.openstack.org/wiki/Meetings/app-catalog  (docaedo,
  17:01:41)
* Status updates  (docaedo, 17:01:48)
  * LINK: https://review.openstack.org/286807  (docaedo, 17:02:26)
  * LINK: https://review.openstack.org/286927  (docaedo, 17:03:00)
  * LINK: https://review.openstack.org/#/c/276440/  (docaedo, 17:03:55)
  * LINK: https://review.openstack.org/#/c/276438/  (docaedo, 17:04:11)
  * LINK: https://review.openstack.org/#/c/283202/  (markvan, 17:06:32)
  * LINK: https://review.openstack.org/#/c/283201/  (markvan, 17:06:43)
* Glare transition review  (docaedo, 17:10:38)
* Open discussion  (docaedo, 17:36:15)

Meeting ended at 17:42:03 UTC.

People present (lines said)
---
* docaedo (70)
* kzaitsev_mb (27)
* markvan (7)
* openstack (3)

Generated by `MeetBot`_ 0.1.4

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack Contributor Awards

2016-03-03 Thread Michael Krotscheck
On Tue, Feb 16, 2016 at 2:48 AM Tom Fifield  wrote:

>
> in the meantime, let's use this thread to discuss the fun part: goodies.
> What do you think we should lavish award winners with? Soft toys?
> Perpetual trophies? baseball caps ?
>

"I made a substantial contribution to OpenStack, and all I got was this
lousy t-shirt"

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Unable to get IPMI meter readings

2016-03-03 Thread Kapil
So, we upgraded our openstack install from Juno to Kilo 2015.1.1
Not sure if this fixed some stuff, but I can now get samples for
hardware.ipmi.(fan|temperature). However, I want to get
hardware.ipmi.node.power samples and I get the following error in the
ceilometer log-

ERROR ceilometer.agent.base [-] Skip loading extension for
hardware.ipmi.node.power

I edited pipeline.yaml as follows-
sources:
- name: meter_ipmi
  interval: 10
  resources:
  - "ipmi://"
  meters:
  - "hardware.ipmi.node.power"
  sinks:
  - ipmi_sink
sinks:
- name: ipmi_sink
  transformers:
  publishers:
  - notifier://?per_meter_topic=1

I also checked "rabbitmqctl list_queues | grep metering" and all the queues
are empty.

Do I need to change anything in ceilometer.conf or on the controller nodes
? Currently, I am working only with the compute node and only running
ceilometer queries from controller node.

Thanks

Regards,
Kapil Agarwal

On Thu, Feb 25, 2016 at 12:20 PM, gordon chung  wrote:

> at quick glance, it seems like data is being generated[1]. if you check
> your queues (rabbitmqctl list_queues for rabbit), do you see any items
> sitting on notification.sample queue or metering.sample queue? do you
> receive other meters fine? maybe you can query db directly to verify
> it's not a permission issue.
>
> [1] see: 2016-02-25 13:36:58.909 21226 DEBUG ceilometer.pipeline [-]
> Pipeline meter_sink: Transform sample  at 0x7f6b3630ae50> from 0 transformer _publish_samples
> /usr/lib/python2.7/dist-packages/ceilometer/pipeline.py:296
>
> On 25/02/2016 8:43 AM, Kapil wrote:
> > Below is the output of ceilometer-agent-ipmi in debug mode
> >
> > http://paste.openstack.org/show/488180/
> > ᐧ
> >
> > Regards,
> > Kapil Agarwal
> >
> > On Wed, Feb 24, 2016 at 8:18 PM, Lu, Lianhao  > > wrote:
> >
> > On Feb 25, 2016 06:18, Kapil wrote:
> >  > Hi
> >  >
> >  >
> >  > I discussed this problem with gordc on the telemetry IRC channel
> > but I
> >  > am still facing issues.
> >  >
> >  > I am running the ceilometer-agent-ipmi on the compute nodes, I
> > changed
> >  > pipeline.yaml of the compute node to include the ipmi meters and
> >  > resource as "ipmi://localhost".
> >  >
> >  > - name: meter_ipmi
> >  >   interval: 60
> >  >   resources:
> >  >   - ipmi://localhost
> >  >   meters:
> >  >   - "hardware.ipmi.node*"
> >  >   - "hardware.ipmi*"
> >  >   - "hardware.degree*"
> >  >   sinks:
> >  >   - meter_sink
> >  >
> >  > I have ipmitool installed on the compute nodes and restarted the
> >  > ceilometer services on compute and controller nodes. Yet, I am not
> >  > receiving any ipmi meters when I run "ceilometer meter-list". I
> also
> >  > tried passing the hypervisor IP address and the ipmi address I get
> >  > when I run "ipmitool lan print" to resources but to no avail.
> >  >
> >  >
> >  > Please help in this regard.
> >  >
> >  >
> >  > Thanks
> >  > Kapil Agarwal
> >
> > Hi Kapil,
> >
> > Would you please turn on debug/verbose configurations and paste the
> > log of ceilometer-agent-ipmi on http://paste.openstack.org ?
> >
> > -Lianhao Lu
> >
> --
> gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Mathieu Gagné
On 2016-03-03 12:53 PM, Sean M. Collins wrote:
> sridhar basam wrote:
>> This doesn't sound like a neutron issue but an issue with how the
>> conntrack module for GRE changed in the kernel in 3.18.
>>
>>
>> http://comments.gmane.org/gmane.comp.security.firewalls.netfilter.general/47705
>>
>> Sri
> 
> Oooh! Wicked nice find. Thanks Sri!
> 

We had an issue with GRE, but unrelated to the one mentioned above.

Although the security group is configured to allow GRE, the
nf_conntrack_proto_gre module is not loaded by iptables/Neutron, and
traffic is dropped. We had to load the module manually.
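A minimal check along those lines (read-only; the persistence path is an assumption, as /etc/modules-load.d is the systemd convention and varies by distro):

```shell
# Check whether the GRE conntrack helper is present before relying on a
# security-group rule that allows GRE (assumes a Linux host).
MODULE=nf_conntrack_proto_gre
if grep -qw "$MODULE" /proc/modules 2>/dev/null; then
    STATUS=loaded
else
    STATUS=missing
fi
echo "$MODULE: $STATUS"
# To load it manually (needs root):  modprobe nf_conntrack_proto_gre
# To persist across reboots (systemd convention, distro-dependent):
#   echo nf_conntrack_proto_gre > /etc/modules-load.d/gre.conf
```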

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-03 Thread Swann Croiset
IMHO it is important to keep the plugin examples and keep testing them; they
are very valuable for plugin developers.

For example, I've encountered [0] a case where the "plugin as role" feature
wasn't easily testable with fuel-qa because it was not compliant with the
latest plugin data structure, and more recently we've spotted a regression [1]
with the "vip-reservation" feature, introduced by a change in nailgun.
These kinds of issues are time consuming for plugin developers and can/must
be avoided by testing the examples.

I don't even understand why the question is raised while fuel plugins are
supposed to be supported and more and more used [3], even by murano [4] ...

[0] https://bugs.launchpad.net/fuel/+bug/1543962
[1] https://bugs.launchpad.net/fuel/+bug/1551320
[3]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/085636.html
[4] https://review.openstack.org/#/c/286310/

On Thu, Mar 3, 2016 at 3:19 PM, Matthew Mosesohn 
wrote:

> Hi Fuelers,
>
> I would like to bring your attention a dilemma we have here. It seems
> that there is a dispute as to whether we should maintain the releases
> list for example plugins[0]. In this case, this is for adding version
> 9.0 to the list.
>
> Right now, we run a swarm test that tries to install the example
> plugin and do a deployment, but it's failing only for this reason. I
> should add that this is the only automated daily test that will verify
> that our plugin framework actually works. During the Mitaka
> development  cycle, we already had an extended period where plugins
> were broken[1]. Removing this test (or leaving it permanently red,
> which is effectively the same), would raise the risk to any member of
> the Fuel community who depends on plugins actually working.
>
> The other impact of abandoning maintenance of example plugins is that
> it means that a given interested Fuel Plugin developer would not be
> able to easily get started with plugin development. It might not be
> inherently obvious to add the current Fuel release to the
> metadata.yaml file and it would likely discourage such a user. In this
> case, I would propose that we remove example plugins from fuel-plugins
> GIT repo if they are not maintained. Non-functioning code is worse
> than deleted code in my opinion.
>
> Please share your opinions and let's decide which way to go with this
> bug[2]
>
> [0] https://github.com/openstack/fuel-plugins/tree/master/examples
> [1] https://bugs.launchpad.net/fuel/+bug/1544505
> [2] https://launchpad.net/bugs/1548340
>
> Best Regards,
> Matthew Mosesohn
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Sean M. Collins
sridhar basam wrote:
> This doesn't sound like a neutron issue but an issue with how the
> conntrack module for GRE changed in the kernel in 3.18.
> 
> 
> http://comments.gmane.org/gmane.comp.security.firewalls.netfilter.general/47705
> 
> Sri

Oooh! Wicked nice find. Thanks Sri!

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] [FFE] Support TOSCA definitions for applications

2016-03-03 Thread Tetiana Lashchova
Hi all,

I would like to request a feature freeze exception for "Support TOSCA
definitions for applications" [1].
The spec is already merged [2], patch is on review [3] and the task is
almost finished.
I am looking forward for your decision about considering this change for a
FFE.

[1] https://blueprints.launchpad.net/murano/+spec/support-tosca-format
[2] https://review.openstack.org/#/c/194422/
[3] https://review.openstack.org/#/c/243872/

Thanks,
Tetiana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Eichberger, German
Hi Jon,

As part of our FWaaS V2 efforts [1] we have been rethinking FWaaS and Security
Groups. The idea is to eventually augment Security Groups with some richer
functionality and provide a default FWaaS policy to add to the (vm) port.
Furthermore, there is a way to "share" firewall rules with others, so you could
have a set the user could pick from. I think one of the problems highlighted in
this thread is that we can't think about securing a VM port independently of
the perimeter security. To use the example: the user wants ssh access to his
VM and doesn't care whether it's enforced at the vm, the router, or some
perimeter firewall; it should just work. So just looking at Security Groups as
this thread has done is probably too limited, and we likely need a bigger
effort to unify network security in OpenStack.

Thanks,
German




[1] 
https://www.openstack.org/summit/tokyo-2015/videos/presentation/openstack-neutron-fwaas-roadmap
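To make the ssh example concrete: until a richer FWaaS default policy exists, such a rule is expressed per security group through the Neutron v2.0 security-group-rules API. A minimal sketch of the request body follows; the group id and the "our networks" CIDR are placeholders, and the field names follow the Neutron security group rule schema:

```python
import json

# Hypothetical helper: build the body for POST /v2.0/security-group-rules.
# "sg-1234" and 10.0.0.0/8 are made-up placeholders for a real group id
# and a site's internal netblock.
def ssh_rule_body(security_group_id, remote_cidr="10.0.0.0/8"):
    return {
        "security_group_rule": {
            "security_group_id": security_group_id,
            "direction": "ingress",
            "ethertype": "IPv4",
            "protocol": "tcp",
            "port_range_min": 22,
            "port_range_max": 22,
            "remote_ip_prefix": remote_cidr,
        }
    }

body = ssh_rule_body("sg-1234")
print(json.dumps(body, indent=2))
```

The same shape is what `openstack security group rule create` sends under the hood, which is why operators today end up scripting these per project rather than setting a site-wide default.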

On 3/3/16, 7:12 AM, "Jonathan Proulx"  wrote:

>On Wed, Mar 02, 2016 at 10:19:50PM +0000, James Denton wrote:
>:My opinion is that the current stance of ‘deny all’ is probably the safest 
>bet for all parties (including users) at this point. It’s been that way for 
>years now, and is a substantial change that may result in little benefit. 
>After all, you’re probably looking at most users removing the default rule(s) 
>just to add something that’s more restrictive and suits their organization’s 
>security posture. If they aren’t, then it’s possible they’re introducing 
>unnecessary risk. 
>
>
>I agree wholeheartedly that reversing the default would be
>disastrous.
>
>It would be good if a site could define their own default, so I could
>say allow ssh from 'our' networks by default (but not the whole
>internet). Or maybe even further restrict egress traffic so that it
>could only talk to internal hosts.
>
>To go a little further down my wish list, I'd really like to be able
>to offer a standard selection of security groups for my site, not just
>'default', but that may be a bit off this topic.  Briefly, my
>motivation is that 'internal' here includes a number of different
>netblocks, some with pretty weird masks, so users tend to use 0.0.0.0/0
>when they don't really mean to, just to save some rather tedious
>typing at setup time.
>
>-Jon
>
>
>:
>:There should be some onus put on the provider and/or the user/project/tenant 
>to develop a default security policy that meets their needs, even going so far 
>as to make the configuration of their default security group the first thing 
>they do once the project is created. Maybe some changes to the workflow in 
>Horizon could help mitigate some issues users are experiencing with limited 
>access to instances by allowing them to apply some rules at the time of 
>instance creation rather than associating groups consisting of unknown rules. 
>Or allowing changes to the default security group rules of a project when that 
>project is created. There are some ways to enable providers/users to help 
>themselves rather than a blanket default change across all environments. If 
>I’m a user utilizing multiple OpenStack providers, I’m probably bringing my 
>own security groups and rules with me anyway and am not relying on any 
>provider defaults.
>: 
>:
>:James
>:
>:
>:
>:
>:
>:
>:
>:On 3/2/16, 3:47 PM, "Jeremy Stanley"  wrote:
>:
>:>On 2016-03-02 21:25:25 +0000 (+0000), Sean M. Collins wrote:
>:>> Jeremy Stanley wrote:
>:>> > On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
>:>> > [...]
>:>> > > In my mind, the default security group is there so that as people
>:>> > > are developing their security policy they can at least start with
>:>> > > a default that offers a small amount of protection.
>:>> > 
>:>> > Well, not a small amount of protection. The instances boot
>:>> > completely unreachable from the global Internet, so this is pretty
>:>> > significant protection if you consider the most secure system is one
>:>> > which isn't connected to anything.
>:>> 
>:>> This is only if you are booting on a v4 network, which has NAT enabled.
>:>> Many public providers, the network you attach to is publicly routed, and
>:>> with the move to IPv6 - this will become more common. Remember, NAT is
>:>> not a security device.
>:>
>:>I agree that address translation is a blight on the Internet, useful
>:>in some specific circumstances (such as virtual address load
>:>balancing) but otherwise an ugly workaround for dealing with address
>:>exhaustion and connecting conflicting address assignments. I'll be
>:>thrilled when its use trails off to the point that newcomers cease
>:>thinking that's what connectivity with the Internet is supposed to
>:>be like.
>:>
>:>What I was referring to in my last message was the default security
>:>group policy, which blocks all ingress traffic. My point was that
>:>dropping all inbound connections, while a pretty secure
>:>configuration, is unlikely to be the desired 

[openstack-dev] [release][ironic] ironic-lib 1.1.0 release (mitaka)

2016-03-03 Thread no-reply
We are pumped to announce the release of:

ironic-lib 1.1.0: Ironic common library

This release is part of the mitaka release series.

With package available at:

https://pypi.python.org/pypi/ironic-lib

For more details, please see below.

Changes in ironic-lib 1.0.0..1.1.0
----------------------------------

f8bb790 Fixes naming for the partitions in baremetal.
c19984b Add support for choosing the disk label
cfd7a91 Updated from global requirements
8f54601 Updated from global requirements

Diffstat (except docs and test files)
-------------------------------------

ironic_lib/disk_utils.py| 40 +---
requirements.txt|  6 +--
3 files changed, 116 insertions(+), 26 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index d8cd8b1..5cfcbd3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-eventlet>=0.18.2 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT
@@ -8 +8 @@ greenlet>=0.3.2 # MIT
-oslo.concurrency>=2.3.0 # Apache-2.0
+oslo.concurrency>=3.5.0 # Apache-2.0
@@ -12 +12 @@ oslo.service>=1.0.0 # Apache-2.0
-oslo.utils>=3.4.0 # Apache-2.0
+oslo.utils>=3.5.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Matt Riedemann



On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:

Yes, I agree with you guys, I'm also OK for non-admin users to list
their own instances no matter what status they are.

My question is this:
I have done some tests, yet we have 2 different ways to list deleted
instances (not counting using changes-since):

1.
"GET /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted 
HTTP/1.1"
(nova list --status deleted in CLI)
2. REQ: curl -g -i -X GET
http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
 (nova
list --deleted in CLI)

for admin user, we can all get deleted instances(after the fix of Matt's
patch).

But for non-admin users, #1 is restricted here:
https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350
and it will return 403 error:
RESP BODY: {"forbidden": {"message": "Only administrators may list deleted instances", 
"code": 403}}


This is part of the API so if we were going to allow non-admins to query 
for deleted servers using status=deleted, it would have to be a 
microversion change. [1] I could also see that being policy-driven.
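For reference, the two request forms under discussion differ only in the query string; a quick sketch with the stdlib, using the host and tenant id from the transcript below:

```python
from urllib.parse import urlencode

# Tenant URL taken from the transcript in this thread.
base = ("http://10.229.45.17:8774/v2.1/"
        "62bfb653eb0d4d5cabdf635dd8181313/servers/detail")

# Form #1: admin-only today; non-admins get a 403.
by_status = base + "?" + urlencode({"status": "deleted"})

# Form #2: accepted for non-admins, but does not actually
# filter the results down to deleted servers.
by_flag = base + "?" + urlencode({"deleted": "True"})

print(by_status)
print(by_flag)
```

If form #1 were opened up to non-admins, the request itself would not change, only the policy check (and, per the above, the minimum `X-OpenStack-Nova-API-Version` a client must send).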


It does seem odd and inconsistent though that non-admins can't query 
with status=deleted but they can query with deleted=True in the query 
options.




and for #2 it will strangely return servers that are not in deleted status:


This seems like a bug. I tried looking for something obvious in the code 
but I'm not seeing the issue, I'd suspect something down in the DB API 
code that's doing the filtering.
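One classic way such a filter bug creeps in: query parameters arrive as strings, and a naive truthiness check treats every non-empty value as true. This is purely illustrative of the pattern, not a claim about the actual nova DB API code; nova generally parses such flags with something like oslo.utils' `strutils.bool_from_string`, sketched here in simplified form:

```python
# Query parameters arrive as strings, so a naive truthiness check is wrong:
raw = {"deleted": "False"}

naive = bool(raw["deleted"])  # True: any non-empty string is truthy

def bool_from_string(value):
    # Simplified stand-in for oslo.utils strutils.bool_from_string().
    return str(value).strip().lower() in ("1", "t", "true", "on", "y", "yes")

parsed = bool_from_string(raw["deleted"])  # False, as intended
print(naive, parsed)
```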




DEBUG (connectionpool:387) "GET
/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
HTTP/1.1" 200 3361
DEBUG (session:235) RESP: [200] Content-Length: 3361
X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
X-OpenStack-Nova-API-Version Connection: keep-alive
X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
Content-Type: application/json
RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
"2016-02-29T06:24:16Z", "hostId":
"56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
{"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version":
4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
"fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
"links": [{"href":
"http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70",
"rel": "self"}, {"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70",
"rel": "bookmark"}], "key_name": null, "image": {"id":
"6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4",
"rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
"2016-02-29T06:24:16.00", "flavor": {"id": "1", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1",
"rel": "bookmark"}]}, "id": "ee8907c7-0730-4051-8426-64be44300e70",
"security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
null, "OS-EXT-AZ:availability_zone": "nova", "user_id":
"da935c024dc1444abb7b32390eac4e0b", "name": "test_inject", "created":
"2016-02-29T06:24:08Z", "tenant_id": "62bfb653eb0d4d5cabdf635dd8181313",
"OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached":
[], "accessIPv4": "", "accessIPv6": "", "progress": 0,
"OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": {}},
{"status": "ACTIVE", "updated": "2016-02-29T06:21:22Z", "hostId":
"56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
{"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:63:b0:12", "version":
4, "addr": "10.0.0.13", "OS-EXT-IPS:type": "fixed"},
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:63:b0:12", "version": 6, "addr":
"fdb7:5d7b:6dcd:0:f816:3eff:fe63:b012", "OS-EXT-IPS:type": "fixed"}]},
"links": [{"href":
"http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/40bab05f-0692-43df-a8a9-e7c0d58a73bd",
"rel": "self"}, {"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/40bab05f-0692-43df-a8a9-e7c0d58a73bd",
"rel": "bookmark"}], "key_name": null, "image": {"id":
"6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4",
"rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
"2016-02-29T06:21:22.00", "flavor": {"id": "1", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1",
"rel": "bookmark"}]}, "id": "40bab05f-0692-43df-a8a9-e7c0d58a73bd",
"security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
null, "OS-EXT-AZ:availability_zone": "nova", "user_id":

Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-03 Thread Alex Schultz
On Thu, Mar 3, 2016 at 7:19 AM, Matthew Mosesohn  wrote:
>
> Hi Fuelers,
>
> I would like to bring your attention a dilemma we have here. It seems
> that there is a dispute as to whether we should maintain the releases
> list for example plugins[0]. In this case, this is for adding version
> 9.0 to the list.
>
> Right now, we run a swarm test that tries to install the example
> plugin and do a deployment, but it's failing only for this reason. I
> should add that this is the only automated daily test that will verify
> that our plugin framework actually works. During the Mitaka
> development  cycle, we already had an extended period where plugins
> were broken[1]. Removing this test (or leaving it permanently red,
> which is effectively the same), would raise the risk to any member of
> the Fuel community who depends on plugins actually working.
>

IMHO we need to fix the plugins, and this should just be part of the
basic maintenance of the plugins for each release cycle. These are
effectively documentation that needs to be updated on a regular basis
and should not be allowed to go stale.  Integrating with Fuel and plugins
is already a complex task, so having something that can be used as an
example is very important from an end user experience standpoint.
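For the concrete bug at hand, keeping the example plugins working is mostly a matter of extending the releases list in each example's metadata.yaml whenever a new Fuel release lands. A hypothetical sketch of that section; the field names follow the fuel plugin builder templates, while the paths and version strings are illustrative:

```yaml
# metadata.yaml (excerpt) -- illustrative only
releases:
  - os: ubuntu
    version: liberty-8.0
    mode: ['ha']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu
  # New entry required for the bug discussed here:
  - os: ubuntu
    version: mitaka-9.0
    mode: ['ha']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu
```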

>
> The other impact of abandoning maintenance of example plugins is that
> it means that a given interested Fuel Plugin developer would not be
> able to easily get started with plugin development. It might not be
> inherently obvious to add the current Fuel release to the
> metadata.yaml file and it would likely discourage such a user. In this
> case, I would propose that we remove example plugins from fuel-plugins
> GIT repo if they are not maintained. Non-functioning code is worse
> than deleted code in my opinion.
>
> Please share your opinions and let's decide which way to go with this bug[2]
>
> [0] https://github.com/openstack/fuel-plugins/tree/master/examples
> [1] https://bugs.launchpad.net/fuel/+bug/1544505
> [2] https://launchpad.net/bugs/1548340
>
> Best Regards,
> Matthew Mosesohn
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL for Newton and beyond

2016-03-03 Thread Joshua Harlow

On 03/03/2016 04:05 AM, Flavio Percoco wrote:

Thanks for all your hard work as an Oslo PTL. You did amazing and I
think you'd
do an awesome mentor for other folks as well. It was an honor to have
you as a
PTL and I look forward to keep working with you as an Oslo contributor.


+2 +A ;)

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Matt Riedemann



On 3/3/2016 10:02 AM, Matt Riedemann wrote:



On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:

Yes, I agree with you guys, I'm also OK for non-admin users to list
their own instances no matter what status they are.

My question is this:
I have done some tests, yet we have 2 different ways to list deleted
instances (not counting using changes-since):

1.
"GET
/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
HTTP/1.1"
(nova list --status deleted in CLI)
2. REQ: curl -g -i -X GET
http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
(nova
list --deleted in CLI)

for admin user, we can all get deleted instances(after the fix of Matt's
patch).

But for non-admin users, #1 is restricted here:
https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350

and it will return 403 error:
RESP BODY: {"forbidden": {"message": "Only administrators may list
deleted instances", "code": 403}}


This is part of the API so if we were going to allow non-admins to query
for deleted servers using status=deleted, it would have to be a
microversion change. [1] I could also see that being policy-driven.

It does seem odd and inconsistent though that non-admins can't query
with status=deleted but they can query with deleted=True in the query
options.



and for #2 it will strangely return servers that are not in deleted
status:


This seems like a bug. I tried looking for something obvious in the code
but I'm not seeing the issue, I'd suspect something down in the DB API
code that's doing the filtering.



DEBUG (connectionpool:387) "GET
/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
HTTP/1.1" 200 3361
DEBUG (session:235) RESP: [200] Content-Length: 3361
X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
X-OpenStack-Nova-API-Version Connection: keep-alive
X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
Content-Type: application/json
RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
"2016-02-29T06:24:16Z", "hostId":
"56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
{"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version":
4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
"fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
"links": [{"href":
"http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70",

"rel": "self"}, {"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70",

"rel": "bookmark"}], "key_name": null, "image": {"id":
"6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4",

"rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
"2016-02-29T06:24:16.00", "flavor": {"id": "1", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1",
"rel": "bookmark"}]}, "id": "ee8907c7-0730-4051-8426-64be44300e70",
"security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
null, "OS-EXT-AZ:availability_zone": "nova", "user_id":
"da935c024dc1444abb7b32390eac4e0b", "name": "test_inject", "created":
"2016-02-29T06:24:08Z", "tenant_id": "62bfb653eb0d4d5cabdf635dd8181313",
"OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached":
[], "accessIPv4": "", "accessIPv6": "", "progress": 0,
"OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": {}},
{"status": "ACTIVE", "updated": "2016-02-29T06:21:22Z", "hostId":
"56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
{"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:63:b0:12", "version":
4, "addr": "10.0.0.13", "OS-EXT-IPS:type": "fixed"},
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:63:b0:12", "version": 6, "addr":
"fdb7:5d7b:6dcd:0:f816:3eff:fe63:b012", "OS-EXT-IPS:type": "fixed"}]},
"links": [{"href":
"http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/40bab05f-0692-43df-a8a9-e7c0d58a73bd",

"rel": "self"}, {"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/40bab05f-0692-43df-a8a9-e7c0d58a73bd",

"rel": "bookmark"}], "key_name": null, "image": {"id":
"6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4",

"rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at":
"2016-02-29T06:21:22.00", "flavor": {"id": "1", "links": [{"href":
"http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1",
"rel": "bookmark"}]}, "id": "40bab05f-0692-43df-a8a9-e7c0d58a73bd",
"security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at":
null, 
