Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Joshua Harlow

joehuang wrote:

Cells is a good enhancement for Nova scalability, but there are some issues in 
deploying Cells for massively distributed edge clouds:

1) Using RPC for inter-data-center communication makes inter-DC troubleshooting 
and maintenance difficult and raises critical operational issues. There is no 
CLI, RESTful API, or other tool to manage a child cell directly. If the link 
between the API cell and a child cell is broken, the child cell in the remote 
edge cloud is unmanageable, whether locally or remotely.

2) Security management for inter-site RPC communication is a challenge. Please 
refer to the slides [1] for challenge 3, "Securing OpenStack over the 
Internet": over 500 pinholes had to be opened in the firewall to allow this to 
work, including ports for VNC and SSH for CLIs. Using RPC in Cells for edge 
clouds will face the same security challenges.

3) Only Nova supports Cells, but Nova is not the only project that needs to 
support edge clouds; Neutron and Cinder should be taken into account too. How 
would Neutron support service function chaining in edge clouds? Using RPC? How 
would the challenges mentioned above be addressed? And what about Cinder?

4) Using RPC for the production integration of hundreds of edge clouds is 
quite a challenging idea; it is a basic requirement that these edge clouds may 
be bought from multiple vendors, for hardware, software, or both.

That means using Cells in production for massively distributed edge clouds is 
quite a bad idea. If Cells provided a RESTful interface between the API cell 
and child cells it would be much more acceptable, but still not enough, and 
the same applies to Cinder and Neutron. Alternatively, deploy a lightweight 
OpenStack instance in each edge cloud, for example one rack; the question then 
is how to manage the large number of OpenStack instances and provision 
services.

[1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

Best Regards
Chaoyi Huang(joehuang)



Very interesting questions,

I'm starting to think that the API you want isn't really nova, neutron, 
or cinder at this point though. At some point it feels like the effort 
you are spending on things like service chaining (there is a south park 
episode I almost linked here, but decided I probably shouldn't) would 
almost be better served by a top-level API that knows how to communicate 
with the more isolated silos (edge clouds, I guess you are calling them).


It just starts to feel that the architecture you want and the one I see 
being built are quite different, and I haven't seen the latter shift 
toward something different, so maybe it's time to turn the problem on 
its head and accept that a solution may/will have to figure out how to 
unify a bunch of disjoint clouds (as best it can)?


I know I can say that I'd like such a thing as well, because though 
godaddy doesn't have hundreds of edge clouds, it is approaching more 
than a handful of disjoint clouds (across the world), and a way to join 
them behind something that can unify them (across just nova) as much as 
it can would be welcome.


-Josh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] store and client stable branches for newton

2016-08-30 Thread Nikhil Komawar
Hi,

The stable/newton branches have been cut and the respective commits have
merged for both the store and client libraries [1]. Please feel free to
merge any pending commits on master; note, however, that we won't be
releasing any libraries until R-0, when the freeze on libraries lifts.

Let me know if there are questions.

[1] https://review.openstack.org/#/q/topic:create-newton


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] upcoming freeze (was Re: [release] Release countdown for week R-5, 29 Aug - 2 Sept)

2016-08-30 Thread Tony Breeds
Hi all,
I'd just like to add a point to Doug's email.

Along with this being the final release deadline for client libraries, it's also
the requirements freeze.

That means that on Thursday Sept 1st (UTC) we'll add a procedural -2 to all
open changes in the master branch of openstack/requirements.

We've already started looking very hard at reviews that alter
global-requirements for large knock-on effects.

Of course there is an exception process for things that absolutely *must* land in
Newton.

Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Mike Bayer



On 08/30/2016 08:04 PM, Clint Byrum wrote:


My direct experience with this was MySQL 5.0 and 5.1. They worked as
documented, and no I don't think they've changed much since then.

When they were actually installed into the schema and up to date with
the code that expected them, and the debugging individual was aware of them, 
things were fine.

However, every other imperative part of the code was asserted with git,
package managers, ansible, puppet, pick your choice of thing that puts
files on disk and restarts daemons. These things all have obvious entry 
points too. X is where wsgi starts running code. Y is where flask hands
off to the app, etc. But triggers are special and go in the database at
whatever time they go in. This means you lose all the benefit of all of
the tools you're used to using to debug and operate on imperative code.


to use your phrasing, I'd characterize this as "an unnecessarily bleak 
view" of the use of triggers as a whole.  I've no doubt you worked with 
some horrible trigger code (just as I've worked with some horrible 
application code, and with horrible stored procedure / trigger stuff 
too).


The triggers that have been in play in the current Keystone proposals as 
well as the one we were working with in Neutron were simple one liners 
that essentially act as custom constraints - they check a condition then 
raise an error if it fails.  In particular, MySQL doesn't have support 
for CHECK constraints, so if you want to assert that values going into a 
row have some quality more exotic than "not null", you might have to use 
a trigger to get this effect.
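
As a sketch (names invented here, not taken from the actual Keystone or 
Neutron patches; SIGNAL needs MySQL 5.5+), such a constraint-style 
trigger is only a few lines:

CREATE TRIGGER resource_name_not_empty
BEFORE INSERT ON resource
FOR EACH ROW
BEGIN
    -- emulate CHECK (name <> ''): reject the row with an error
    IF NEW.name = '' THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'resource.name must not be empty';
    END IF;
END;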


Clearly, a trigger that is so complex that it is invoking a whole series 
of imperative steps is not a trigger any of us should be considering. 
IMO these are not those triggers.





Of course, you can have books that get their edition 0 updated in book
while you're upgrading. But the editions feature code always treats
that old update as an update to edition 0.  It's still the same object
it always was, your app just makes some assumptions about it. You can
use a union in some cases where you need to see them all for instance,
and just select a literal '0' for the edition column of your union.


I find unions to be very awkward and really subject to poor performance. 
 Of course this can be made to work but I'm sticking to my preference 
for getting the data in the right shape on the write side, not the read 
side.
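
For concreteness, the union approach against the book/book_editions 
schema shown later in this thread would be something like this (a 
sketch only; the NOT EXISTS guard keeps rows present in both tables 
from appearing twice):

SELECT b.isbn, 0 AS edition, b.description
FROM book b
WHERE NOT EXISTS
    (SELECT 1 FROM book_editions be WHERE be.isbn = b.isbn)
UNION ALL
SELECT isbn, edition, description
FROM book_editions;

That is exactly the kind of statement I'd rather not scatter through a 
data access layer.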




And one can say "old app is gone" when one knows it's gone. At that point,
one can run a migration that inserts 0 editions into book_editions, and
drops the book table. For OpenStack, we can say "all releases that used
that old schema are EOL, so we can simplify the code now". Our 6 month
pace and short EOL windows are built for this kind of thing.


Assuming we aren't able to use Nova's approach and we're stuck 
translating in the data access layer, we can simplify the code and put 
out a new release, although that "simplified" code now has to be 
"unsimplified" by all the *new* schema changes - code will always be 
carrying along junk to try and adapt it to the previous version of the 
software.   There's no problem if projects in this situation want to do 
it this way and I will gladly support everyone's efforts in going this 
route.However, I still think it's worth looking into approaches that 
can push the interaction between old and new app version into the write 
side instead of the read side, and if that interaction can be removed 
from the primary database access code into a separate layer.


To the degree that teams can just emulate Nova's finessing of the issue 
at the service level, that's even better.   This thread is just in 
response to particular teams who *want* to use triggers for a specific 
problem.Hopefully I will have time to flesh out my alternative 
technique for "application-level translation triggers" and maybe those 
folks might want to try that kind of thing too someday.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-08-30 Thread Alex Xu
Hi,

We have the weekly Nova API meeting today. The meeting is held Wednesdays at
1300 UTC and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread joehuang
Cells is a good enhancement for Nova scalability, but there are some issues in 
deploying Cells for massively distributed edge clouds:

1) Using RPC for inter-data-center communication makes inter-DC troubleshooting 
and maintenance difficult and raises critical operational issues. There is no 
CLI, RESTful API, or other tool to manage a child cell directly. If the link 
between the API cell and a child cell is broken, the child cell in the remote 
edge cloud is unmanageable, whether locally or remotely.

2) Security management for inter-site RPC communication is a challenge. Please 
refer to the slides [1] for challenge 3, "Securing OpenStack over the 
Internet": over 500 pinholes had to be opened in the firewall to allow this to 
work, including ports for VNC and SSH for CLIs. Using RPC in Cells for edge 
clouds will face the same security challenges.

3) Only Nova supports Cells, but Nova is not the only project that needs to 
support edge clouds; Neutron and Cinder should be taken into account too. How 
would Neutron support service function chaining in edge clouds? Using RPC? How 
would the challenges mentioned above be addressed? And what about Cinder?

4) Using RPC for the production integration of hundreds of edge clouds is 
quite a challenging idea; it is a basic requirement that these edge clouds may 
be bought from multiple vendors, for hardware, software, or both.

That means using Cells in production for massively distributed edge clouds is 
quite a bad idea. If Cells provided a RESTful interface between the API cell 
and child cells it would be much more acceptable, but still not enough, and 
the same applies to Cinder and Neutron. Alternatively, deploy a lightweight 
OpenStack instance in each edge cloud, for example one rack; the question then 
is how to manage the large number of OpenStack instances and provision 
services.

[1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

Best Regards
Chaoyi Huang(joehuang)


From: Andrew Laski [and...@lascii.com]
Sent: 30 August 2016 21:03
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> Dear all
>
> Sorry for my lack of reactivity; I've been out for the last few days.
>
> According to the different replies, I think we should enlarge the
> discussion and not stay on the vCPE use-case, which is clearly specific
> and represents only one use-case among the ones we would like to study.
> For instance, we are in touch with NRENs in France and Poland that are
> interested in deploying up to one rack in each of their largest PoPs in order
> to provide a distributed IaaS platform (for further information you can
> take a look at the presentation we gave during the last summit [1] [2]).
>
> The two questions were:
> 1./ Understand whether the fog/edge computing use case is in the scope of
> the Architecture WG and if not, do we need a massively distributed WG?

Besides the question of which WG this might fall under, there is the
question of how any of the work groups are going to engage with the
project communities. There is a group of developers pushing forward on
cellsv2 in Nova; there should be some level of engagement between them
and whoever is discussing the fog/edge computing use case. To me it
seems like there's some level of overlap between the efforts even if
cellsv2 is not a full solution. But whatever conversations are taking
place about fog/edge or large-scale distributed use cases seem to be
happening in channels that I am not aware of, and I haven't heard any
other cells developers mention them either.

So let's please find a way for people who are interested in these use
cases to talk to the developers who are working on similar things.


> 2./ How can we coordinate our actions with the ones performed in the
> Architecture WG?
>
> Regarding 1./, according to the different reactions, I propose to write a
> first draft in an etherpad to present the main goal of the Massively
> distributed WG and how people interested in such discussions can interact
> (I will paste the link to the etherpad by tomorrow).
>
> Regarding 2./,  I mentioned the Architecture WG because we do not want to
> develop additional software layers like Tricircle or other solutions (at
> least for the moment).
> The goal of the WG is to conduct studies and experiments to identify to
> what extent current mechanisms can satisfy the needs of such massively
> distributed use-cases and what the missing elements are.
>
> I don't want to give too many details in the present mail in order to stay
> as concise as possible (details will be given in the proposal).
>
> Best regards,
> Adrien
>
> [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case
> introduction; the distribution of the DB was one possible revision of
> Nova and, according to the cells v2 changes, it is probably now deprecated).
> [2] 

Re: [openstack-dev] The State of the NFS Driver ...

2016-08-30 Thread Matt Riedemann

On 8/30/2016 10:50 AM, Jay S. Bryant wrote:

All,

I wanted to follow up on the e-mail thread [1] on Cloning support in the
NFS driver.  The purpose of this e-mail is to provide the plan for the
NFS driver going forward as I see it.

First, I am aware that the driver has gone quite some time without care
and feeding.  For a number of reasons, the Public Cloud team within IBM
is currently dependent upon the NFS driver working properly for the
cloud environment we are building.  Given our current dependence on the
driver we are planning on picking up the driver and maintaining it.

The first step in this process was getting the existing patch that adds
snapshot support for NFS [2] rebased.  I did this work a couple of weeks
ago and also got all the unit tests working for the unit test
environment on the master branch.  I now see that it is in merge
conflict again, I plan to continue to keep the patch up-to-date.

Erlon has been investigating issues with attaching snapshots.  It
appears that this may be related to AppArmor running on the system where
the VM is running and attachment is being attempted.  I am hoping to
look into the other questions posed in the patch review in the next week
or two.

The next step is to create a dependent patch, upon the snapshot patch,
to implement cloning.  I am planning to also undertake this work.  I am
assuming that getting the cloning support in place shouldn't be too
difficult once snapshots are working as it will be just a matter of
using the support from the remotefs driver.

The last piece of work we have in flight is working on adding QoS
support to the NFS driver.  We have the following spec proposed to get
that work started: [3]

So, we are in the process of bringing the NFS driver up to good
standing.  During this process we would greatly appreciate reviews and
input from those of you who have previously worked on the driver in
order to expedite integration of the necessary changes. I feel it is in
the best interest of the community to get the driver updated and
supported given that it is the 4th most used driver according to our
user survey.  I think it would not look good to our users if it were to
suddenly be removed.

Thanks to all of you for your support in this effort!

Jay

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102193.html

[2] https://review.openstack.org/#/c/147186/

[3] https://review.openstack.org/361456


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



IMO priority #1 is getting the NFS job passing consistently. Who is 
working on that? Last I checked it was failing a bunch because it was 
running snapshot and clone tests, which obviously don't work since that 
support isn't implemented in the driver. I think configuring tempest in 
the devstack-plugin-nfs repo is fairly straightforward; someone just 
needs to do it.


But at least that gets you closer to a clean NFS job run, which gets it 
out of the experimental queue (possibly) and in as a non-voting job in 
Cinder, so you can see if you're regressing anything (or if anything 
else regresses it once you have clean CI runs).


My 2 cents.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2016-08-30 18:15:14 -0400:
> 
> On 08/30/2016 04:43 PM, Clint Byrum wrote:
> >>
> >
> > Correct, it is harder for development. Since the database server has all
> > of the potential for the worst problems, being a stateful service, then
> > I believe moving complexity _out_ of it, is generally an operational
> > win, at the expense of some development effort. The development effort,
> > however, is mostly on the front of the pipeline where timelines can be
> > longer. Operations typically is operating under SLA's and with
> > requirements to move slowly in defense of people's data and performance
> > of the system. So I suggest that paying costs in dev, vs. at the
> > database is usually the highest value choice.
> >
> > This is of course not the case if timelines are short for development as
> > well, but I can't really answer the question in that case. For OpenStack,
> > we nearly always find ourselves with more time to develop, than operators
> > do to operate.
> 
> So the idea of triggers is hey, for easy things like column X is now 
> column Y elsewhere, instead of complicating the code, use a trigger to 
> maintain that value.   Your argument against triggers is: "Triggers 
> introduce emergent behaviors and complicate scaling and reasonable 
> debugging in somewhat hidden ways that
> can frustrate even the most experienced DBA."
> 
> I'd wager that triggers probably work a little more smoothly in modern 
> MySQL/Postgresql than a more classical "DBA" platform like a crusty old 
> MS SQL Server or Oracle, but more examples on these emergent behaviors 
> would be useful, as well as evidence that they apply to current versions 
> of database software that are in use within OpenStack, and are 
> disruptive enough that even the most clear-cut case for triggers vs. 
> in-application complexity should favor in-app complexity without question.
> 

My direct experience with this was MySQL 5.0 and 5.1. They worked as
documented, and no I don't think they've changed much since then.

When they were actually installed into the schema and up to date with
the code that expected them, and the debugging individual was aware of them, 
things were fine.

However, every other imperative part of the code was asserted with git,
package managers, ansible, puppet, pick your choice of thing that puts
files on disk and restarts daemons. These things all have obvious entry
points too. X is where wsgi starts running code. Y is where flask hands
off to the app, etc. But triggers are special and go in the database at
whatever time they go in. This means you lose all the benefit of all of
the tools you're used to using to debug and operate on imperative code.
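
Sure, you can dig them back out of the catalog if you already suspect 
they are there -- in MySQL, something like the query below (the schema 
name is just an example):

-- list the triggers installed in a given schema
SELECT trigger_name, event_manipulation, event_object_table
FROM information_schema.triggers
WHERE trigger_schema = 'nova';

But that only helps once you already know to go looking, which is 
precisely the problem.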

> >
> >>> I don't think it's all that ambitious to think we can just use tried and
> >>> tested schema evolution techniques that work for everyone else.
> >>
> >> People have been asking me for over a year how to do this, and I have no
> >> easy answer, I'm glad that you do.  I would like to see some examples of
> >> these techniques.
> >>
> >> If you can show me the SQL access code that deals with the above change,
> >> that would help a lot.
> >>
> >
> > So schema changes fall into several categories. But basically, the only
> > one that is hard, is a relationship change. Basically, a new PK. Here's
> > an example:
> >
> > Book.isbn was the PK, but we want to have a record per edition, so the
> > new primary key is (isbn, edition).
> >
> > Solution: Maintain two tables. You have created an entirely new object!
> >
> > CREATE TABLE book (
> >   isbn varchar(30) not null primary key,
> >   description text
> > );
> >
> > CREATE TABLE book_editions (
> >   isbn varchar(30) not null,
> >   edition int not null,
> >   description text,
> >   primary key (isbn, edition)
> > );
> >
> > And now on read, your new code has to do this:
> >
> > SELECT b.isbn,
> >COALESCE(be.edition, 0) AS edition,
> >COALESCE(be.description, b.description) AS description
> > FROM book b
> >  LEFT OUTER JOIN book_editions be
> >  ON b.isbn = be.isbn
> > WHERE b.isbn = 'fooisbn'
> >
> > And now, if a book has only ever been written by old code, you get one
> > record with a 0 edition. And if it were written by the new system, the
> > new system would need to go ahead and duplicate the book description into
> > the old table for as long as we have code that might expect it.
> 
> So some pain points here are:
> 
> 1. you really can't ever trust what's in book_editions.description as 
> long as any "old" application is running, since it can put new data into 
> book.description at any time.  You shouldn't bother reading from it at 
> all, just write to it. You won't be able to use it until the next 
> version of the application, e.g. "new" + 1. Or if you support some kind 
> of "old app is gone! " flag that modifies the behavior of "new" app to 
> modify all its queries, which is even more awkward.
> 

Of course, you can have books that get 

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Lance Bragstad
Since the encrypted credential work is currently based on triggers, I spent
most of today documenting a walk-through migration from Mitaka to Newton
[0]. Regardless of the outcome discussed here - figured it would be worth
sharing since it's relevant to the thread. Most of the gist contains stuff
not directly related to the upgrade from Mitaka to Newton, like config
files and install processes. I included them anyway since I started with a
green field deployment on Ubuntu 16.04.

Everything is technically still up for review so if you notice anything
fishy about the implementation via the walkthrough feel free to leave a
comment.

[0]
https://gist.github.com/lbragstad/ddfb10f9f9048414d1f781ba006e95d1#file-migration-md

On Tue, Aug 30, 2016 at 5:15 PM, Mike Bayer  wrote:

>
>
> On 08/30/2016 04:43 PM, Clint Byrum wrote:
>
>>
>>>
>> Correct, it is harder for development. Since the database server has all
>> of the potential for the worst problems, being a stateful service, then
>> I believe moving complexity _out_ of it, is generally an operational
>> win, at the expense of some development effort. The development effort,
>> however, is mostly on the front of the pipeline where timelines can be
>> longer. Operations typically is operating under SLA's and with
>> requirements to move slowly in defense of people's data and performance
>> of the system. So I suggest that paying costs in dev, vs. at the
>> database is usually the highest value choice.
>>
>> This is of course not the case if timelines are short for development as
>> well, but I can't really answer the question in that case. For OpenStack,
>> we nearly always find ourselves with more time to develop, than operators
>> do to operate.
>>
>
> So the idea of triggers is hey, for easy things like column X is now
> column Y elsewhere, instead of complicating the code, use a trigger to
> maintain that value.   Your argument against triggers is: "Triggers
> introduce emergent behaviors and complicate scaling and reasonable
> debugging in somewhat hidden ways that
> can frustrate even the most experienced DBA."
>
> I'd wager that triggers probably work a little more smoothly in modern
> MySQL/Postgresql than a more classical "DBA" platform like a crusty old MS
> SQL Server or Oracle, but more examples on these emergent behaviors would
> be useful, as well as evidence that they apply to current versions of
> database software that are in use within OpenStack, and are disruptive
> enough that even the most clear-cut case for triggers vs. in-application
> complexity should favor in-app complexity without question.
>
>
>
>
>
>> I don't think it's all that ambitious to think we can just use tried and
>> tested schema evolution techniques that work for everyone else.

>>>
>>> People have been asking me for over a year how to do this, and I have no
>>> easy answer, I'm glad that you do.  I would like to see some examples of
>>> these techniques.
>>>
>>> If you can show me the SQL access code that deals with the above change,
>>> that would help a lot.
>>>
>>>
>> So schema changes fall into several categories. But basically, the only
>> one that is hard, is a relationship change. Basically, a new PK. Here's
>> an example:
>>
>> Book.isbn was the PK, but we want to have a record per edition, so the
>> new primary key is (isbn, edition).
>>
>> Solution: Maintain two tables. You have created an entirely new object!
>>
>> CREATE TABLE book (
>>   isbn varchar(30) not null primary key,
>>   description text
>> );
>>
>> CREATE TABLE book_editions (
>>   isbn varchar(30) not null,
>>   edition int not null,
>>   description text,
>>   primary key (isbn, edition)
>> );
>>
>> And now on read, your new code has to do this:
>>
>> SELECT b.isbn,
>>COALESCE(be.edition, 0) AS edition,
>>COALESCE(be.description, b.description) AS description
>> FROM book b
>>  LEFT OUTER JOIN book_editions be
>>  ON b.isbn = be.isbn
>> WHERE b.isbn = 'fooisbn'
>>
>> And now, if a book has only ever been written by old code, you get one
>> record with a 0 edition. And if it were written by the new system, the
>> new system would need to go ahead and duplicate the book description into
>> the old table for as long as we have code that might expect it.
>>
>
> So some pain points here are:
>
> 1. you really can't ever trust what's in book_editions.description as long
> as any "old" application is running, since it can put new data into
> book.description at any time.  You shouldn't bother reading from it at all,
> just write to it. You won't be able to use it until the next version of the
> application, e.g. "new" + 1. Or if you support some kind of "old app is
> gone! " flag that modifies the behavior of "new" app to modify all its
> queries, which is even more awkward.
>
> 2. deletes by "old" app of entries in "book" have to be synchronized
> offline by a background script of some kind.  You at least need to run a
> final, authoritative "clean up 

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Mike Bayer



On 08/30/2016 04:43 PM, Clint Byrum wrote:




Correct, it is harder for development. Since the database server has all
of the potential for the worst problems, being a stateful service, then
I believe moving complexity _out_ of it, is generally an operational
win, at the expense of some development effort. The development effort,
however, is mostly on the front of the pipeline where timelines can be
longer. Operations typically is operating under SLA's and with
requirements to move slowly in defense of people's data and performance
of the system. So I suggest that paying costs in dev, vs. at the
database is usually the highest value choice.

This is of course not the case if timelines are short for development as
well, but I can't really answer the question in that case. For OpenStack,
we nearly always find ourselves with more time to develop, than operators
do to operate.


So the idea of triggers is hey, for easy things like column X is now 
column Y elsewhere, instead of complicating the code, use a trigger to 
maintain that value.   Your argument against triggers is: "Triggers 
introduce emergent behaviors and complicate scaling and reasonable 
debugging in somewhat hidden ways that can frustrate even the most 
experienced DBA."
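
For the "column X is now column Y" case, I mean something on the order 
of the following one-liner (table and column names hypothetical; a 
matching BEFORE UPDATE trigger would ride along with it):

-- old readers still see "hostname"; new code writes only "display_name"
CREATE TRIGGER instances_sync_hostname
BEFORE INSERT ON instances
FOR EACH ROW
    SET NEW.hostname = COALESCE(NEW.hostname, NEW.display_name);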

I'd wager that triggers probably work a little more smoothly in modern 
MySQL/Postgresql than a more classical "DBA" platform like a crusty old 
MS SQL Server or Oracle, but more examples on these emergent behaviors 
would be useful, as well as evidence that they apply to current versions 
of database software that are in use within OpenStack, and are 
disruptive enough that even the most clear-cut case for triggers vs. 
in-application complexity should favor in-app complexity without question.







I don't think it's all that ambitious to think we can just use tried and
tested schema evolution techniques that work for everyone else.


People have been asking me for over a year how to do this, and I have no
easy answer, I'm glad that you do.  I would like to see some examples of
these techniques.

If you can show me the SQL access code that deals with the above change,
that would help a lot.



So schema changes fall into several categories. But basically, the only
one that is hard, is a relationship change. Basically, a new PK. Here's
an example:

Book.isbn was the PK, but we want to have a record per edition, so the
new primary key is (isbn, edition).

Solution: Maintain two tables. You have created an entirely new object!

CREATE TABLE book (
  isbn varchar(30) not null primary key,
  description text
);

CREATE TABLE book_editions (
  isbn varchar(30) not null,
  edition int not null,
  description text,
  primary key (isbn, edition)
);

And now on read, your new code has to do this:

SELECT b.isbn,
   COALESCE(be.edition, 0) AS edition,
   COALESCE(be.description, b.description) AS description
FROM book b
 LEFT OUTER JOIN book_editions be
 ON b.isbn = be.isbn
WHERE b.isbn = 'fooisbn'

And now, if a book has only ever been written by old code, you get one
record with a 0 edition. And if it were written by the new system, the
new system would need to go ahead and duplicate the book description into
the old table for as long as we have code that might expect it.


So some pain points here are:

1. you really can't ever trust what's in book_editions.description as 
long as any "old" application is running, since it can put new data into 
book.description at any time.  You shouldn't bother reading from it at 
all, just write to it. You won't be able to use it until the next 
version of the application, e.g. "new" + 1. Or if you support some kind 
of "old app is gone! " flag that modifies the behavior of "new" app to 
modify all its queries, which is even more awkward.


2. deletes by "old" app of entries in "book" have to be synchronized 
offline by a background script of some kind.  You at least need to run a 
final, authoritative "clean up all the old book deletions" job before 
you go into "old app is gone" mode and the new app begins reading from 
book_editions alone.


3. LEFT OUTER JOINs can be a major performance hit.   You can't turn it 
off here until you go to version "new + 1" (bad performance locked in 
for a whole release cycle) or your app has a "turn off old app mode" 
flag (basically you have to write two different database access layers).


Contrast to the trigger approach, which removes all the SELECT pain and 
moves it all to writes:


1. new application has no code whatsoever referring to old application

2. no performance hit on SELECT

3. no "wait til version "new+1"" and/or "old app is gone" switch

If we have evidence that triggers are always, definitely, universally 
going to make even this extremely simple use case non-feasible, great, 
let's measure and test for that.   But in a case like this they look 
very attractive and I'd hate to just dispense with them unilaterally 
without a case-by-case examination.


As I wrote this, 

[openstack-dev] [kolla] Deadline Aug 31 for Newton Milestone #3

2016-08-30 Thread Steven Dake (stdake)
Hey folks,

The deadline for milestone #3 is August 31st (this is when we are tagging 
milestone #3).  We then have until September 15th to stabilize the release and 
tag rc1.  The release team branches Ocata at rc1, so if a feature doesn’t make 
rc1, it's not making Newton.  After rc1, any patches in master required for 
Newton need to be backported into the stable/newton branch.

As far as bugs are concerned, we have a bit more time to release 3.0.0 because 
we use the cycle_trailing model.  However, our m1/m2/m3/rc1/rc2 must be on 
schedule.

Regards
-stee

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-30 Thread Matt Riedemann

On 8/30/2016 4:36 PM, Michael Still wrote:

Sorry for being slow on this one, I've been pulled into some internal
things at work.

So... Talking to Matt Riedemann just now, it seems like we should
continue to pass through the user authentication details when we have
them to the plugin. The problem is what to do in the case where we do
not (which is mostly going to be when the instance itself makes a
metadata request).

I think what you're saying though is that the middleware won't let any
requests through if they have no auth details? Is that correct?

Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young wrote:

On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/ added a new dynamic
metadata handler to nova. The basic gist is that rather than serving
metadata statically, it can be done dynamically, so that certain
values aren't provided until they are needed, mostly for security
purposes (like credentials to enroll in an AD domain). The metadata
is configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs of the
instance, image, etc. to ensure a stable API. What this means though
is that the REST service may need to make calls into nova or glance
to get information, like looking up the image metadata in glance.

Currently the dynamic metadata handler _can_ generate auth headers if
an authenticated request is made to it, but consider that a common
use case is fetching metadata from within an instance using something
like:

% curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative newbie) so that
both authenticated and unauthenticated requests are accepted, such
that IF an authenticated request comes in, those credentials can be
used, otherwise falling back to something else?



Only if they are on different URLs, I think.  It's auth_token
middleware for all services but Keystone.  For Keystone, the rules
are similar, but the implementation is a little different.


Ok. I'm fine with the unauthenticated path if we can just create a
separate service user for the service.

2. If an unauthenticated request comes in, how best to obtain a
token to use? Is it best to create a service user for the REST
services (perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to Keystone, we
could use the X509 Tokenless approach, but if the call comes from
the new server, you won't have a cert by the time you need to make
the call, will you?


Not sure which cert you're referring to but yeah, the metadata
service is unauthenticated. The requests can come in from the
instance which has no credentials (via http://169.254.169.254/).

Shared service users are probably your best bet.  We can limit the
roles that they get.  What are these calls you need to make?


To glance for image metadata, Keystone for project information
and nova for instance information. The REST call passes in
various UUIDs for these so they need to be dereferenced. There
is no guarantee that these would be called in all cases but it
is a possibility.

rob


I guess if config_drive is True then this isn't really a problem as
the metadata will be there in the instance already.

thanks

rob


__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-30 Thread Michael Still
Sorry for being slow on this one, I've been pulled into some internal
things at work.

So... Talking to Matt Riedemann just now, it seems like we should continue
to pass through the user authentication details when we have them to the
plugin. The problem is what to do in the case where we do not (which is
mostly going to be when the instance itself makes a metadata request).

I think what you're saying though is that the middleware won't let any
requests through if they have no auth details? Is that correct?

Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young  wrote:

> On 08/22/2016 11:11 AM, Rob Crittenden wrote:
>
>> Adam Young wrote:
>>
>>> On 08/15/2016 05:10 PM, Rob Crittenden wrote:
>>>
 Review https://review.openstack.org/#/c/317739/ added a new dynamic
 metadata handler to nova. The basic gist is that rather than serving
 metadata statically, it can be done dynamically, so that certain values
 aren't provided until they are needed, mostly for security purposes
 (like credentials to enroll in an AD domain). The metadata is
 configured as URLs to a REST service.

 Very little is passed into the REST call, mostly UUIDs of the
 instance, image, etc. to ensure a stable API. What this means though
 is that the REST service may need to make calls into nova or glance to
 get information, like looking up the image metadata in glance.

 Currently the dynamic metadata handler _can_ generate auth headers if
 an authenticated request is made to it, but consider that a common use
 case is fetching metadata from within an instance using something like:

 % curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

 This will come into the nova metadata service unauthenticated.

 So a few questions:

 1. Is it possible to configure paste (I'm a relative newbie) so that both
 authenticated and unauthenticated requests are accepted, such that IF
 an authenticated request comes in, those credentials can be used,
 otherwise falling back to something else?

>>>
>>>
>>> Only if they are on different URLs, I think.  It's auth_token middleware
>>> for all services but Keystone.  For Keystone, the rules are similar, but the
>>> implementation is a little different.
>>>
>>
>> Ok. I'm fine with the unauthenticated path if we can just create a
>> separate service user for the service.
>>
>> 2. If an unauthenticated request comes in, how best to obtain a token
 to use? Is it best to create a service user for the REST services
 (perhaps several), use a shared user, something else?

>>>
>>>
>>> No unauthenticated requests, please.  If the call is to Keystone, we
>>> could use the X509 Tokenless approach, but if the call comes from the
>>> new server, you won't have a cert by the time you need to make the call,
>>> will you?
>>>
>>
>> Not sure which cert you're referring to but yeah, the metadata service
>> is unauthenticated. The requests can come in from the instance which has no
>> credentials (via http://169.254.169.254/).
>>
>> Shared service users are probably your best bet.  We can limit the roles
>>> that they get.  What are these calls you need to make?
>>>
>>
>> To glance for image metadata, Keystone for project information and nova
>> for instance information. The REST call passes in various UUIDs for these
>> so they need to be dereferenced. There is no guarantee that these would be
>> called in all cases but it is a possibility.
>>
>> rob
>>
>>
 I guess if config_drive is True then this isn't really a problem as
 the metadata will be there in the instance already.

 thanks

 rob

 __


 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Sounded like you had this sorted.  True?
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace 

Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Kirill Zaitsev
FYI: we’re in the process of doing almost that =) 
Here is a link to the yaql doc commits:
https://review.openstack.org/#/q/status:open+project:openstack/yaql+branch:master+topic:yaqldocs

We plan to have it finished and published in time for the N release and Barcelona =)

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

On 30 August 2016 at 23:55:18, Zane Bitter (zbit...@redhat.com) wrote:

On 30/08/16 12:18, Jiří Stránský wrote:
>
> Hmm yea that's strange, because YAQL has a test case for reduce() with 5
> items:
>
> https://github.com/openstack/yaql/blob/f71a0305089997cbfa5ff00f660920711b04f39e/yaql/tests/test_queries.py#L337-L339

If YAQL people are reading this, I suggest you should immediately  
replace your absurdly awful documentation[1] with the contents of these  
test files.

[1] http://yaql.readthedocs.io/en/latest/language_description.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Zane Bitter

On 30/08/16 12:18, Jiří Stránský wrote:


Hmm yea that's strange, because YAQL has a test case for reduce() with 5
items:

https://github.com/openstack/yaql/blob/f71a0305089997cbfa5ff00f660920711b04f39e/yaql/tests/test_queries.py#L337-L339


If YAQL people are reading this, I suggest you should immediately 
replace your absurdly awful documentation[1] with the contents of these 
test files.


[1] http://yaql.readthedocs.io/en/latest/language_description.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Barcelona Design Summit space needs

2016-08-30 Thread Steven Hardy
On Thu, Aug 25, 2016 at 07:18:26PM -0400, James Slagle wrote:
> On Thu, Aug 25, 2016 at 9:10 AM, Steven Hardy  wrote:
> > On Tue, Aug 23, 2016 at 09:04:15AM -0400, Emilien Macchi wrote:
> >> As a reminder, here's what we had for Austin:
> >> Fishbowl slots (Wed-Thu): 2
> >> Workroom slots (Tue-Thu): 3
> >> Contributors meetup (Fri): 1/2
> >
> > I think this allocation worked well in Austin, so I'd suggest we ask for
> > the same again.
> >
> > I know Thierry indicated we should request less, but we are asking for far
> > fewer sessions than many other projects, so I'd like to aim for the same
> > allocation and see if that can be accommodated.
> >
> > What do folks think, if I can get some acks on this plan I will go ahead
> > and provide the feedback to Thierry.
> 
> +1, sounds good to me.

Ok, not much feedback other than James on this, but I went ahead and
requested the same allocation again.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-30 Thread Steven Hardy
On Tue, Aug 30, 2016 at 03:25:30PM -0400, Emilien Macchi wrote:
> Here's my 2 cents:
> 
> The patch in puppet-ceph has been around for a long time now and it still
> doesn't work (as of a recent update today, puppet-ceph is not idempotent
> when deploying the RGW service; that must be fixed in order to get a
> successful deployment).
> Puppet CI is still not gating on Ceph RGW (scenario004 is still in
> progress, with little recent progress toward making it work).

This does sound concerning, Giulio, can you provide any feedback on work
in-progress or planned to improve this?

> My opinion is that we should not push to have it in Newton. The work
> was not pushed hard during the cycle, and I see zero reason to push
> for it now that the cycle is ending.

I agree this is being proposed too late, but given it will be disabled by
default that does mitigate the risk somewhat.

Giulio - can you confirm this will just be a new service template and
puppet profile, and that it's not likely to require rework outside of the
composable services interface?  If so I'm inclined to say OK even if we
know the puppet module needs work.

Wider feedback welcomed, any more thoughts?

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2016-08-30 14:56:15 -0400:
> 
> On 08/30/2016 09:57 AM, Clint Byrum wrote:
> >>
> >
> > As someone else brought up, this is an unnecessarily bleak view of how 
> > database
> > migrations work.
> 
> We aren't talking about database migrations.  We are talking about 
> *online* database migrations, where we would like both the *old* and 
> *new* versions of the code talking to the database at the same time.
> 
> 
> If I write code that does this:
> 
> 
>  SELECT foo, bar FROM table
> 
> then I do a migration that replaces "bar" with some new table, the new 
> SQL is:
> 
>  SELECT table.foo, othertable.bar FROM table JOIN othertable ON 
> table.id = othertable.foo_id
> 
> Those two SQL statements are incompatible.  The "new" version of the 
> code must expect and maintain the old "bar" column for the benefit of 
> the "old" version of the code still reading and writing to it.   To me, 
> this seems to contradict your suggestion "don't delete columns, ignore 
> them".  We can't ignore "bar" above.
> 

It's hard to think about what you're saying without concrete examples,
but I'll try.

As I said, don't remove columns, ignore them. Of course, you can't ignore
them on writes, they still exist. If you have a new relationship for that
data, then yes, you still have to write to the old columns and tables so
that older versions of the code can find the data it needs.

Your join needs to be a left join, so that you get the data from the old
table when it was written by old code.

> >
> > Following these commandments, one can run schema changes at any time. A
> > new schema should be completely ignorable by older code, because their
> > columns keep working, and no new requirements are introduced. New code
> > can deal with defaulted new columns gracefully.
> 
> You need to specify how new code deals with the above two totally 
> different SQL statements "gracefully", except that it has to accommodate 
> both versions of the schema at the same time.   This may be 
> "graceful" in operator land but in developer land, there is no easy 
> solution for this.  Unless there is, and nobody has shown it to me yet:
> 

Correct, it is harder for development. Since the database server has all
of the potential for the worst problems, being a stateful service, then
I believe moving complexity _out_ of it, is generally an operational
win, at the expense of some development effort. The development effort,
however, is mostly on the front of the pipeline where timelines can be
longer. Operations typically is operating under SLA's and with
requirements to move slowly in defense of people's data and performance
of the system. So I suggest that paying costs in dev, vs. at the
database is usually the highest value choice.

This is of course not the case if timelines are short for development as
well, but I can't really answer the question in that case. For OpenStack,
we nearly always find ourselves with more time to develop, than operators
do to operate.

> > I don't think it's all that ambitious to think we can just use tried and
> > tested schema evolution techniques that work for everyone else.
> 
> People have been asking me for over a year how to do this, and I have no 
> easy answer, I'm glad that you do.  I would like to see some examples of 
> these techniques.
> 
> If you can show me the SQL access code that deals with the above change, 
> that would help a lot.
> 

So schema changes fall into several categories. But basically, the only
one that is hard, is a relationship change. Basically, a new PK. Here's
an example:

Book.isbn was the PK, but we want to have a record per edition, so the
new primary key is (isbn, edition).

Solution: Maintain two tables. You have created an entirely new object!

CREATE TABLE book (
  isbn varchar(30) not null primary key,
  description text
);

CREATE TABLE book_editions (
  isbn varchar(30) not null,
  edition int not null,
  description text,
  primary key (isbn, edition)
);

And now on read, your new code has to do this:

SELECT b.isbn,
   COALESCE(be.edition, 0) AS edition,
   COALESCE(be.description, b.description) AS description
FROM book b
 LEFT OUTER JOIN book_editions be
 ON b.isbn = be.isbn
WHERE b.isbn = 'fooisbn'

And now, if a book has only ever been written by old code, you get one
record with a 0 edition. And if it were written by the new system, the
new system would need to go ahead and duplicate the book description into
the old table for as long as we have code that might expect it.
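
Concretely, the dual write is just two statements (values illustrative):

-- new code records the edition row...
INSERT INTO book_editions (isbn, edition, description)
VALUES ('fooisbn', 2, 'second edition blurb');

-- ...and refreshes the legacy row so old readers keep working
INSERT INTO book (isbn, description)
VALUES ('fooisbn', 'second edition blurb')
ON DUPLICATE KEY UPDATE description = 'second edition blurb';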

Most other things are simpler and have quite obvious solutions.
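
For instance, a purely additive change needs nothing special beyond a
nullable (or defaulted) column, which old code never selects and new
code can treat as "not set":

ALTER TABLE book ADD COLUMN publisher varchar(100) NULL;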

> If the answer is, "oh well just don't do a schema change like that", 
> then we're basically saying we aren't really changing our schemas 
> anymore except for totally new features that otherwise aren't accessed 
> by the older version of the code.  That's fine.   It's not what people 
> coming to me are saying, though.
> 

I mean, yes and no. We should pay some 

Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Stan Lagun
There is a bug in how yaql interprets groupBy's 3rd argument (the aggregator).
It was supposed to be a function that aggregates the values within each group,
but instead it was applied to the outcome of groupBy. I submitted a fix for
this:

https://review.openstack.org/363191. Though I'm not sure we can release a new
yaql version this late in the Newton cycle. Meanwhile it is better to use one
of the alternative solutions above so that it won't break once this patch gets
merged.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Tue, Aug 30, 2016 at 12:46 PM, Thomas Herve  wrote:

> On Tue, Aug 30, 2016 at 6:02 PM, Steven Hardy  wrote:
> > On Tue, Aug 30, 2016 at 04:10:47PM +0200, Jiří Stránský wrote:
> >>
> >> On 30.8.2016 10:17, Steven Hardy wrote:
> >>
> >> 
> >>
> >> > Yeah, that gets us closer, but we do need to handle more than one
> value
> >> > (list entry) per key, e.g:
> >> >
> >> >  data:
> >> >l:
> >> >  - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
> >> >"tripleo_packages_node_names": ["a0", "a1", "a2"]
> >> >  - "nova_compute_node_names": ["b0"]
> >> >"tripleo_packages_node_names": ["b0"]
> >> >
> >> > Output needs to be like:
> >> >
> >> >  "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
> >> >  "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
> >> >  "nova_compute_node_names": ["b0"]
> >> >
> >>
> >> Hoping this could do it:
> >>
> >> [stack@instack ~]$ cat yaq.yaml
> >> heat_template_version: 2016-10-14
> >>
> >> outputs:
> >>   debug:
> >> value:
> >>   yaql:
> >> expression: $.data.l.reduce($1.mergeWith($2))
> >> data:
> >>   l:
> >> - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
> >>   "tripleo_packages_node_names": ["a0", "a1", "a2"]
> >> - "nova_compute_node_names": ["b0"]
> >>   "tripleo_packages_node_names": ["b0"]
> >
> > Thanks for this!
> >
> > Unfortunately I dont think it works with more than two list items:
> >
> >   debug_tripleo2:
> > value:
> >   yaql:
> > expression: $.data.l.reduce($1.mergeWith($2))
> > data:
> >   l:
> > - "gnocchi_metricd_node_names": ["overcloud-controller-0",
> >   "overcloud-controller-1", "overcloud-controller-2"]
> >   "tripleo_packages_node_names": ["overcloud-controller-0",
> "overcloud-controller-1", "overcloud-controller-2"]
> > - "nova_compute_node_names": ["overcloud-compute-0"]
> >   "tripleo_packages_node_names": ["overcloud-compute-0"]
> >   "tripleo_packages_node_names2": ["overcloud-compute-0"]
> > - "ceph_osd_node_names": ["overcloud-cephstorage-0"]
> >   "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
> >   "tripleo_packages_node_names2":
> ["overcloud-cephstorage-0"]
> >
> > $ heat output-show foo5 debug_tripleo2
> > stack output show" instead
> > Output error: can only concatenate tuple (not "list") to tuple
> >
> > I've not dug too deeply yet, but assuming that's a yaql error vs a heat
> bug
> > it looks like it won't work.
>
> I'd say it's a yaql bug (commented on the bug opened in Heat) that you
> can work around using the listMerger argument:
>
> $.data.l.reduce($1.mergeWith($2, listMerger => $1.toList() + $2.toList()))
>
> Still slightly more elegant than the one I came up with.
>
> --
> Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] launchpad bugs

2016-08-30 Thread Emilien Macchi
As a gentle and peaceful reminder, please auto-triage when filing
a bug and keep track of it.

- Set the priority level; set the status to Triaged if you set a milestone.
- Assign the bug to yourself if you work on it.
- Set it to Fix Released when your patch is merged and automation
didn't update Launchpad for you.

On behalf of TripleO bugs supervisors : merci :-)

On Tue, Aug 16, 2016 at 9:28 PM, Emilien Macchi  wrote:
> Hi team,
>
> This e-mail is addressed to TripleO developers interested by helping
> in bug triage.
> If you already subscribed to TripleO bugs notifications, you can skip
> and go at the second part of the e-mail.
> If not, please:
> 1) Go on https://bugs.launchpad.net/tripleo/+subscriptions
> 2) Click on "Add a subscription"
> 3) Bug mail recipient: yourself / Receive mail for bugs affecting
> tripleo that are added or closed
> 4) Create a mail filter if you like your emails classified.
> That way, you'll get an email for every new bug and their updates.
>
>
> Last but not least, please keep your assigned bugs up-to-date and
> close them with "Fix Released" when automation didn't do it for you.
> Reminder: auto-triage is our model, we trust you to assign the bug in
> the right priority and milestone.
> If any doubt, please ask on #tripleo.
> Note: I spent time today triaging and updating a good number
> of bugs. Let me know if I did something wrong with your bugs.
>
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][winstackers] os-win 1.2.0 release (newton)

2016-08-30 Thread no-reply
We are tickled pink to announce the release of:

os-win 1.2.0: Windows / Hyper-V library for OpenStack projects.

This release is part of the newton stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-win

With package available at:

https://pypi.python.org/pypi/os-win

Please report issues through launchpad:

http://bugs.launchpad.net/os-win

For more details, please see below.

Changes in os-win 1.1.0..1.2.0
------------------------------

faad2b4 TrivialFix: Remove cfg import unused
56bc302 TrivialFix: Remove logging import unused
a5efe06 Fix DNS zone serial number retrieval
fb5c5da Adds docstrings to the public methods of hostutils and jobutils
9a905a7 Updates README.rst
8044d04 Updated from global requirements
61dcd56 Refactors wmi exceptions usage
477c56c Sets parent class for ISCSITargetUtils
3700ee4 Removes Windows Server 2008 R2 specific code
38244f1 Resolves flake8 errors
6f3b6c6 Retry failed disk rescans
58edccc Add public method for destroying planned vms
579ec01 Remove discover from test-requirements


Diffstat (except docs and test files)
-------------------------------------

README.rst | 73 --
os_win/_hacking/checks.py  | 44 -
os_win/_utils.py   | 14 +++--
os_win/exceptions.py   | 16 +
.../storage/initiator/test_iscsi_cli_utils.py  |  6 +-
.../utils/storage/initiator/test_iscsi_utils.py|  6 +-
.../storage/initiator/test_iscsi_wmi_utils.py  |  6 +-
.../storage/target/test_iscsi_target_utils.py  |  9 ++-
os_win/utils/compute/clusterutils.py   | 31 +
os_win/utils/compute/livemigrationutils.py | 26 
os_win/utils/compute/vmutils.py| 43 +++--
os_win/utils/compute/vmutils10.py  |  3 +
os_win/utils/dns/dnsutils.py   |  6 ++
os_win/utils/hostutils.py  | 48 --
os_win/utils/jobutils.py   | 40 +---
os_win/utils/network/networkutils.py   |  8 +--
os_win/utils/pathutils.py  | 24 +++
os_win/utils/storage/diskutils.py  |  2 +-
os_win/utils/storage/initiator/base_iscsi_utils.py |  6 +-
os_win/utils/storage/initiator/fc_utils.py |  6 +-
os_win/utils/storage/initiator/iscsi_cli_utils.py  |  6 +-
os_win/utils/storage/initiator/iscsi_utils.py  | 10 +--
os_win/utils/storage/initiator/iscsi_wmi_utils.py  |  6 +-
os_win/utils/storage/initiator/iscsierr.py |  6 +-
os_win/utils/storage/smbutils.py   |  5 +-
os_win/utils/storage/target/iscsi_target_utils.py  | 38 +--
os_win/utils/storage/virtdisk/vhdutils.py  | 27 
os_win/utils/win32utils.py |  8 +--
os_win/utilsfactory.py |  8 +--
requirements.txt   |  4 +-
test-requirements.txt  |  1 -
tox.ini|  2 -
47 files changed, 413 insertions(+), 285 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 4b827b8..a840c28 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.concurrency>=3.8.0 # Apache-2.0
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
@@ -12 +12 @@ oslo.log>=1.14.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 8289eb5..f3206d6 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9 +8,0 @@ ddt>=1.0.1 # MIT
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Dan Smith
>> I don't think it's all that ambitious to think we can just use
>> tried and tested schema evolution techniques that work for everyone
>> else.
> 
> People have been asking me for over a year how to do this, and I have
> no easy answer, I'm glad that you do.  I would like to see some
> examples of these techniques.

I'm not sure how to point you at the examples we have today because
they're not on a single line (or set of lines) in a single file. Nova
has moved a lot of data around at runtime using this approach in the
last year or so with good success.

> If you can show me the SQL access code that deals with the above
> change, that would help a lot.

We can't show you that, because as you said, there isn't a way to do
it...in SQL. That is in fact the point though: don't do it in SQL.

> If the answer is, "oh well just don't do a schema change like that", 
> then we're basically saying we aren't really changing our schemas 
> anymore except for totally new features that otherwise aren't
> accessed by the older version of the code.

We _are_ saying "don't change schema like that", but it's not a very
limiting requirement. It means you can't move things in a schema
migration, but that's all. Nova changes schema all the time.

In the last year or so, off the top of my head, nova has:

1. Moved instance flavors from row=value metadata storage to a JSON
   blob in another table
2. Moved core flavors, aggregates, keypairs and other structures from
   the cell database to the api database
3. Added uuid to aggregates
4. Added a parent_addr linkage in PCI device

...all online. Those are just the ones I have in my head that have
required actual data migrations. We've had dozens of schema changes that
enable new features that are all just new data and don't require any of
this.
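
To make the shape of this concrete, here is a deliberately tiny sketch of
the pattern behind item 1 above. The table and column names are invented
for illustration, not Nova's real schema: new code prefers the new
representation, falls back to the legacy one, and migrates each row the
first time it touches it.

import json
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, flavor_json TEXT);
    -- legacy row=value storage, still readable/writable by old code
    CREATE TABLE instance_metadata (instance_id INTEGER, key TEXT, value TEXT);
""")

def save_flavor(instance_id, flavor):
    # Writes always target the new column; the legacy rows are left in
    # place for old code and only dropped by a later "contract" migration.
    conn.execute(
        'INSERT OR REPLACE INTO instances (id, flavor_json) VALUES (?, ?)',
        (instance_id, json.dumps(flavor)))

def load_flavor(instance_id):
    # New code prefers the new representation...
    row = conn.execute('SELECT flavor_json FROM instances WHERE id = ?',
                       (instance_id,)).fetchone()
    if row and row[0]:
        return json.loads(row[0])
    # ...and falls back to the legacy rows, migrating on first read.
    legacy = dict(conn.execute(
        'SELECT key, value FROM instance_metadata WHERE instance_id = ?',
        (instance_id,)).fetchall())
    save_flavor(instance_id, legacy)
    return legacy

conn.execute("INSERT INTO instance_metadata VALUES (1, 'vcpus', '4')")
print(load_flavor(1))  # {'vcpus': '4'}, now also persisted as JSON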

> That's fine.   It's not what people coming to me are saying, though.

Not sure who is coming to you or what they're saying, but.. okay :)

If keystone really wants to use triggers to do this, then that's fine.
But I think the overwhelming response from this thread (which is asking
people's opinions on the matter) seems to be that they're an unnecessary
complication that will impede people debugging and working on that part
of the code base. We have such impediments elsewhere, but I think we
generally try to avoid doing one thing a hundred different ways to keep
the playing field as level as possible.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Thomas Herve
On Tue, Aug 30, 2016 at 6:02 PM, Steven Hardy  wrote:
> On Tue, Aug 30, 2016 at 04:10:47PM +0200, Jiří Stránský wrote:
>>
>> On 30.8.2016 10:17, Steven Hardy wrote:
>>
>> 
>>
>> > Yeah, that gets us closer, but we do need to handle more than one value
>> > (list entry) per key, e.g:
>> >
>> >  data:
>> >l:
>> >  - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
>> >"tripleo_packages_node_names": ["a0", "a1", "a2"]
>> >  - "nova_compute_node_names": ["b0"]
>> >"tripleo_packages_node_names": ["b0"]
>> >
>> > Output needs to be like:
>> >
>> >  "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
>> >  "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
>> >  "nova_compute_node_names": ["b0"]
>> >
>>
>> Hoping this could do it:
>>
>> [stack@instack ~]$ cat yaq.yaml
>> heat_template_version: 2016-10-14
>>
>> outputs:
>>   debug:
>> value:
>>   yaql:
>> expression: $.data.l.reduce($1.mergeWith($2))
>> data:
>>   l:
>> - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
>>   "tripleo_packages_node_names": ["a0", "a1", "a2"]
>> - "nova_compute_node_names": ["b0"]
>>   "tripleo_packages_node_names": ["b0"]
>
> Thanks for this!
>
> Unfortunately I don't think it works with more than two list items:
>
>   debug_tripleo2:
> value:
>   yaql:
> expression: $.data.l.reduce($1.mergeWith($2))
> data:
>   l:
> - "gnocchi_metricd_node_names": ["overcloud-controller-0",
>   "overcloud-controller-1", "overcloud-controller-2"]
>   "tripleo_packages_node_names": ["overcloud-controller-0", 
> "overcloud-controller-1", "overcloud-controller-2"]
> - "nova_compute_node_names": ["overcloud-compute-0"]
>   "tripleo_packages_node_names": ["overcloud-compute-0"]
>   "tripleo_packages_node_names2": ["overcloud-compute-0"]
> - "ceph_osd_node_names": ["overcloud-cephstorage-0"]
>   "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
>   "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]
>
> $ heat output-show foo5 debug_tripleo2
> WARNING (shell) "heat output-show" is deprecated, please use "openstack stack output show" instead
> Output error: can only concatenate tuple (not "list") to tuple
>
> I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
> it looks like it won't work.

I'd say it's a yaql bug (I commented on the bug opened against Heat); it
can be worked around using the listMerger argument:

$.data.l.reduce($1.mergeWith($2, listMerger => $1.toList() + $2.toList()))

Still slightly more elegant than the one I came up with.

-- 
Thomas
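
For anyone who wants to reproduce the workaround outside of Heat, here is
a minimal standalone sketch using the yaql Python library directly (via
its YaqlFactory entry point). The sample node names are illustrative;
inside a Heat template the same expression addresses the data as $.data.l
instead of $.l.

from yaql.language import factory

engine = factory.YaqlFactory().create()

data = {
    'l': [
        {'gnocchi_metricd_node_names': ['a0', 'a1', 'a2'],
         'tripleo_packages_node_names': ['a0', 'a1', 'a2']},
        {'nova_compute_node_names': ['b0'],
         'tripleo_packages_node_names': ['b0']},
        {'ceph_osd_node_names': ['c0'],
         'tripleo_packages_node_names': ['c0']},
    ]
}

# The explicit listMerger sidesteps the tuple-concatenation error that the
# default merger hits once reduce() folds more than two dicts.
expression = engine(
    '$.l.reduce($1.mergeWith($2, listMerger => $1.toList() + $2.toList()))')

print(expression.evaluate(data=data))
# tripleo_packages_node_names should come out as ['a0', 'a1', 'a2', 'b0', 'c0']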

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Requirement for bug when reno is present

2016-08-30 Thread Christian Berendt
> On 30 Aug 2016, at 12:42, Paul Bourke  wrote:
> 
> Do people feel we still want to require a bug-id in the commit message for 
> features, when reno notes are present? My understanding is that till now 
> we've required people to add bugs for non trivial features in order to track 
> them as part of releases. Does/should reno supersede this?

I think it makes sense to keep the bug/blueprint ID because some of our 
features are distributed across several reviews and only one of the reviews 
includes the release note. If we removed the bug/blueprint IDs from the commit 
messages and a reno note were only added with one of the reviews, it would be 
difficult to track the relations.

Christian.


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] RFC: Nested resources missing orchestration UUID at stack lifecycle plugin-time

2016-08-30 Thread D'ANDREA, JOE (JOE)
I'd like to open the proverbial floor for feedback regarding a problem I'm 
trying to tackle. It concerns Heat's assignment of stack resource UUIDs prior 
to a stack's instantiation. (Note that this is not the same as physical UUIDs 
assigned by Nova, Cinder, et. al; one ID is upstream/heat-specific, the other 
is downstream/service-specific.)

For a summary of the use case, problem, and three (high level) proposals to 
solve it, please refer to Heat bug 1516807 [1].

My thinking so far: The first two proposals are non-starters, while the third 
one sounds the most plausible. I imagine there are others I haven't thought of.

Questions/feedback welcomed and appreciated, especially if it can help me draw 
a better bead on the scope of this change and what would need altering. I will 
then use that to help inform an initial blueprint and shepherd it forward.

Thank you!

jd

--
Joe D’Andrea
Cloud Services and Technology Research Lab
AT&T Advanced Technology and Architecture

[1] https://bugs.launchpad.net/heat/+bug/1516807
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-30 Thread Emilien Macchi
Here's my 2 cents:

The patch in puppet-ceph has been around for a long time now and it still
doesn't work (as of a recent update today, puppet-ceph is not idempotent
when deploying the RGW service; this must be fixed in order to get a
successful deployment).
Puppet CI is still not gating on Ceph RGW (scenario004 is still in
progress, with very little recent progress toward making it work).

In my opinion we should not push to have it in Newton. The work was not
pushed hard during the cycle, and I see zero reason to push for it now
that the cycle is ending.

On Tue, Aug 30, 2016 at 12:40 PM, Giulio Fidente  wrote:
> Together with Keith we're working on some patches to integrate (via
> puppet-ceph) the deployment of Ceph RGW in TripleO as a composable service
> which can optionally replace SwiftProxy
>
>
> Changes are tracked via blueprint at:
>
> https://blueprints.launchpad.net/tripleo/+spec/ceph-rgw-integration
>
> They should be tagged with the appropriate topic branch, so can be found
> with:
>
> https://review.openstack.org/#/q/topic:bp/ceph-rgw-integration,n,z
>
>
> There is also a [NO MERGE] change which we use to test the above in upstream
> CI:
>
> https://review.openstack.org/#/c/357182/
>
>
> We'd like to formally request an FFE for this feature.
>
> Thanks for consideration, feedback, help and reviews :)
> --
> Giulio Fidente
> GPG KEY: 08D733BA | IRC: gfidente
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Barcelona summit space and sessions!

2016-08-30 Thread Davanum Srinivas
Sounds good Josh

On Tue, Aug 30, 2016 at 1:53 PM, Joshua Harlow  wrote:
> Hi all oslo (and others),
>
> Just to make sure that everyone is aware, we have a summit soon in barcelona
> (ha); hopefully that's not news to many, the bigger news though is that we
> (the oslo folks) need to decide on how many fishbowls and working rooms we
> would like to get.
>
> I've brought this up in the IRC weekly meeting a few times and just want to
> make sure its more widely known (since not everyone knows or attends those
> meetings).
>
> http://eavesdrop.openstack.org/meetings/oslo/2016/oslo.2016-08-29-16.00.log.html#l-81
>
> The last summit we had 3 FB(fishbowl), 5 WR(working rooms) and I and others
> think that turned out pretty well and it seems the folks in the IRC meeting
> (and myself) think we should just aim for that same amount again.
>
> Does that sound good to folks, more or less?
>
> Also I've started to throw together:
>
> https://etherpad.openstack.org/p/ocata-oslo-summit-planning
>
> Feel free to add ideas for these FB and WR sessions on there :)
>
> -Josh
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Mike Bayer



On 08/30/2016 09:57 AM, Clint Byrum wrote:




As someone else brought up, this is an unnecessarily bleak view of how database
migrations work.


We aren't talking about database migrations.  We are talking about 
*online* database migrations, where we would like both the *old* and 
*new* versions of the code talking to the database at the same time.



If I write code that does this:


SELECT foo, bar FROM table

Then I do a migration that replaces "bar" with some new table, and the new 
SQL is:


SELECT table.foo, othertable.bar FROM table JOIN othertable ON 
table.id = othertable.foo_id


Those two SQL statements are incompatible.  The "new" version of the 
code must expect and maintain the old "bar" column for the benefit of 
the "old" version of the code still reading and writing to it.   To me, 
this seems to contradict your suggestion "don't delete columns, ignore 
them".  We can't ignore "bar" above.





Following these commandments, one can run schema changes at any time. A
new schema should be completely ignorable by older code, because their
columns keep working, and no new requirements are introduced. New code
can deal with defaulted new columns gracefully.


You need to specify how new code deals with the above two totally 
different SQL statements "gracefully", given that it has to accommodate 
both versions of the schema at the same time.   This may be 
"graceful" in operator land, but in developer land there is no easy 
solution for this.  Unless there is, and nobody has shown it to me yet:




I don't think it's all that ambitious to think we can just use tried and
tested schema evolution techniques that work for everyone else.


People have been asking me for over a year how to do this, and I have no 
easy answer, I'm glad that you do.  I would like to see some examples of 
these techniques.


If you can show me the SQL access code that deals with the above change, 
that would help a lot.


If the answer is, "oh well just don't do a schema change like that", 
then we're basically saying we aren't really changing our schemas 
anymore except for totally new features that otherwise aren't accessed 
by the older version of the code.  That's fine.   It's not what people 
coming to me are saying, though.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Requirement for bug when reno is present

2016-08-30 Thread Steven Dake (stdake)
The bug ID and blueprint ID are for automatic tracking via Launchpad.  If they 
are omitted, managing the release becomes especially difficult.  Unfortunately 
Launchpad doesn't know about reno; if it did, I'd agree that would be an 
optimal way to go.

Regards
-steve




On 8/30/16, 3:42 AM, "Paul Bourke"  wrote:

>Kolla,
>
>Do people feel we still want to require a bug-id in the commit message 
>for features, when reno notes are present? My understanding is that till 
>now we've required people to add bugs for non trivial features in order 
>to track them as part of releases. Does/should reno supersede this?
>
>-Paul
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-30 Thread Cathy Zhang
Hi Alioune,

It is weird that when you create a port chain, you get a “chain delete failed” 
error message.
We never had this problem. Chain deletion is only involved when you do “delete 
chain” or “update chain”.
I am not sure which combination of networking code files you are using, or 
whether your system is not properly cleaned up or not properly installed.
We are going to release the networking-sfc Mitaka version soon.
I would suggest that you wait a little bit, then use the officially released 
Mitaka version and reinstall the feature on your system.

Thanks,
Cathy

From: Alioune [mailto:baliou...@gmail.com]
Sent: Tuesday, August 30, 2016 8:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Cathy Zhang; Mohan Kumar; Henry Fourie
Subject: Re: [openstack-dev][neutron][networking-sfc] Unable to create 
openstack SFC

Hi,
Have you received my previous email?

Regards,

On 15 August 2016 at 13:39, Alioune wrote:
Hi all,
I'm trying to launch OpenStack SFC as explained in [1] by creating 2 SFs, 1 Web 
Server (DST) and the DHCP namespace as the SRC.
I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62, and the 
neutron L2 agent runs correctly.
I followed the process by creating the classifier, port pairs and port pair 
group, but I got an error message "delete_port_chain failed." when creating the 
port chain [2].
I tried to create the neutron ports with and without the option 
"--no-security-groups", then ran tcpdump on the SFs' tap interfaces, but the 
ICMP packets don't go through the SFs.

Can anyone advise how to fix that?
What's your channel on IRC?

Regards,


[1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
[2]
vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
delete_port_chain failed.
vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
#!/bin/bash

neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2 
--flow-classifier FC1 PC1

[3] Output OVS Flows

vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146, 
n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0, 
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5, 
n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,20)
 cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141, 
n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,22)
 cookie=0xbc2e9105125301dc, duration=9615.380s, table=3, n_packets=0, 
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=9615.380s, table=4, n_packets=0, 
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=8617.106s, table=4, n_packets=0, 
n_bytes=0, priority=1,tun_id=0x40e 
actions=push_vlan:0x8100,set_field:4097->vlan_vid,resubmit(,10)
 cookie=0xbc2e9105125301dc, duration=9615.379s, table=6, n_packets=0, 
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=9615.379s, table=10, n_packets=0, 
n_bytes=0, priority=1 
actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xbc2e9105125301dc,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0xbc2e9105125301dc, duration=9615.378s, table=20, n_packets=5, 
n_bytes=490, priority=0 actions=resubmit(,22)
 cookie=0xbc2e9105125301dc, duration=9615.342s, table=22, n_packets=146, 
n_bytes=11534, priority=0 actions=drop
vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-int -O OpenFlow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0xbc2e9105125301dc, duration=6712.090s, table=0, n_packets=0, 
n_bytes=0, priority=10,icmp6,in_port=7,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6709.623s, table=0, n_packets=0, 
n_bytes=0, priority=10,icmp6,in_port=8,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6555.755s, table=0, n_packets=0, 
n_bytes=0, priority=10,icmp6,in_port=10,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6559.596s, table=0, n_packets=0, 
n_bytes=0, priority=10,icmp6,in_port=9,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6461.028s, table=0, n_packets=0, 
n_bytes=0, priority=10,icmp6,in_port=11,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6712.071s, table=0, n_packets=13, 
n_bytes=546, priority=10,arp,in_port=7 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6709.602s, table=0, n_packets=0, 
n_bytes=0, priority=10,arp,in_port=8 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6555.727s, table=0, n_packets=0, 
n_bytes=0, priority=10,arp,in_port=10 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, 

[openstack-dev] [oslo] Barcelona summit space and sessions!

2016-08-30 Thread Joshua Harlow

Hi all oslo (and others),

Just to make sure that everyone is aware, we have a summit soon in 
barcelona (ha); hopefully that's not news to many, the bigger news 
though is that we (the oslo folks) need to decide on how many fishbowls 
and working rooms we would like to get.


I've brought this up in the IRC weekly meeting a few times and just want 
to make sure its more widely known (since not everyone knows or attends 
those meetings).


http://eavesdrop.openstack.org/meetings/oslo/2016/oslo.2016-08-29-16.00.log.html#l-81

The last summit we had 3 FB(fishbowl), 5 WR(working rooms) and I and 
others think that turned out pretty well and it seems the folks in the 
IRC meeting (and myself) think we should just aim for that same amount 
again.


Does that sound good to folks, more or less?

Also I've started to throw together:

https://etherpad.openstack.org/p/ocata-oslo-summit-planning

Feel free to add ideas for these FB and WR sessions on there :)

-Josh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Barcelona summit space requirements and session planning etherpad

2016-08-30 Thread Nikhil Komawar
This was sent earlier in the day. Please note the final schedule request
sent: FB: 2, WR: 2, CM: 1


Also, a request was made to have all Glance sessions on the same day for
the convenience of the audience willing to show up (be in the same
place/floor/etc.), and for the contributors' meetup to be on Friday
morning rather than the afternoon so that participants can choose to
attend other contributors' meetups in the afternoon or leave early. This
was based on the experience from the Austin summit, where many participants
had to leave in the middle of Glance's CM to attend some other meetup and
then came back. Hopefully, this plan works for the majority this time.


On 8/29/16 6:43 PM, Nikhil Komawar wrote:
> Just a friendly reminder that I will be sending the final summit
> planning request for slots for Glance first thing tomorrow. So, please
> cast your vote if you haven't already. Thanks!
>
>
> From the looks of it, the current winner looks to be FB: 2, WR: 2, CM: 1
> (Friday afternoon) -- but this could change.
>
> https://etherpad.openstack.org/p/ocata-glance-summit-planning
>
>
> On 8/25/16 11:58 AM, Nikhil Komawar wrote:
>> Hi,
>>
>>
>> Just wanted to point out to those who haven't been to Glance meetings in
>> the past couple of weeks that we have to submit space requirements for the
>> Barcelona design summit early next week. I've listed the constraints
>> posed to us in the planning etherpad [1]. Please see the top
>> portion of this etherpad under "Layout Proposal" to either propose or
>> vote on the layout proposal options to help us collaboratively determine
>> the space needs for Glance. Currently there are 2 proposals and if you
>> don't have any other in mind, please cast your vote on the given.
>>
>>
>> I need the votes by EOD on Monday 29th Aug and will be sending our final
>> space requirement request first thing on Tuesday 30th.
>>
>>
>> On another note, if you want to start proposing sessions for the summit
>> feel free to scroll to the bottom of the etherpad for the template and
>> the slots for the topics.
>>
>>
>> Let me know if you've any questions.
>>
>>
>> [1] https://etherpad.openstack.org/p/ocata-glance-summit-planning
>>
>>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][nova] os_vif 1.2.0 release (newton)

2016-08-30 Thread no-reply
We are delighted to announce the release of:

os_vif 1.2.0: A library for plugging and unplugging virtual interfaces
in OpenStack.

This release is part of the newton release series.

With package available at:

https://pypi.python.org/pypi/os_vif

For more details, please see below.

Changes in os_vif 1.1.0..1.2.0
------------------------------

b088d21 Add a reminder to remove Route.interface field
45750a9 Updated from global requirements
c74f92a Disable IPv6 on bridge devices in linux bridge code
57157de Trivial: clean up oslo-incubator related stuff
ff28181 Fix logging calls
071a63a Remove discover from test-requirements
1383121 mtu: don't attempt to set link mtu if it's invalid


Diffstat (except docs and test files)
-------------------------------------

openstack-common.conf |   8 --
os_vif/__init__.py|   5 +-
os_vif/objects/route.py   |   2 +
requirements.txt  |   6 +-
test-requirements.txt |   1 -
tox.ini   |   2 +-
vif_plug_linux_bridge/linux_net.py|  18 +++-
8 files changed, 148 insertions(+), 19 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 896b7d9..60919f1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ netaddr!=0.7.16,>=0.7.12 # BSD
-oslo.config>=3.10.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
@@ -11 +11 @@ oslo.privsep>=1.9.0 # Apache-2.0
-oslo.versionedobjects>=1.9.1 # Apache-2.0
+oslo.versionedobjects>=1.13.0 # Apache-2.0
@@ -13 +13 @@ six>=1.9.0 # MIT
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index ab729b7..d0925b7 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +6,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] API v.next meeting cancelled

2016-08-30 Thread Jim Rollenhagen
Sorry for the late notice, but we decided we'd rather keep working on
Newton things this week instead of having this meeting, so it's
cancelled for today.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [devstack] [ocata] Consistent Endpoint Discovery

2016-08-30 Thread Michael Krotscheck
Hey everyone - I have a little bit of a UX request for our API developers.

For the last week or so, I've been working on building version negotiation
logic for an OpenStack SDK. The process is pretty simple:

1- Read clouds.yaml for the keystone URL.
2- Query keystone for the service catalog.
3- Instantiate service instances for each discovered service.
4- Perform version negotiation on each service.

The problem: the service endpoints registered in the catalog all behave
just a little bit differently, which makes building consistent version
negotiation a royal PITA. I've annotated the various behaviors from a
default devstack configuration here: http://paste.openstack.org/show/564863/.

In a perfect world, every endpoint would return the same type of resource -
most likely the versions resource as described in the API WG Microversions
spec. It would also be nice if version negotiation can happen without
requiring authentication, the easiest path to which would be supporting the
'max_version' and 'min_version' fields in the root versions resource.
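
As an illustration of what that would buy us, here is a sketch of the
negotiation loop an SDK could run if every root endpoint returned a
microversions-style versions document. The min_version/max_version field
names follow the API WG proposal; the document shape is an assumption,
not what the endpoints annotated above actually return today.

import requests

def parse(version):
    return tuple(int(part) for part in version.split('.'))

def negotiate(endpoint, client_min='2.1', client_max='2.38'):
    doc = requests.get(endpoint, timeout=10).json()
    for version in doc.get('versions', []):
        if version.get('status') != 'CURRENT':
            continue
        # Fall back to the version id for servers without a range.
        server_min = (version.get('min_version') or version['id']).lstrip('v')
        server_max = (version.get('max_version') or version['id']).lstrip('v')
        low = max(parse(client_min), parse(server_min))
        high = min(parse(client_max), parse(server_max))
        if low <= high:
            # Highest version both sides support.
            return '.'.join(str(part) for part in high)
    raise RuntimeError('No mutually supported version for %s' % endpoint)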

Sadly, this is my last week before I'm no longer paid to contribute to the
OpenStack community, so I can't take on the responsibility of proposing
something of this magnitude as an Ocata goal with only my own free time to
offer. Is there anyone willing to help push this forward?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] [security] [tc] Add the vulnerability:managed tag to Manila

2016-08-30 Thread Jeremy Stanley
Ben has proposed[1] adding manila, manila-ui and python-manilaclient
to the list of deliverables whose vulnerability reports and
advisories are overseen by the OpenStack Vulnerability Management
Team. This proposal is an assertion that the requirements[2] for the
vulnerability:managed governance tag are met by these deliverables.
As such, I wanted to initiate a discussion evaluating each of the
listed requirements to see how far along those deliverables are in
actually fulfilling these criteria.

1. All repos for a covered deliverable must meet the criteria or
else none do. Easy enough, each deliverable has only one repo so
this isn't really a concern.

2. We need a dedicated point of contact for security issues. Our
typical point of contact would be a manila-coresec team in
Launchpad, but that doesn't exist[3] (yet). Since you have a fairly
large core review team[4], you should pick a reasonable subset of
those who are willing to act as the next line of triage after the
VMT hands off a suspected vulnerability report under embargo. You
should have at least a couple of active volunteers for this task so
there's good coverage, but more than 5 or so is probably pushing the
bounds of information safety. Not all of them need to be core
reviewers, but enough of them should be so that patches proposed as
attachments to private bugs can effectively be "pre-approved" in an
effort to avoid delays merging at time of publication.

3. The PTL needs to agree to act as a point of escalation or
delegate this responsibility to a specific liaison. This is Ben by
default, but if he's not going to have time to serve in that role
then he should record a dedicated Vulnerability Management Liaison
in the CPLs list[5].

4. Configure sharing[6][7][8] on the defect trackers for these
deliverables so that OpenStack Vulnerability Management team
(openstack-vuln-mgmt) has "Private Security: All". Once the
vulnerability:managed tag is approved for them, also remove the
"Private Security: All" sharing from any other teams (so that the
VMT can redirect incorrectly reported vulnerabilities without
prematurely disclosing them to manila reviewers).

5. Independent security review, audit, or threat analysis... this is
almost certainly the hardest to meet. After some protracted
discussion on Kolla's application for this tag, it was determined
that projects should start supplying threat analyses to a central
security-analysis[9] repo where they can be openly reviewed and
ultimately published. No projects have actually completed this yet,
but there is some process being finalized by the Security Team which
projects will hopefully be able to follow. You may want to check
with them on the possibility of being an early adopter for that
process.

6. Covered deliverables need tests we can rely on to be able to
evaluate whether privately proposed security patches will break the
software. A cursory look shows many jobs[10] running in our upstream
CI for changes to these repos, so that requirement is probably
addressed (I did not yet check whether those
unit/functional/integration tests are particularly extensive).

So in summary, it looks like there are still some outstanding
requirements not yet met for the vulnerability:managed tag but I
don't see any insurmountable challenges there. Please let me know if
any of the above is significantly off-track.

[1] https://review.openstack.org/350597
[2] 
https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
[3] https://launchpad.net/~manila-coresec
[4] https://review.openstack.org/#/admin/groups/213,members
[5] 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
[6] https://launchpad.net/manila/+sharing
[7] https://launchpad.net/manila-ui/+sharing
[8] https://launchpad.net/pythonmanilaclient/+sharing
[9] 
https://git.openstack.org/cgit/openstack/security-analysis/tree/doc/source/templates/
[10] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml

-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Andrew Laski


On Tue, Aug 30, 2016, at 09:55 AM, lebre.adr...@free.fr wrote:
> 
> 
> - Original Message -
> > From: "Andrew Laski" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Tuesday, 30 August 2016 15:03:35
> > Subject: Re: [openstack-dev] [all][massively 
> > distributed][architecture]Coordination between actions/WGs
> > 
> > 
> > 
> > On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> > > Dear all
> > > 
> > > Sorry for my lack of reactivity; I've been out for the last few days.
> > > 
> > > According to the different replies, I think we should enlarge the
> > > discussion and not stay on the vCPE use-case, which is clearly
> > > specific
> > > and represents only one use-case among the ones we would like to
> > > study.
> > > For instance, we are in touch with NRENs in France and Poland that
> > > are interested in deploying up to one rack in each of their largest
> > > PoPs in order to provide a distributed IaaS platform (for further
> > > information you can take a look at the presentation we gave during
> > > the last summit [1] [2]).
> > > 
> > > The two questions were:
> > > 1./ Understand whether the fog/edge computing use case is in the
> > > scope of
> > > the Architecture WG and if not, do we need a massively distributed
> > > WG?
> > 
> > Besides the question of which WG this might fall under is the
> > question
> > of how any of the work groups are going to engage with the project
> > communities. There is a group of developers pushing forward on
> > cellsv2
> > in Nova there should be some level of engagement between them and
> > whomever is discussing the fog/edge computing use case. To me it
> > seems
> > like there's some level of overlap between the efforts even if
> > cellsv2
> > is not a full solution. But whatever conversations are taking place
> > about fog/edge or large scale distributed use cases seem  to be
> > happening in channels that I am not aware of, and I haven't heard any
> > other cells developers mention them either.
> > 
> 
> I can only agree!
> Actually, we organised an informal exchange with Sylvain Bauza in July in
> order to get additional information regarding the Cells V2
> architecture/implementation.  From our point of view, such changes in the
> code can help us toward our ultimate goal of managing remote DCs in an
> efficient manner (e.g. by mitigating inter-site traffic).
> 
> 
> > So let's please find a way for people who are interested in these use
> > cases to talk to the developers who are working on similar things.
> 
> What is your proposal? Any particular ideas in mind?

I am generally aware of things that are discussed in the weekly Nova IRC
meeting, on the ML with a [Nova] tag, and in proposed specs. Using those
forums as part of these discussions would be my recommendation. Or at
the very least use those forums to advertise that there is discussion
happening elsewhere.

The reality is that in order for any discussion to turn into tangible
work it needs to end up as a proposed spec. That can be the start of a
discussion or a summary of a discussion but it really needs to be a part
of the lifecycle of any discussion. Often from there it can branch out
into ML discussions or summit discussions. But specs are a good contact
point between Nova developers and people who have use cases for Nova. It
is important to note that spec proposals should be backed by someone
willing to do the work, which doesn't necessarily need to be the person
proposing the spec.


> 
> Ad_rien_
> 
> > 
> > 
> > > 2./ How can we coordinate our actions with the ones performed in
> > > the
> > > Architecture WG?
> > > 
> > > Regarding 1./, according to the different reactions, I propose to
> > > write a
> > > first draft in an etherpard to present the main goal of the
> > > Massively
> > > distributed WG and how people interested by such discussions can
> > > interact
> > > (I will paste the link to the etherpad by tomorrow).
> > > 
> > > Regarding 2./,  I mentioned the Architecture WG because we do not
> > > want to
> > > develop additional software layers like Tricircle or other
> > > solutions (at
> > > least for the moment).
> > > The goal of the WG is to conduct studies and experiments to
> > > identify to
> > > what extent current mechanisms can satisfy the needs of such
> > > massively distributed use-cases and what the missing elements are.
> > > 
> > > I don't want to give too many details in the present mail in order
> > > to stay as concise as possible (details will be given in the proposal).
> > > 
> > > Best regards,
> > > Adrien
> > > 
> > > [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the
> > > use-case introduction; the distribution of the DB was one possible
> > > revision of Nova, and according to the Cells V2 changes it is
> > > probably now deprecated).
> > > [2] https://hal.inria.fr/hal-01320235
> > > 
> > > - Original Message -
> > > > From: "Peter Willis" 

[openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-30 Thread Giulio Fidente
Together with Keith we're working on some patches to integrate (via 
puppet-ceph) the deployment of Ceph RGW in TripleO as a composable 
service which can optionally replace SwiftProxy



Changes are tracked via blueprint at:

https://blueprints.launchpad.net/tripleo/+spec/ceph-rgw-integration

They should be tagged with the appropriate topic branch, so can be found 
with:


https://review.openstack.org/#/q/topic:bp/ceph-rgw-integration,n,z


There is also a [NO MERGE] change which we use to test the above in 
upstream CI:


https://review.openstack.org/#/c/357182/


We'd like to formally request an FFE for this feature.

Thanks for consideration, feedback, help and reviews :)
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][FFE]Support a param to specify subnet or fixed IP when creating port

2016-08-30 Thread Rob Cresswell (rcresswe)
I’m happy to allow this personally, but wanted to get others’ input and give 
people the chance to object.

My reasoning for allowing this:
- It’s high level, doesn’t affect any base horizon lib features.
- It is mature code, has multiple patch sets and a +2

I’ll give it a few days to allow others a chance to speak up, then we can move 
forward.

Rob
 
> On 29 Aug 2016, at 07:17, Kenji Ishii  wrote:
> 
> Hi, horizoners
> 
> I'd like to request a feature freeze exception for this feature.
> (This is a bug ticket, but the content described in the ticket is a new 
> feature.)
> https://bugs.launchpad.net/horizon/+bug/1588663
> 
> This is implemented by the following patch.
> https://review.openstack.org/#/c/325104/
> 
> It is useful to be able to create a port using the subnet or IP address 
> which a user wants to use.
> And this has already been reviewed by many reviewers, so I think the risk 
> in this patch is very low.
> 
> ---
> Best regards,
> Kenji Ishii
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-30 Thread Matt Riedemann

On 8/29/2016 2:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.

Keep in mind feature freeze is more or less Thursday 9/1.

Also keep in mind the goal from the midcycle:

"Jay's personal goal for Newton is for the resource tracker to be
writing inventory and allocation data via the placement API. Get the
data pumping into the placement API in Newton so we can start using it
in Ocata."

1. The ResourceTracker work starts here:

https://review.openstack.org/#/c/358797/

That relies on the placement service being in the service catalog and
will be optional for Newton. There are details to be sorted about
if/when to retry connecting to the placement service with or without
requiring a restart of nova-compute, but I don't think those are too hairy.

Jay is working on changes that go on top of that series to push the
inventory and allocation data from the resource tracker to the placement
service.

Chris Dent pointed out that there is remaining work to do with the
allocation objects in the placement API, but those can be worked in
parallel to the RT work Jay is doing.
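
For reference, the traffic item 1 implies looks roughly like the sketch
below. The endpoint paths and payload fields follow the placement REST
API as it is shaping up; the host, token handling and inventory numbers
are illustrative, and error/retry handling is elided.

import uuid

import requests

PLACEMENT = 'http://placement.example.com'  # looked up via the catalog
HEADERS = {'x-auth-token': 'TOKEN', 'accept': 'application/json'}

# Register the compute node as a resource provider...
rp = {'uuid': str(uuid.uuid4()), 'name': 'compute-0.example.com'}
requests.post(PLACEMENT + '/resource_providers', json=rp, headers=HEADERS)

# ...then replace its inventory wholesale.
inventory = {
    'resource_provider_generation': 0,
    'inventories': {
        'VCPU': {'total': 16, 'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 32768, 'reserved': 512},
        'DISK_GB': {'total': 500},
    },
}
requests.put('%s/resource_providers/%s/inventories' % (PLACEMENT, rp['uuid']),
             json=inventory, headers=HEADERS)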

2. Chris is going to clean up the devstack change that adds the placement
service:

https://review.openstack.org/#/c/342362/

The main issue is there isn't a separate placement database, at least
not by default, so Chris has to take that into account. In Newton, by
default, the Nova API DB will be used for the placement service. You can
optionally configure a separate placement database with the API DB
schema, but we're not going to test with that as the default in devstack
in Newton since that's most likely not what deployers would be doing in
Newton as the placement service is still optional.

3. I'm going to work on a job that runs in the experimental queue and
enables the placement service. So by default in Newton devstack the
placement service will not be configured or running. With the
experimental queue job we can test the Nova changes with and without the
placement service to make sure we didn't completely screw something up.

--

If I've left something out please add it here.



We're also tracking stuff in this etherpad:

https://etherpad.openstack.org/p/placement-next

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský

On 30.8.2016 18:16, Zane Bitter wrote:

On 30/08/16 12:02, Steven Hardy wrote:

  debug_tripleo2:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

$ heat output-show foo5 debug_tripleo2
WARNING (shell) "heat output-show" is deprecated, please use "openstack stack output show" instead
Output error: can only concatenate tuple (not "list") to tuple

I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
it looks like it won't work.


It works flawlessly in yaqluator, so that sounds like a Heat bug.


Ack, I reported it so that it doesn't fall through the cracks:

https://bugs.launchpad.net/heat/+bug/1618538


Jirka



- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský

On 30.8.2016 18:02, Steven Hardy wrote:

On Tue, Aug 30, 2016 at 04:10:47PM +0200, Jiří Stránský wrote:


On 30.8.2016 10:17, Steven Hardy wrote:




Yeah, that gets us closer, but we do need to handle more than one value
(list entry) per key, e.g:

 data:
   l:
 - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
   "tripleo_packages_node_names": ["a0", "a1", "a2"]
 - "nova_compute_node_names": ["b0"]
   "tripleo_packages_node_names": ["b0"]

Output needs to be like:

 "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
 "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
 "nova_compute_node_names": ["b0"]



Hoping this could do it:

[stack@instack ~]$ cat yaq.yaml
heat_template_version: 2016-10-14

outputs:
  debug:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
  "tripleo_packages_node_names": ["a0", "a1", "a2"]
- "nova_compute_node_names": ["b0"]
  "tripleo_packages_node_names": ["b0"]


Thanks for this!

Unfortunately I don't think it works with more than two list items:

  debug_tripleo2:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

$ heat output-show foo5 debug_tripleo2
WARNING (shell) "heat output-show" is deprecated, please use "openstack stack output show" instead
Output error: can only concatenate tuple (not "list") to tuple

I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
it looks like it won't work.


Hmm, yeah, that's strange, because YAQL has a test case for reduce() with 5 
items:


https://github.com/openstack/yaql/blob/f71a0305089997cbfa5ff00f660920711b04f39e/yaql/tests/test_queries.py#L337-L339

Anyway, good that we have the solution below that works :)

Jirka



However I did find an approach earlier with therve which seems to do what is
needed:

 debug_tripleo:
value:
  yaql:
# $.selectMany($.items()).groupBy($[0], $[1][0])
# reduce($1 + $2)')
# dict($.selectMany($.items()).groupBy($[0], $[1], [$[0],
# $[1].flatten()]))
expression: dict($.data.l.selectMany($.items()).groupBy($[0], $[1],
[$[0], $[1].flatten()]))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names2": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

Output:

$ heat output-show foo5 debug_tripleo
WARNING (shell) "heat output-show" is deprecated, please use "openstack stack output show" instead
{
  "gnocchi_metricd_node_names": [
"overcloud-controller-0",
"overcloud-controller-1",
"overcloud-controller-2"
  ],
  "tripleo_packages_node_names": [
"overcloud-controller-0",
"overcloud-controller-1",
"overcloud-controller-2",
"overcloud-compute-0",
"overcloud-cephstorage-0"
  ],
  "ceph_osd_node_names": [
"overcloud-cephstorage-0"
  ],
  "tripleo_packages_node_names2": [
"overcloud-controller-0",
"overcloud-controller-1",
"overcloud-controller-2",
"overcloud-compute-0",
"overcloud-cephstorage-0"
  ],
  "nova_compute_node_names": [
"overcloud-compute-0"
  ]
}

It's a bit complex, but I think it will work as a stopgap solution until we
can land a map_deep_merge function for heat (further yaql optimizations
welcome! :)

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Zane Bitter

On 30/08/16 12:02, Steven Hardy wrote:

  debug_tripleo2:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

$ heat output-show foo5 debug_tripleo2
WARNING (shell) "heat output-show" is deprecated, please use "openstack stack output show" instead
Output error: can only concatenate tuple (not "list") to tuple

I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
it looks like it won't work.


It works flawlessly in yaqluator, so that sounds like a Heat bug.

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Dan Smith
>> Even in the case of projects using versioned objects, it still
>> means a SQL layer has to include functionality for both versions of
>> a particular schema change which itself is awkward.

That's not true. Nova doesn't have multiple models to straddle a
particular change. We just...

> It's simple, these are the holy SQL schema commandments:
> 
> Don't delete columns, ignore them.
> Don't change columns, create new ones.
> When you create a column, give it a default that makes sense.
> Do not add new foreign key constraints.

...do this ^ :)

We can drop columns once they're long-since-unused, but we still don't
need duplicate models for that.

--Dan
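
Rendered as an actual change (with illustrative names, along the lines of
the aggregates-uuid example mentioned earlier in the thread), the pattern
looks like this:

import sqlite3
import uuid

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE aggregates (id INTEGER PRIMARY KEY, name TEXT)')
conn.execute("INSERT INTO aggregates (name) VALUES ('rack1')")  # "old" row

# The schema migration: purely additive, so old code's INSERTs and
# SELECTs keep working unchanged while both versions run.
conn.execute('ALTER TABLE aggregates ADD COLUMN uuid TEXT DEFAULT NULL')

# New code backfills the column online, outside the schema migration.
rows = conn.execute('SELECT id FROM aggregates WHERE uuid IS NULL').fetchall()
for (agg_id,) in rows:
    conn.execute('UPDATE aggregates SET uuid = ? WHERE id = ?',
                 (str(uuid.uuid4()), agg_id))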

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Steven Hardy
On Tue, Aug 30, 2016 at 04:10:47PM +0200, Jiří Stránský wrote:
> 
> On 30.8.2016 10:17, Steven Hardy wrote:
> 
> 
> 
> > Yeah, that gets us closer, but we do need to handle more than one value
> > (list entry) per key, e.g:
> > 
> >  data:
> >l:
> >  - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
> >"tripleo_packages_node_names": ["a0", "a1", "a2"]
> >  - "nova_compute_node_names": ["b0"]
> >"tripleo_packages_node_names": ["b0"]
> > 
> > Output needs to be like:
> > 
> >  "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
> >  "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
> >  "nova_compute_node_names": ["b0"]
> > 
> 
> Hoping this could do it:
> 
> [stack@instack ~]$ cat yaq.yaml
> heat_template_version: 2016-10-14
> 
> outputs:
>   debug:
> value:
>   yaql:
> expression: $.data.l.reduce($1.mergeWith($2))
> data:
>   l:
> - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
>   "tripleo_packages_node_names": ["a0", "a1", "a2"]
> - "nova_compute_node_names": ["b0"]
>   "tripleo_packages_node_names": ["b0"]

Thanks for this!

Unfortunately I don't think it works with more than two list items:

  debug_tripleo2:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

$ heat output-show foo5 debug_tripleo2
stack output show" instead
Output error: can only concatenate tuple (not "list") to tuple

I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
it looks like it won't work.

However I did find an approach earlier with therve which seems to do what is
needed:

 debug_tripleo:
value:
  yaql:
# $.selectMany($.items()).groupBy($[0], $[1][0])
# reduce($1 + $2)')
# dict($.selectMany($.items()).groupBy($[0], $[1], [$[0],
# $[1].flatten()]))
expression: dict($.data.l.selectMany($.items()).groupBy($[0], $[1],
[$[0], $[1].flatten()]))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names2": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

Output:

$ heat output-show foo5 debug_tripleo
stack output show" instead
{
  "gnocchi_metricd_node_names": [
"overcloud-controller-0", 
"overcloud-controller-1", 
"overcloud-controller-2"
  ], 
  "tripleo_packages_node_names": [
"overcloud-controller-0", 
"overcloud-controller-1", 
"overcloud-controller-2", 
"overcloud-compute-0", 
"overcloud-cephstorage-0"
  ], 
  "ceph_osd_node_names": [
"overcloud-cephstorage-0"
  ], 
  "tripleo_packages_node_names2": [
"overcloud-controller-0", 
"overcloud-controller-1", 
"overcloud-controller-2", 
"overcloud-compute-0", 
"overcloud-cephstorage-0"
  ], 
  "nova_compute_node_names": [
"overcloud-compute-0"
  ]
}

It's a bit complex, but I think it will work as a stopgap solution until we
can land a map_deep_merge function for heat (further yaql optimizations
welcome! :)
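
For anyone trying to follow the yaql, here is a plain-Python sketch of the
merge semantics the groupBy expression above implements (illustrative only,
not Heat or yaql code):

  def deep_merge_map_of_lists(dicts):
      """Concatenate the list values of each key across all the dicts."""
      merged = {}
      for d in dicts:
          for key, values in d.items():
              merged.setdefault(key, []).extend(values)
      return merged

  nodes = [
      {"gnocchi_metricd_node_names": ["a0", "a1", "a2"],
       "tripleo_packages_node_names": ["a0", "a1", "a2"]},
      {"nova_compute_node_names": ["b0"],
       "tripleo_packages_node_names": ["b0"]},
  ]
  assert deep_merge_map_of_lists(nodes) == {
      "gnocchi_metricd_node_names": ["a0", "a1", "a2"],
      "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"],
      "nova_compute_node_names": ["b0"],
  }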

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] should we provide 'ironic node-list --chassis' and 'ironic port-list --node' commands?

2016-08-30 Thread Vladyslav Drok
The ironic node-list --chassis form seems easier for me to understand.
I've commented on one of the patches that maybe we should deprecate
chassis-node-list if we add this, but then, deprecation is slow and we
have some functional tests already... Having two commands kind of reflects
the duplication in our API, where we can do /chassis/<uuid>/nodes and
/nodes?chassis_uuid=<uuid>, so maybe having both of them is fine.

Vlad

On Mon, Aug 29, 2016 at 7:22 PM, Loo, Ruby  wrote:

> Hi,
>
>
>
> While working on the openstackclient plugin commands for ironic, I was
> thinking about the equivalents for 'ironic chassis-node-list' (nodes that
> are part of a specified chassis) and 'ironic node-port-list' (ports that are
> part of a specified node). It didn't make sense to me to have an 'openstack
> baremetal chassis node list', since a 'chassis' and a 'node' are two
> different objects in osc lingo and we already have 'openstack baremetal
> chassis xx' and 'openstack baremetal node yy' commands. Furthermore, our
> REST API supports 'GET /v1/nodes?chassis=c1' and 'GET /v1/ports?node=n1'.
>
>
>
> So I proposed 'openstack baremetal node list --chassis' and 'openstack
> baremetal port list --node' [1]. To implement this, I need to enhance our
> corresponding python APIs. The question I have is whether we want to only
> enhance the python API, or also provide 'ironic node-list --chassis' and
> 'ironic port-list --node' commands. The latter is being proposed [2] and
> coded at [3]. Doing this would mean two different ironic CLIs to do the
> same thing, but also provide a more obvious 1:1 correspondence between
> ironic & osc commands, and between ironic CLI and python API.
>
>
>
> Thoughts?
>
>
>
> It'd be great if we could decide in the next day or so, in order to get
> the osc-related commands into the client this week for the Newton release.
>
>
>
> --ruby
>
>
>
> [1] http://specs.openstack.org/openstack/ironic-specs/specs/
> approved/ironicclient-osc-plugin.html#openstack-baremetal-node
>
> [2] https://launchpad.net/bugs/1616242
>
> [3] https://review.openstack.org/#/c/359520/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The State of the NFS Driver ...

2016-08-30 Thread Jay S. Bryant

All,

I wanted to follow up on the e-mail thread [1] on Cloning support in the 
NFS driver.  The purpose of this e-mail is to provide the plan for the 
NFS driver going forward as I see it.


First, I am aware that the driver has gone quite some time without care 
and feeding.  For a number of reasons, the Public Cloud team within IBM 
is currently dependent upon the NFS driver working properly for the 
cloud environment we are building.  Given our current dependence on the 
driver we are planning on picking up the driver and maintaining it.


The first step in this process was getting the existing patch that adds 
snapshot support for NFS [2] rebased.  I did this work a couple of weeks 
ago and also got all the unit tests working for the unit test 
environment on the master branch.  I now see that it is in merge 
conflict again; I plan to continue to keep the patch up-to-date.


Erlon has been investigating issues with attaching snapshots.  It 
appears that this may be related to AppArmor running on the system where 
the VM is running and attachment is being attempted.  I am hoping to 
look into the other questions posed in the patch review in the next week 
or two.


The next step is to create a patch, dependent upon the snapshot patch, 
to implement cloning.  I am planning to also undertake this work.  I am 
assuming that getting the cloning support in place shouldn't be too 
difficult once snapshots are working, as it will be just a matter of 
using the support from the remotefs driver.
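
For the curious, the shape of that cloning change would be roughly the
following (a sketch of a driver method only -- the helper names are assumed
to mirror the remotefs snapshot support, not the final patch):

  def create_cloned_volume(self, volume, src_vref):
      """Clone src_vref by snapshotting it and copying the snapshot."""
      # Take a temporary snapshot of the source volume ...
      temp_snapshot = self._create_temp_snapshot(src_vref)
      try:
          # ... build the new volume's file from that snapshot ...
          self._copy_volume_from_snapshot(temp_snapshot, volume,
                                          volume['size'])
      finally:
          # ... and always clean the temporary snapshot up.
          self._delete_snapshot(temp_snapshot)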


The last piece of work we have in flight is working on adding QoS 
support to the NFS driver.  We have the following spec proposed to get 
that work started: [3]


So, we are in the process of bringing the NFS driver up to good 
standing.  During this process we would greatly appreciate reviews and 
input from those of you who have previously worked on the driver in 
order to expedite integration of the necessary changes. I feel it is in 
the best interest of the community to get the driver updated and 
supported given that it is the 4th most used driver according to our 
user survey.  I think it would not look good to our users if it were to 
suddenly be removed.


Thanks to all of you for your support in this effort!

Jay

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102193.html


[2] https://review.openstack.org/#/c/147186/

[3] https://review.openstack.org/361456


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Feature freeze exception for OVN

2016-08-30 Thread Steven Hardy
On Tue, Aug 30, 2016 at 07:53:07PM +0530, Babu Shanmugam wrote:
> Hi,
> 
> The THT patch for OVN [1] has no more dependencies after the recent merging
> of [2]. The changes in the heat templates do not have any impact on the
> existing templates, as the majority of the changes are in new templates and
> will be used only when OVN is enabled.
> 
> It would be nice to have OVN templates for the upcoming release considering
> that OVN's first official release is due shortly. I am sure [1] will have
> some review comments and is unlikely to get merged by today, but will you be
> able to consider a freeze exception for this feature?

Considering it's only one patch and disabled by default I think we can, but
please can you raise a launchpad blueprint for this feature?

I've reviewed the patch previously, but it's slipped off my newton review
list because it's not tracked in launchpad.

Thanks!

Steve

> 
> 
> Thank you,
> Babu
> 
> 
> [1] - https://review.openstack.org/307734
> 
> [2] - https://review.openstack.org/314875

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #91

2016-08-30 Thread Emilien Macchi
No topic this week, meeting cancelled!

See you next week :)

On Mon, Aug 29, 2016 at 1:45 PM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi,
>
> If you have any topic to add for this week, please use the etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160830
>
> See you tomorrow,
>
> On Tue, Aug 23, 2016 at 1:08 PM, Iury Gregory <iurygreg...@gmail.com> wrote:
>> No topic/discussion in our agenda, we cancelled the meeting, see you next
>> week!
>>
>>
>>
>> 2016-08-22 16:19 GMT-03:00 Iury Gregory <iurygreg...@gmail.com>:
>>>
>>> Hi Puppeteers!
>>>
>>> We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4
>>>
>>> Here's a first agenda:
>>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160823
>>>
>>> Feel free to add topics, and any outstanding bug and patch.
>>>
>>> See you tomorrow!
>>> Thanks,
>>
>>
>>
>>
>> --
>>
>> ~
>> Att[]'s
>> Iury Gregory Melo Ferreira
>> Master student in Computer Science at UFCG
>> E-mail:  iurygreg...@gmail.com
>> ~
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-30 Thread Alioune
Hi,
Have you received my previous email?

Regards,

On 15 August 2016 at 13:39, Alioune  wrote:

> Hi all,
> I'm trying to launch OpenStack SFC as explained in [1] by creating 2 SFs, 1
> Web Server (DST) and the DHCP namespace as the SRC.
> I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
> the neutron L2-agent runs correctly.
> I followed the process by creating classifier, port pairs and port_group
> but I got an error message "delete_port_chain failed." when creating the
> port_chain [2]
> I tried to create the neutron ports with and without the option
> "--no-security-groups" then tcpdpump on SFs tap interfaces but the ICMP
> packets don't go through the SFs.
>
> Can anyone advise how to fix that?
> What's your channel on IRC ?
>
> Regards,
>
>
> [1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
> [2]
> vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
> delete_port_chain failed.
> vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
> #!/bin/bash
>
> neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
> --flow-classifier FC1 PC1
>
> [3] Output OVS Flows
>
> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
> OFPST_FLOW reply (OF1.3) (xid=0x2):
>  cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146,
> n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5,
> n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,20)
>  cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141,
> n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,22)
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=3, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=4, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=8617.106s, table=4, n_packets=0,
> n_bytes=0, priority=1,tun_id=0x40e actions=push_vlan:0x8100,set_
> field:4097->vlan_vid,resubmit(,10)
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=6, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=10, n_packets=0,
> n_bytes=0, priority=1 actions=learn(table=20,hard_
> timeout=300,priority=1,cookie=0xbc2e9105125301dc,NXM_OF_
> VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0-
> >NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],
> output:NXM_OF_IN_PORT[]),output:1
>  cookie=0xbc2e9105125301dc, duration=9615.378s, table=20, n_packets=5,
> n_bytes=490, priority=0 actions=resubmit(,22)
>  cookie=0xbc2e9105125301dc, duration=9615.342s, table=22, n_packets=146,
> n_bytes=11534, priority=0 actions=drop
> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-int -O OpenFlow13
> OFPST_FLOW reply (OF1.3) (xid=0x2):
>  cookie=0xbc2e9105125301dc, duration=6712.090s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=7,icmp_type=136 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6709.623s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=8,icmp_type=136 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6555.755s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=10,icmp_type=136
> actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6559.596s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=9,icmp_type=136 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6461.028s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=11,icmp_type=136
> actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6712.071s, table=0, n_packets=13,
> n_bytes=546, priority=10,arp,in_port=7 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6709.602s, table=0, n_packets=0,
> n_bytes=0, priority=10,arp,in_port=8 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6555.727s, table=0, n_packets=0,
> n_bytes=0, priority=10,arp,in_port=10 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6559.574s, table=0, n_packets=12,
> n_bytes=504, priority=10,arp,in_port=9 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6461.005s, table=0, n_packets=15,
> n_bytes=630, priority=10,arp,in_port=11 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=9620.388s, table=0, n_packets=514,
> n_bytes=49656, priority=0 actions=NORMAL
>  cookie=0xbc2e9105125301dc, duration=9619.277s, table=0, n_packets=0,
> n_bytes=0, priority=20,mpls actions=resubmit(,10)
>  cookie=0xbc2e9105125301dc, duration=6712.111s, table=0, n_packets=25,
> n_bytes=2674, priority=9,in_port=7 actions=resubmit(,25)
>  cookie=0xbc2e9105125301dc, duration=6559.621s, table=0, n_packets=24,
> n_bytes=2576, priority=9,in_port=9 actions=resubmit(,25)
>  

Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-30 Thread Afek, Ifat (Nokia - IL)
Hi Yujun,

From: Yujun Zhang
Date: Monday, 29 August 2016 at 11:59

entities:
  - type: switch
    name: switch-1
    id: switch-1 # should be same as name
    state: available
    relationships:
      - type: nova.host
        name: host-1
        id: host-1 # should be same as name
        is_source: true # entity is `source` in this relationship
        relation_type: attached
      - type: switch
        name: switch-2
        id: switch-2 # should be same as name
        is_source: false # entity is `target` in this relationship
        relation_type: backup

I think that’s the idea, instead of making this assumption in the code.

But I wonder why the static physical configuration file uses a different 
format from the vitrage template definitions [1].

[1] 
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst

What do you mean? The purpose of the templates is to describe the 
condition-action behaviour, whereas the purpose of the static configuration is 
to define resources to be added to the vitrage graph. Can you please explain how 
you would make the formats more similar?

Best Regards,
Ifat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-30 Thread Jay S. Bryant

Ben,

Thank you very much for the history on the driver here.  This was all 
news to me.  It helps me to understand why cloning was listed as 
supported but was no longer implemented.  Also explains why the simpler 
'cp' based solution hasn't been implemented.  I was wondering why that 
hadn't just been done.


IBM is planning to take over the process of developing/maintaining the 
NFS driver, at least in the near term. Erlon has also been helping with 
this.


We would really like to avoid the driver being removed.  Perhaps I will 
send a separate 'State of the NFS driver' update e-mail to the mailing 
list to explain what I think the state of the driver is and what I see 
to be the plan going forward.


Thanks,

Jay


On 08/25/2016 12:06 PM, Ben Swartzlander wrote:


Originally the NFS driver did support snapshots, but it was 
implemented by just 'cp'ing the file containing the raw bits. This 
works fine (if inefficiently) for unattached volumes, but if you do 
this on an attached volume the snapshot won't be crash consistent at all.


It was decided that we could do better for attached volumes by 
switching to qcow2 and relying on nova to perform the snapshots. Based 
on this, the bad snapshot implementation was removed.


However, for a variety of reasons the nova-assisted snapshot 
implementation has remained unmerged for 2+ years and the NFS driver 
has been an exception to the rules for that whole time.


I would like to see that exception end in the near future with either 
the removal of the driver or the completion of the Nova-assisted 
snapshot implementation, and it doesn't really matter to me which.


There is a 3rd alternative which would be to modify the NFS driver to 
require a specific filesystem that supports snapshots (there are a few 
choices here, but definitely NOT ext4). Unfortunately those of us who 
work for storage vendors aren't motivated to make such a modification 
because it would be effectively creating more competition for 
ourselves. The only way this could happen is if someone not working 
for a storage vendor takes this on.


-Ben

On August 25, 2016 10:39:35 AM Erlon Cruz  wrote:


Hi Jordan, Slade,

Currently the NFS driver supports neither cloning nor snapshots 
(which are the base for implementing cloning). AFAIC, the NFS driver 
was in Cinder before the minimum requirements were discussed and 
set, so it just stood there with the features it already supported.


There is currently this job 
'gate-tempest-dsvm-full-devstack-plugin-nfs-nv' [1] that, by the way, 
is failing on the same test you mentioned though passing the snapshot 
tests (not sure how the configuration is doing that), and work [2] 
in progress to support the snapshot feature.


So, Jordan, I think it's OK to allow tempest to skip these tests, 
provided that, at least for the NFS driver, tempest isn't being an 
enforcement mechanism for Cinder's minimum feature requirements.


Erlon


[1] 
http://logs.openstack.org/86/147186/25/experimental/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/b149960/

[2] https://review.openstack.org/#/c/147186/

On Wed, Aug 24, 2016 at 6:34 PM, Jordan Pittier wrote:



On Wed, Aug 24, 2016 at 6:06 PM, Slade Baumann wrote:

I am attempting to disable clone tests in tempest as they aren't
functioning in NFS. But the tests test_volumes_clone.py and
test_volumes_clone_negative.py don't have the "clone" feature
toggle in them. I thought it obvious that if clone is disabled
in tempest, the tests that simply clone should be disabled.

So I put up a bug and fix for it, but have been talking with
Jordan Pittier and he suggested I come to the mailing list to
get this figured out.

I'm not asking for reviews, unless you want to give them.
I'm simply asking if this is the right way to go about this
or if there is something else I need to do to get this into
Tempest.

Here are the bug and fix:
https://bugs.launchpad.net/tempest/+bug/1615770

https://review.openstack.org/#/c/358813/


I would appreciate any suggestion or direction in this problem.

For extra reference, the clone toggle flag was added here:
https://bugs.launchpad.net/tempest/+bug/1488274


Hi,
Thanks for starting this thread. My point about this patch is, as
"volume clone" is part of the core requirements [1] every Cinder
driver must support, I don't see a need for a feature flag. The
feature flag already exists, but that doesn't mean we should
encourage its usage.
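
(For reference, the toggle being discussed is a tempest.conf option;
disabling it would look something like this, assuming the existing
option names:)

  [volume-feature-enabled]
  clone = False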

Now, if this really helps the NFS driver (although I don't know
why we couldn't support clone with 

[openstack-dev] [new][openstackclient] os-client-config 1.21.0 release (newton)

2016-08-30 Thread no-reply
We are high-spirited to announce the release of:

os-client-config 1.21.0: OpenStack Client Configuation Library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

Changes in os-client-config 1.20.1..1.21.0
--

2b52bcf Add prompting for KSA options
72c1cd9 Clean up vendor support list


Diffstat (except docs and test files)
-

os_client_config/config.py| 23 +-
3 files changed, 88 insertions(+), 56 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Python 3 usage and kuryr-kubernetes

2016-08-30 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

This email is gonna be a bit long, so I'll put it in parts.

Kubernetes integration components
==

As you know, we are now in the process of upstreaming the Kuryr Kubernetes
PoC that the Kuryr team at Midokura did. This PoC upstreaming effort has
two main parts:

Kuryr watcher: Based on Python3 asyncio, it connects to the ?watch=true
Kubernetes resource endpoints, then passes the seen events to translators
that end up calling Neutron. With the Neutron resource information returned
by the translators, the watching coroutines update the resource that
originated the event.
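
To make that watch pattern concrete, here is a stripped-down sketch of the
approach (aiohttp assumed; the names are illustrative, not the actual
kuryr-kubernetes code):

  import asyncio
  import json

  import aiohttp

  async def example_translator(event):
      # A real translator would create/update Neutron resources here and
      # return the resulting Neutron information.
      return {'seen': event.get('type')}

  async def watch(endpoint, translate):
      # Long-poll the ?watch=true endpoint and feed every event seen to
      # the translator coroutine.
      async with aiohttp.ClientSession() as session:
          async with session.get(endpoint + '?watch=true') as resp:
              async for line in resp.content:
                  if not line.strip():
                      continue
                  neutron_info = await translate(json.loads(line.decode()))
                  # ... use neutron_info to update (e.g. annotate) the
                  # Kubernetes resource that originated the event.

  if __name__ == '__main__':
      asyncio.get_event_loop().run_until_complete(
          watch('http://127.0.0.1:8080/api/v1/pods', example_translator))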

Kuryr CNI: Py2 and Py3 compatible. It is called by Kubernetes' Kubelet with
the noop container so that the CNI driver does the network plumbing for it.
Basically we use openstack/kuryr binding code to bind Pod veths to Neutron
ports.

Upstream Deployment design
==

In the Kuryr-Kubernetes integration vision, Kuryr CNI is installed wherever
Kubelet is and the Kuryr watcher (or watchers once we support HA) runs in a
container somewhere that can access the Kubernetes, Neutron and Keystone
APIs (which do not need to be able to access the watcher host on anything
other than established connections). The idea behind allowing it to be in a
non-privileged container somewhere is that in this way you do not need to
make Neutron/Keystone accessible from the Kubernetes worker nodes, just
like for a lot of Nova compute deployments (obviously, depending on your
networking vendor, you have rabbitmq agent access to Neutron).

If one does not need the extra isolation for the Kuryr Watcher, the Watcher
containers could even be started by Kubernetes, and the CNI driver would
just put the watcher container on the host networking instead of on the
overlay, so Kubernetes would manage the integration deployment.

OpenStack distributions: when the rubber meets the road
==

If the OpenStack distros, however, would prefer not to run Kuryr Watcher
containerized or they want to, as they probably should, build their own
container (rather than the upstream kuryr/kubernetes one in dockerhub that
is based on alpine linux), they would need to have Python3.5 support. I
understand that at the moment, of the most popular OpenStack distros, only
one supports Python 3.5.

You can imagine where this is heading... These are the options that I can
see:

a) Work with the OpenStack distributions to ensure python3.5 support is
reached soon for Kuryr and its dependencies (some listed below):

   - babel
   - oslo.concurrency
   - oslo.config
   - oslo.log
   - oslo.utils
   - pbr
   - pyroute2
   - python-neutronclient
   - python-keystoneclient

This also implies that distros should adopt a policy of having some OpenStack
services running in Python2 and some in Python3, as I think it is best to
have each project move at its own speed (within reason).

b) As Ilya Chukhnakov from Mirantis proposed, drop Python3 for now and
reimplement it with python-requests and eventlet. He'll work on a PoC to
see its feasibility and how it compares to the asyncio based one.
Personal position
=

I see this as a good opportunity for the OpenStack community at large to
start having Python3-first (and even python3 only) services and allow
OpenStack projects to take full advantage of all the good things Python3
has to offer and move forward with the rest of the Python community.

There have been some efforts in the past in some projects [1][2], but it seems
implementation was deferred indefinitely, probably due to the same distribution
issue that we face now.

In light of the recent discussions in this mailing list and the decision
taken by the Technical Committee [3] about alternative languages. I think
it would be very good for the community to set an official plan and
incentivize the projects to move to Python3 in future releases
(unfortunately, library projects like clients and oslo will most likely
have to keep python2 for longer, but it is probably for the best).

Until such a position is taken, I would like to hear what the rest of
Kuryr (and the rest of OpenStack) has to say about the matter, and we should
at least evaluate the possibility of having to go with option (b) above.

Sorry for the long wall of text, and looking forward to discuss options (a)
and (b) both in these thread and in the next Kuryr weekly meeting this
coming Monday,

Toni

[1]
https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio#What.27s_wrong_with_eventlet.3F
[2] https://blueprints.launchpad.net/oslo.messaging/+spec/asyncio-executor
[3]
http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-02-20.01.log.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Feature freeze exception for OVN

2016-08-30 Thread Babu Shanmugam

Hi,

The THT patch for OVN [1] has no more dependencies after the recent 
merging of [2]. The changes in the heat templates do not have any impact 
on the existing templates, as the majority of the changes are in new 
templates and will be used only when OVN is enabled.


It would be nice to have OVN templates for the upcoming release 
considering that OVN's first official release is due shortly. I am sure 
[1] will have some review comments and is unlikely to get merged by 
today, but will you be able to consider a freeze exception for this feature?



Thank you,
Babu


[1] - https://review.openstack.org/307734

[2] - https://review.openstack.org/314875


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský

 expression: $.data.l.reduce($1.mergeWith($2))


Or maybe it's better with seed value for reduce, just in case:

$.data.l.reduce($1.mergeWith($2), {})


Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský


On 30.8.2016 10:17, Steven Hardy wrote:




Yeah, that gets us closer, but we do need to handle more than one value
(list entry) per key, e.g:

 data:
   l:
 - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
   "tripleo_packages_node_names": ["a0", "a1", "a2"]
 - "nova_compute_node_names": ["b0"]
   "tripleo_packages_node_names": ["b0"]

Output needs to be like:

 "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
 "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
 "nova_compute_node_names": ["b0"]



Hoping this could do it:

[stack@instack ~]$ cat yaq.yaml
heat_template_version: 2016-10-14

outputs:
  debug:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
  "tripleo_packages_node_names": ["a0", "a1", "a2"]
- "nova_compute_node_names": ["b0"]
  "tripleo_packages_node_names": ["b0"]


[stack@instack ~]$ heat output-show yaq debug
WARNING (shell) "heat output-show" is deprecated, please use "openstack 
stack output show" instead

{
  "gnocchi_metricd_node_names": [
"a0",
"a1",
"a2"
  ],
  "tripleo_packages_node_names": [
"a0",
"a1",
"a2",
"b0"
  ],
  "nova_compute_node_names": [
"b0"
  ]
}

Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2016-08-26 11:50:24 -0400:
> 
> On 08/25/2016 01:13 PM, Steve Martinelli wrote:
> > The keystone team is pursuing a trigger-based approach to support
> > rolling, zero-downtime upgrades. The proposed operator experience is
> > documented here:
> >
> >   http://docs.openstack.org/developer/keystone/upgrading.html
> >
> > This differs from Nova and Neutron's approaches to solve for rolling
> > upgrades (which use oslo.versionedobjects), however Keystone is one of
> > the few services that doesn't need to manage communication between
> > multiple releases of multiple service components talking over the
> > message bus (which is the original use case for oslo.versionedobjects,
> > and for which it is aptly suited). Keystone simply scales horizontally
> > and every node talks directly to the database.
> 
> 
> Hi Steve -
> 
> I'm a strong proponent of looking into the use of triggers to smooth 
> upgrades between database versions.Even in the case of projects 
> using versioned objects, it still means a SQL layer has to include 
> functionality for both versions of a particular schema change which 
> itself is awkward.   I'm also still a little worried that not every case 
> of this can be handled by orchestration at the API level, and not as a 
> single SQL layer method that integrates both versions of a schema change.
> 

Speaking as an operator, I'd rather have awkwardness happen in safe, warm
development, rather than in the cold, dirty, broken world of operations.

Speaking as a former DBA: Triggers introduce emergent behaviors and
complicate scaling and reasonable debugging in somewhat hidden ways that
can frustrate even the most experienced DBA. We've discussed FK's before,
and how they are a 1:1 trade-off of integrity vs. performance, and thus
deserve more scrutiny than they're typically given. Well IMO, triggers are
a 1:10 trade-off between development complexity and debugging complexity.

Speaking as a developer: Every case can in fact be handled simply and
in code without the database's help if we're willing to accept a small
level of imperfection and redundancy.

> Using triggers would resolve the issue of SQL-specific application code 
> needing to refer to two versions of a schema at once, at least for those 
> areas where triggers and SPs can handle it.   In the "ideal", it means 
> all the Python code can just refer to one version of a schema, and nuts 
> and bolts embedded into database migrations would handle all the 
> movement between schema versions, including the phase between expand and 
> contract.   Not that I think the "ideal" is ever going to be realized 
> 100%, but maybe in some / many places, this can work.
> 

As someone else brought up, this is an unnecessarily bleak view of how database
migrations work.

It's simple, these are the holy SQL schema commandments:

Don't delete columns, ignore them.
Don't change columns, create new ones.
When you create a column, give it a default that makes sense.
Do not add new foreign key constraints.

Following these commandments, one can run schema changes at any time. A
new schema should be completely ignorable by older code, because their
columns keep working, and no new requirements are introduced. New code
can deal with defaulted new columns gracefully.
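
As a concrete illustration of those commandments, an additive Alembic
migration looks like this (table and column names here are made up, not
from any real project):

  from alembic import op
  import sqlalchemy as sa

  def upgrade():
      # Don't alter the old column; add a new one with a server-side
      # default so rows written by old code remain valid.
      op.add_column(
          'widgets',
          sa.Column('display_name', sa.String(255),
                    nullable=False, server_default=''))
      # Note: no drop_column() and no new ForeignKeyConstraint here --
      # drops happen in a later migration, once all app code has moved on.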

Of course, once one can be certain that all app code is updated, one can
drop old columns and tables, and add FK constraints (if you so desire,
I personally think they're a waste of precious DB resources, but that
is a much more religious debate and I accept that it's not part of
this debate).

> So if Keystone wants to be involved in paving the way for working with 
> triggers, IMO this would benefit other projects in that they could 
> leverage this kind of functionality in those places where it makes sense.
> 
> The problem of "zero downtime database migrations" is an incredibly 
> ambitious goal and I think it would be wrong to exclude any one 
> particular technique in pursuing this.  A real-world success story would 
> likely integrate many different techniques as they apply to specific 
> scenarios, and triggers and SPs IMO are a really major one which I 
> believe can be supported.
> 

I don't think it's all that ambitious to think we can just use tried and
tested schema evolution techniques that work for everyone else.

> >
> > Database triggers are obviously a new challenge for developers to write,
> > honestly challenging to debug (being side effects), and are made even
> > more difficult by having to hand write triggers for MySQL, PostgreSQL,
> > and SQLite independently (SQLAlchemy offers no assistance in this case),
> > as seen in this patch:
> 
> So I would also note that we've been working on the availability of 
> triggers and stored functions elsewhere, a very raw patch that is to be 
> largely rolled into oslo.db is here:
> 
> https://review.openstack.org/#/c/314054/
> 
> This patch makes use of an Alembic pattern called 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread lebre . adrien


- Mail original -
> De: "Andrew Laski" 
> À: openstack-dev@lists.openstack.org
> Envoyé: Mardi 30 Août 2016 15:03:35
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> 
> 
> On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> > Dear all
> > 
> > Sorry my lack of reactivity, I 've been out for the few last days.
> > 
> > According to the different replies, I think we should enlarge the
> > discussion and not stay on the vCPE use-case, which is clearly
> > specific
> > and represents only one use-case among the ones we would like to
> > study.
> > For instance we are in touch with NRENs in France and Poland that
> > are
> > interested to deploy up to one rack in each of their largest PoP in
> > order
> > to provide a distributed IaaS platform  (for further informations
> > you can
> > give a look to the presentation we gave during the last summit [1]
> > [2]).
> > 
> > The two questions were:
> > 1./ Understand whether the fog/edge computing use case is in the
> > scope of
> > the Architecture WG and if not, do we need a massively distributed
> > WG?
> 
> Besides the question of which WG this might fall under is the
> question
> of how any of the work groups are going to engage with the project
> communities. There is a group of developers pushing forward on
> cellsv2
> in Nova; there should be some level of engagement between them and
> whomever is discussing the fog/edge computing use case. To me it
> seems
> like there's some level of overlap between the efforts even if
> cellsv2
> is not a full solution. But whatever conversations are taking place
> about fog/edge or large scale distributed use cases seem  to be
> happening in channels that I am not aware of, and I haven't heard any
> other cells developers mention them either.
> 

I can only agree !
Actually we organised an informal exchange with Sylvain Bauza in July in order 
to get additional information regarding the Cell V2 
architecture/implementation.  From our point of view, such changes in the code 
can help us toward our ultimate goal of managing remote DCs in an efficient 
manner (i.e by mitigating for instance the inter-sites traffic). 


> So let's please find a way for people who are interested in these use
> cases to talk to the developers who are working on similar things.

What is your proposal? Any particular ideas in mind?

Ad_rien_

> 
> 
> > 2./ How can we coordinate our actions with the ones performed in
> > the
> > Architecture WG?
> > 
> > Regarding 1./, according to the different reactions, I propose to
> > write a
> > first draft in an etherpard to present the main goal of the
> > Massively
> > distributed WG and how people interested by such discussions can
> > interact
> > (I will paste the link to the etherpad by tomorrow).
> > 
> > Regarding 2./,  I mentioned the Architecture WG because we do not
> > want to
> > develop additional software layers like Tricircle or other
> > solutions (at
> > least for the moment).
> > The goal of the WG is to conduct studies and experiments to
> > identify to
> > what extent current mechanisms can satisfy the needs of such a
> > massively
> > distributed use-cases and what are the missing elements.
> > 
> > I don't want to give too many details in the present mail in order
> > to stay
> > as concise as possible (details will be given in the proposal).
> > 
> > Best regards,
> > Adrien
> > 
> > [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the
> > use-case
> > introduction ;  the distribution of the DB  was one possible
> > revision of
> > Nova and according to the cell V2 changes it is probably now
> > deprecated).
> > [2] https://hal.inria.fr/hal-01320235
> > 
> > - Mail original -
> > > De: "Peter Willis" 
> > > À: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > Envoyé: Mardi 30 Août 2016 11:24:00
> > > Objet: Re: [openstack-dev] [all][massively
> > > distributed][architecture]Coordination between actions/WGs
> > > 
> > > 
> > > 
> > > Colleagues,
> > > 
> > > 
> > > An interesting discussion, the only question appears to be
> > > whether
> > > vCPE is a suitable use case as the others do appear to be cloud
> > > use
> > > cases. Lots of people assume CPE == small residential devices
> > > however CPE covers a broad spectrum of appliances. Some of our
> > > customers' premises are data centres, some are HQs, some are
> > > campuses, some are branches. For residential CPE we use the
> > > Broadband Forum's CPE Wide Area Network management protocol
> > > (TR-069), which may be easier to modify to handle virtual
> > > machines/containers etc. than to get OpenStack to scale to
> > > millions
> > > of nodes. However that still leaves us with the need to manage a
> > > stack of servers in thousands of telephone exchanges, central
> > offices or even cell-sites, running multiple work loads in a
> > distributed fault tolerant manner.

[openstack-dev] [new][ironic] virtualbmc 0.1.0 release

2016-08-30 Thread no-reply
We are happy to announce the release of:

virtualbmc 0.1.0: Create virtual BMCs for controlling virtual
instances via IPMI

This is the first release of virtualbmc.

For more details, please see below.

Changes in virtualbmc d10b57d95acd35e47f51aba1fe9a98ccf0de08d3..0.1.0
-

9f4d478 Use upper-constraints for all tox targets
147dd42 Remove unused releasenote setup
e3a4d8b Set IPMI session timeout for the virtual BMCs
db5bdf9 Use constraints for all the things
02f4489 Bump pyghmi version to 1.0.3
af00ea2 Add unittests for the config.py module
ec1595b Add unittests for the vbmc.py module
5325880 Raise an exception for domains already registered
b71669a Add "cover" to .gitignore
bba706e Add unittests for the manager.py module
03cda55 "vbmc list" to sort the result by domain name
b14ae0b Add unittests for the cmd/vbmc.py module
5d90759 Add unittests for the utils.py module
85a0b20 Restructure the repository according to OpenStack
05ffd8b Mask passwords before displaying them
c2d6399 Bump the version of the project to 0.0.4
b7455ff Add support for SASL authentication with libvirt
03ee8d0 Add support for parsing multiple values
4a35626 Add --version parameter
d8f01e0 Clarify the 'address' parameter
6df7622 Bump the version of the project to 0.0.3
634ffd2 Fix "show" command
162aec3 Bump the version of the project to 0.0.2
89128be Add config.py module
7f81c4b Create utils.py module
937dcd0 Split VirtualBMCManager to its own file
dc0efbe Add better logs
3c3df66 Make the XML handling more resilient to failures
f842117 Check VM state prior to power it on/off
f6e7153 Add the "vbmc" utility
d985f52 Add a SIGINT handler
aff825c Initial Commit with code
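
For anyone wanting to try it, basic usage looks roughly like this (the
domain name, port and credentials below are placeholders; see the project
README for the authoritative syntax):

  $ vbmc add node-0 --port 6230
  $ vbmc start node-0
  $ vbmc list
  $ ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power status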




Requirements updates


diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..55907e5
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,9 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+
+pbr>=1.6 # Apache-2.0
+six>=1.9.0 # MIT
+libvirt-python>=1.2.5 # LGPLv2+
+pyghmi>=1.0.3 # Apache-2.0
+PrettyTable>=0.7,<0.8  # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
new file mode 100644
index 0000000..7c40c36
--- /dev/null
+++ b/test-requirements.txt
@@ -0,0 +1,18 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+
+hacking>=0.10.2,<0.11 # Apache-2.0
+
+coverage>=3.6 # Apache-2.0
+doc8 # Apache-2.0
+python-subunit>=0.0.18 # Apache-2.0/BSD
+sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 # BSD
+oslosphinx>=2.5.0,!=3.4.0 # Apache-2.0
+oslotest>=1.10.0 # Apache-2.0
+testrepository>=0.0.18 # Apache-2.0/BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
+os-testr>=0.4.1 # Apache-2.0
+mock>=1.2 # BSD
+



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE request for Manila CephFS Native backend integration

2016-08-30 Thread Erno Kuvaja
On Tue, Aug 30, 2016 at 9:42 AM, Erno Kuvaja  wrote:
> On Tue, Aug 30, 2016 at 9:09 AM, Steven Hardy  wrote:
>> On Tue, Aug 30, 2016 at 06:32:16AM +0100, Erno Kuvaja wrote:
>>> On Fri, Aug 19, 2016 at 9:53 AM, Erno Kuvaja  wrote:
>>> > Hi all,
>>> >
>>> > I'm still working on getting all pieces together for the Manila CephFS
>>> > driver integration and realizing that we have about a week of busy
>>> > gating left 'till FF and the changes are not reviewed yet, I'd like to
>>> > ask community consider the feature for Feature Freeze Exception.
>>> >
>>> > I'm confident that I will get all the bits together over next week or
>>> > so, but I'm far from confident that we will have them merged in time.
>>> > I would like to see this feature making Newton still.
>>> >
>>> > Best,
>>> > Erno (jokke) Kuvaja
>>>
>>> The last commit for this feature is in review [0], pending the
>>> decision how & if we split these backends in THT.
>>>
>>> [0] https://review.openstack.org/#/c/358525/
>>
>> I'm fine with a FFE for this since it's low risk and disabled by default.
>>
>> Can you please create a blueprint in launchpad if there's not one already,
>> as it seems like this work has been done without any associated BP, and
>> this makes it hard to track around release time.
>>
>> Thanks,
>>
>> Steve
>>
>
> Sure Steve,
>
> I get it done today.
>
> Thanks,
> Erno

The blueprint can be found from
https://blueprints.launchpad.net/tripleo/+spec/manila-cephfs-integration

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] openstack/kuryr-libnetwork dropping 'common' subpackage

2016-08-30 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

There is a proposal for dropping the 'common' openstack/kuryr-libnetwork
package and moving its pieces into the parent kuryr_libnetwork package [1].

The idea behind 'common' was that it would serve the purpose currently
covered by the openstack/kuryr repository, i.e., to serve as a place to
put the utilities/config used by all the Kuryr integrations (be it
libnetwork, k8s, fuxi). Since we were advised and decided to split
openstack/kuryr into:

- openstack/kuryr: Kuryr library for common functionality and configurations
- openstack/kuryr-libnetwork: For Docker libnetwork specific code

The meaningful options for what to do about the 'common' package are two:

- Repurpose it to contain common code for different kuryr-libnetwork
drivers and refactor kuryr into plugins (IPAM, remote driver).
- Move the common modules to the parent package.

Personally, I could see a nice clean separation with the former, but I have
to say that at the current stage, after all the friction we got from the
repository split, the best option in my mind is to go with the latter
option.

[1] https://review.openstack.org/#/c/361567/

Antoni Segura Puimedon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN 0070] Bandit versions lower than 1.1.0 do not escape HTML in issue reports

2016-08-30 Thread Luke Hinds
Bandit versions lower than 1.1.0 do not escape HTML in issue reports
---

### Summary ###

Bandit versions lower than 1.1.0 have a bug in the HTML report formatter
that does not escape HTML in issue context snippets. This could lead to
an XSS if HTML reports are hosted as part of a CI pipeline.

### Affected Services / Software ###

Bandit: < 1.1.0

### Discussion ###

Bandit versions lower than 1.1.0 have a bug in the HTML report formatter
that does not escape HTML in issue context snippets. This could lead to
an XSS attack if HTML reports are hosted as part of a CI pipeline
because HTML in the source code would be copied verbatim into the report.

For example:

  import subprocess
  subprocess.Popen("alert(1)", shell=True)

Will cause "alert(1)" to be inserted into the HTML
report. This issue could allow for arbitrary code injection into CI/CD
pipelines that feature accessible HTML reports generated from Bandit runs.

### Recommended Actions ###

Update bandit to version 1.1.0 or greater.
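
For example, assuming a pip-based installation:

  $ pip install --upgrade 'bandit>=1.1.0'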

### Contacts / References ###
Author: Tim Kelsey , HPE
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0070
Original LaunchPad Bug : https://bugs.launchpad.net/bandit/+bug/1612988
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
CVE: N/A


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Andrew Laski


On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> Dear all 
> 
> Sorry my lack of reactivity, I 've been out for the few last days.
> 
> According to the different replies, I think we should enlarge the
> discussion and not stay on the vCPE use-case, which is clearly specific
> and represents only one use-case among the ones we would like to study.
> For instance we are in touch with NRENs in France and Poland that are
> interested in deploying up to one rack in each of their largest PoPs in order
> to provide a distributed IaaS platform (for further information you can
> take a look at the presentation we gave during the last summit [1] [2]).
> 
> The two questions were: 
> 1./ Understand whether the fog/edge computing use case is in the scope of
> the Architecture WG and if not, do we need a massively distributed WG? 

Besides the question of which WG this might fall under is the question
of how any of the work groups are going to engage with the project
communities. There is a group of developers pushing forward on cellsv2
in Nova there should be some level of engagement between them and
whomever is discussing the fog/edge computing use case. To me it seems
like there's some level of overlap between the efforts even if cellsv2
is not a full solution. But whatever conversations are taking place
about fog/edge or large scale distributed use cases seem  to be
happening in channels that I am not aware of, and I haven't heard any
other cells developers mention them either.

So let's please find a way for people who are interested in these use
cases to talk to the developers who are working on similar things.


> 2./ How can we coordinate our actions with the ones performed in the
> Architecture WG? 
> 
> Regarding 1./, according to the different reactions, I propose to write a
> first draft in an etherpad to present the main goal of the Massively
> distributed WG and how people interested by such discussions can interact
> (I will paste the link to the etherpad by tomorrow). 
> 
> Regarding 2./,  I mentioned the Architecture WG because we do not want to
> develop additional software layers like Tricircle or other solutions (at
> least for the moment). 
> The goal of the WG is to conduct studies and experiments to identify to
> what extent current mechanisms can satisfy the needs of such massively
> distributed use-cases and what the missing elements are.
> 
> I don't want to give too many details in the present mail in order to stay
> as concise as possible (details will be given in the proposal).
> 
> Best regards, 
> Adrien 
> 
> [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case
> introduction ;  the distribution of the DB  was one possible revision of
> Nova and according to the cell V2 changes it is probably now deprecated). 
> [2] https://hal.inria.fr/hal-01320235
> 
> - Mail original -
> > De: "Peter Willis" 
> > À: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Envoyé: Mardi 30 Août 2016 11:24:00
> > Objet: Re: [openstack-dev] [all][massively 
> > distributed][architecture]Coordination between actions/WGs
> > 
> > 
> > 
> > Colleagues,
> > 
> > 
> > An interesting discussion, the only question appears to be whether
> > vCPE is a suitable use case as the others do appear to be cloud use
> > cases. Lots of people assume CPE == small residential devices
> > however CPE covers a broad spectrum of appliances. Some of our
> > customers' premises are data centres, some are HQs, some are
> > campuses, some are branches. For residential CPE we use the
> > Broadband Forum's CPE Wide Area Network management protocol
> > (TR-069), which may be easier to modify to handle virtual
> > machines/containers etc. than to get OpenStack to scale to millions
> > of nodes. However that still leaves us with the need to manage a
> > stack of servers in thousands of telephone exchanges, central
> > offices or even cell-sites, running multiple work loads in a
> > distributed fault tolerant manner.
> > 
> > 
> > Best Regards,
> > Peter.
> > 
> > 
> > On Tue, Aug 30, 2016 at 4:48 AM, joehuang < joehu...@huawei.com >
> > wrote:
> > 
> > 
> > Hello, Jay,
> > 
> > > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use
> > > cases
> > 
> > Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so,
> > it's cloud. The introduction slides [1] can help you to learn the
> > use cases quickly, there are lots of material in ETSI website[2].
> > 
> > [1]
> > http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
> > [2]
> > http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing
> > 
> > And when we talk about massively distributed cloud, vCPE is only one
> > of the scenarios( now in argue - ing ), but we can't forget that
> > there are other scenarios like vCDN, vEPC, vIMS, MEC, IoT etc.
> > Architecture level discussion is 

Re: [openstack-dev] [TripleO] FFE request for Ironic composable services

2016-08-30 Thread Steven Hardy
On Tue, Aug 30, 2016 at 10:20:00AM +0200, Dmitry Tantsur wrote:
> Hi all!
> 
> Bare metal provisioning is a hot topic right now. These services are also
> required for Dan's heat-driven undercloud work. The majority of changes have
> landed already, but there are a few changes waiting on puppet-ironic changes.
> 
> The feature is low-impact as it's disabled by default and mostly merged
> anyway. The blueprint is
> https://blueprints.launchpad.net/tripleo/+spec/ironic-integration (it does
> miss a few links, probably I forgot to tag the patches - see below).
> 
> The missing bits are:
> 
> 1. (i)PXE configuration
> 
> puppet-tripleo: https://review.openstack.org/#/c/361109/
> THT: https://review.openstack.org/362148
> blocked by puppet-ironic patch https://review.openstack.org/354125 which
> passes CI and is on review currently.
> 
> 2. Potential problems with networking
> 
> Dan proposed a fix https://review.openstack.org/#/c/361459/
> I'm currently trying to test it locally, then we can merge it.
> 
> 3. Documentation
> 
> The patch is https://review.openstack.org/354016
> I'm keeping it WIP to see which of the above changes actually land.

+1, I think this is fine as a FFE as it's very nearly landed and as you say
is low-risk.

Let's see if we can land these remaining pieces for newton-3 (planned to tag
this tomorrow); if not, then I'm fine with a FFE.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Requirement for bug when reno is present

2016-08-30 Thread Dave Walker
On 30 August 2016 at 11:42, Paul Bourke  wrote:

> Kolla,
>
> Do people feel we still want to require a bug-id in the commit message for
> features, when reno notes are present? My understanding is that until now
> we've required people to add bugs for non-trivial features in order to
> track them as part of releases. Does/should reno supersede this?
>
> -Paul
>

I'm guessing you raised this because of my recent comment on a change you
did... but actually, I agree with you.  I don't think it is a good process,
but standardisation is key.

The issue came about because Kolla wanted to use bugs to track potential
backports to stable/*.  However, I think this is generally overrated, and
the Change-Id is suitable for this purpose.
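For illustration (both the Change-Id and the subject below are made up),
a Gerrit-generated Change-Id footer alone is enough to trace a change:

    Add cleanup option for the bootstrap containers

    Change-Id: I0123456789abcdef0123456789abcdef01234567

A Gerrit query such as change:I0123456789abcdef0123456789abcdef01234567 then
finds the same change on master and on any stable/* branch it was
cherry-picked to, without a Launchpad bug ever being filed.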

I really hate raising bugs "just because", when realistically many of them
are not bugs and contain one-line style "Add this $feature" bug
description.  It just burns through Launchpad bug numbers, and will likely
never be looked at again.

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Requirement for bug when reno is present

2016-08-30 Thread Paul Bourke

Kolla,

Do people feel we still want to require a bug-id in the commit message 
for features, when reno notes are present? My understanding is that until
now we've required people to add bugs for non-trivial features in order
to track them as part of releases. Does/should reno supersede this?


-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][searchlight][senlin][solum][swift][tripleo][watcher][winstackers] tags needed to be considered part of Newton

2016-08-30 Thread Ivan Kolodyazhny
On Mon, Aug 29, 2016 at 10:37 PM, Sean McGinnis 
wrote:

> python-cinderclient this week. It just doesn't have enough activity to
> have needed a release any earlier
>

+1 to Sean on it. All features were merged over the last few weeks, and it
sounds good to get it released along with python-cinderclient.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][kolla] Live migration, VNC / SPICE address?

2016-08-30 Thread Koniszewski, Pawel
Dave,

Thanks for pointing this out; it looks like a regression in nova introduced
during the Newton cycle. At some point we moved the checks of graphic and serial
consoles to the live migration pre-checks on the source node, but we never moved
the piece of code that populates this data while executing the pre-checks on the
destination node.

I proposed a fix; please take a look and see whether it fixes the issue that you
observed: https://review.openstack.org/#/c/362756/. If so, I will update the
appropriate unit tests.
The bug is tracked here https://bugs.launchpad.net/nova/+bug/1618392

Kind Regards,
Pawel Koniszewski

From: Dave Walker [mailto:em...@daviey.com]
Sent: Tuesday, August 30, 2016 10:11 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [nova][kolla] Live migration, VNC / SPICE address?

Hi,

In Kolla, we are having an issue with Nova's VNC / SPICE address and live 
migration.  Currently, we are declaring the IP address for vncserver_listen on 
each node (via ansible).  However, when a live migration is performed, it fails 
due to this address not being available.

The hack is to switch the vncserver_listen to be 0.0.0.0, but this is horribly 
insecure and breaks the network isolation that kolla supports.

Looking at the relevant code, this looks like it should be functional via 
VIR_DOMAIN_XML_MIGRATABLE, but it doesn't seem to be working.

Could someone from Nova help us determine the cause?  We are tracking this as 
bug 1596724

https://github.com/openstack/nova/blob/04cef3b5d03be3db7efab6896de867fc2cbbd03a/nova/virt/libvirt/driver.py#L5393

https://github.com/openstack/nova/blob/04cef3b5d03be3db7efab6896de867fc2cbbd03a/nova/virt/libvirt/driver.py#L5754

Thanks

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Thierry Carrez
lebre.adr...@free.fr wrote:
> [...]
> According to the different replies, I think we should enlarge the discussion 
> and not stay on the vCPE use-case, which is clearly specific and represents 
> only one use-case among the ones we would like to study. For instance we are 
> in touch with NRENs in France and Poland that are interested in deploying up to
> one rack in each of their largest PoPs in order to provide a distributed IaaS
> platform (for further information you can take a look at the presentation
> we gave during the last summit [1] [2]).

+1

I think working on supporting more distributed clouds is worthwhile
because the technology would enable a lot of new use cases. Centering
the discussion on the Telco industry's specific vCPE use case is
unnecessarily limiting...

> [...]
> Regarding 2./,  I mentioned the Architecture WG because we do not want to 
> develop additional software layers like Tricircle or other solutions (at 
> least for the moment). 
> The goal of the WG is to conduct studies and experiments to identify to what 
> extent current mechanisms can satisfy the needs of such massively
> distributed use-cases and what the missing elements are.

Agreed that a bottom-up, incremental improvement strategy sounds more
likely to succeed in an established project like OpenStack (compared to
a big-bang top-bottom re-architecture).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread lebre . adrien
Dear all 

Sorry for my lack of reactivity; I've been out for the last few days.

According to the different replies, I think we should enlarge the discussion 
and not stay on the vCPE use-case, which is clearly specific and represents 
only one use-case among the ones we would like to study. For instance we are in 
touch with NRENs in France and Poland that are interested to deploy up to one 
rack in each of their largest PoP in order to provide a distributed IaaS 
platform  (for further informations you can give a look to the presentation we 
gave during the last summit [1] [2]).

The two questions were: 
1./ Understand whether the fog/edge computing use case is in the scope of the 
Architecture WG and if not, do we need a massively distributed WG? 
2./ How can we coordinate our actions with the ones performed in the 
Architecture WG? 

Regarding 1./, according to the different reactions, I propose to write a first
draft in an etherpad to present the main goal of the Massively Distributed WG
and how people interested in such discussions can interact (I will paste the
link to the etherpad by tomorrow).

Regarding 2./,  I mentioned the Architecture WG because we do not want to 
develop additional software layers like Tricircle or other solutions (at least 
for the moment). 
The goal of the WG is to conduct studies and experiments to identify to what 
extent current mechanisms can satisfy the needs of such massively distributed
use-cases and what the missing elements are.

I don't want to give too many details in the present mail in order to stay as
concise as possible (details will be given in the proposal).

Best regards, 
Adrien 

[1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case
introduction; the distribution of the DB was one possible revision of Nova,
and according to the cells V2 changes it is probably now deprecated).
[2] https://hal.inria.fr/hal-01320235

- Mail original -
> De: "Peter Willis" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Mardi 30 Août 2016 11:24:00
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> 
> 
> Colleagues,
> 
> 
> An interesting discussion, the only question appears to be whether
> vCPE is a suitable use case as the others do appear to be cloud use
> cases. Lots of people assume CPE == small residential devices
> however CPE covers a broad spectrum of appliances. Some of our
> customers' premises are data centres, some are HQs, some are
> campuses, some are branches. For residential CPE we use the
> Broadband Forum's CPE Wide Area Network management protocol
> (TR-069), which may be easier to modify to handle virtual
> machines/containers etc. than to get OpenStack to scale to millions
> of nodes. However that still leaves us with the need to manage a
> stack of servers in thousands of telephone exchanges, central
> offices or even cell-sites, running multiple work loads in a
> distributed fault tolerant manner.
> 
> 
> Best Regards,
> Peter.
> 
> 
> On Tue, Aug 30, 2016 at 4:48 AM, joehuang < joehu...@huawei.com >
> wrote:
> 
> 
> Hello, Jay,
> 
> > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use
> > cases
> 
> Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so,
> it's a cloud. The introduction slides [1] can help you learn the
> use cases quickly; there is lots of material on the ETSI website [2].
> 
> [1]
> http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
> [2]
> http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing
> 
> And when we talk about massively distributed cloud, vCPE is only one
> of the scenarios (now being argued over), but we can't forget that
> there are other scenarios like vCDN, vEPC, vIMS, MEC, IoT etc.
> Architecture level discussion is still necessary to see if current
> design and new proposals can fulfill the demands. If there are lots
> of proposals, it's good to compare the pros and cons, and see in which
> scenarios a proposal works and in which scenarios it doesn't work
> very well.
> 
> (Hope this reply is in the thread :) )
> 
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Jay Pipes [ jaypi...@gmail.com ]
> Sent: 29 August 2016 18:48
> To: openstack-dev@lists.openstack.org
> 
> 
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
> 
> On 08/27/2016 11:16 AM, HU, BIN wrote:
> > The challenge in OpenStack is how to enable the innovation built on
> > top of OpenStack.
> 
> No, that's not the challenge for OpenStack.
> 
> That's like saying the challenge for gasoline is how to enable the
> innovation of a jet engine.
> 
> > So telco use cases is not only the innovation built on top of
> > OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking),
> > 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Peter Willis
Colleagues,

An interesting discussion; the only question appears to be whether vCPE is
a suitable use case, as the others do appear to be cloud use cases. Lots of
people assume CPE == small residential devices; however, CPE covers a broad
spectrum of appliances. Some of our customers' premises are data centres,
some are HQs, some are campuses, some are branches. For residential CPE we
use the Broadband Forum's CPE Wide Area Network management protocol
(TR-069), which may be easier to modify to handle virtual
machines/containers etc. than to get OpenStack to scale to millions of
nodes. However that still leaves us with the need to manage a stack of
servers in thousands of telephone exchanges, central offices or even
cell-sites, running multiple work loads in a distributed fault tolerant
manner.

Best Regards,
Peter.

On Tue, Aug 30, 2016 at 4:48 AM, joehuang  wrote:

> Hello, Jay,
>
> > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases
>
> Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so, it's a
> cloud. The introduction slides [1] can help you learn the use cases
> quickly; there is lots of material on the ETSI website [2].
>
> [1] http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
> [2] http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing
>
> And when we talk about massively distributed cloud, vCPE is only one of
> the scenarios (now being argued over), but we can't forget that there are
> other scenarios like  vCDN, vEPC, vIMS, MEC, IoT etc. Architecture level
> discussion is still necessary to see if current design and new proposals
> can fulfill the demands. If there are lots of proposals, it's good to
> compare the pros and cons, and see in which scenarios a proposal works and
> in which scenarios it doesn't work very well.
>
> (Hope this reply is in the thread :) )
>
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: 29 August 2016 18:48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination
> between actions/WGs
>
> On 08/27/2016 11:16 AM, HU, BIN wrote:
> > The challenge in OpenStack is how to enable the innovation built on top
> of OpenStack.
>
> No, that's not the challenge for OpenStack.
>
> That's like saying the challenge for gasoline is how to enable the
> innovation of a jet engine.
>
> > So telco use cases is not only the innovation built on top of OpenStack.
> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile
> Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in
> OpenStack itself. If OpenStack don't address those basic requirements,
>
> That's the thing, Bin, those are *not* "basic" requirements. The Telco
> vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking
> for fundamental architectural and design changes to the foundational
> components of OpenStack. Instead of Nova being designed to manage a
> bunch of hardware in a relatively close location (i.e. a datacenter or
> multiple datacenters), vCPE is asking for Nova to transform itself into
> a micro-agent that can be run on an Apple Watch and do things in
> resource-constrained environments that it was never built to do.
>
> And, honestly, I have no idea what Gluon is trying to do. Ian sent me
> some information a while ago on it. I read it. I still have no idea what
> Gluon is trying to accomplish other than essentially bypassing Neutron
> entirely. That's not "innovation". That's subterfuge.
>
> > the innovation will never happen on top of OpenStack.
>
> Sure it will. AT&T and BT and other Telcos just need to write their own
> software that runs their proprietary vCPE software distribution
> mechanism, that's all. The OpenStack community shouldn't be relied upon
> to create software that isn't applicable to general cloud computing and
> cloud management platforms.
>
> > An example is - self-driving car is built on top of many technologies,
> such as sensor/camera, AI, maps, middleware etc. All innovations in each
> technology (sensor/camera, AI, map, etc.) bring together the innovation of
> self-driving car.
>
> Yes, indeed, but the people who created the self-driving car software
> didn't ask the people who created the cameras to write the software for
> them that does the self-driving.
>
> > WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built
> on top of OpenStack.
>
> You are defining "innovation" in an odd way, IMHO. "Innovation" for the
> vCPE use case sounds a whole lot like "rearchitect your entire software
> stack so that we don't have to write much code that runs on set-top boxes."
>
> Just being honest,
> -jay
>
> > Thanks
> > Bin
> > -Original Message-
> > From: Edward Leafe [mailto:e...@leafe.com]
> > Sent: Saturday, August 27, 2016 10:49 AM
> > To: OpenStack 

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-30 Thread Flavio Percoco

On 25/08/16 13:13 -0400, Steve Martinelli wrote:

The keystone team is pursuing a trigger-based approach to support rolling,
zero-downtime upgrades. The proposed operator experience is documented here:

 http://docs.openstack.org/developer/keystone/upgrading.html

This differs from Nova and Neutron's approaches to solve for rolling
upgrades (which use oslo.versionedobjects), however Keystone is one of the
few services that doesn't need to manage communication between multiple
releases of multiple service components talking over the message bus (which
is the original use case for oslo.versionedobjects, and for which it is
aptly suited). Keystone simply scales horizontally and every node talks
directly to the database.

Database triggers are obviously a new challenge for developers to write,
honestly challenging to debug (being side effects), and are made even more
difficult by having to hand-write triggers for MySQL, PostgreSQL, and
SQLite independently (SQLAlchemy offers no assistance in this case), as
seen in this patch:

 https://review.openstack.org/#/c/355618/
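
For readers who haven't opened the review: below is a minimal sketch of what
hand-written, per-dialect trigger DDL looks like from a sqlalchemy-migrate
migration. The table and column names ('account', 'extra') are made up for
illustration; this is not keystone's actual schema or the code under review.

    # sketch only: per-dialect trigger DDL issued from a migration;
    # 'account'/'extra' are hypothetical names, not keystone's real schema
    def upgrade(migrate_engine):
        if migrate_engine.name == 'mysql':
            migrate_engine.execute(
                "CREATE TRIGGER account_insert_trg BEFORE INSERT ON account "
                "FOR EACH ROW SET NEW.extra = IFNULL(NEW.extra, '{}');")
        elif migrate_engine.name == 'postgresql':
            migrate_engine.execute("""
                CREATE OR REPLACE FUNCTION account_insert_default()
                RETURNS trigger AS $$
                BEGIN
                    NEW.extra := COALESCE(NEW.extra, '{}');
                    RETURN NEW;
                END $$ LANGUAGE plpgsql;
                CREATE TRIGGER account_insert_trg BEFORE INSERT ON account
                FOR EACH ROW EXECUTE PROCEDURE account_insert_default();""")
        # sqlite needs yet a third variant -- exactly the per-backend
        # duplication being discussed here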

However, implementing an application-layer solution with
oslo.versionedobjects is not an easy task either; refer to Neutron's
implementation:


https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db

Our primary concern at this point is how to effectively test the triggers
we write against our supported database systems, and their various
deployment variations. We might be able to easily drop SQLite support (as
it's only supported for our own test suite), but should we expect variation
in support and/or actual behavior of triggers across the MySQLs, MariaDBs,
Perconas, etc, of the world that would make it necessary to test each of
them independently? If you have operational experience working with
triggers at scale: are there landmines that we need to be aware of? What is
it going to take for us to say we support *zero* downtime upgrades with
confidence?


Hey Steve, Dolph,

Thanks for sending this out. There have been some discussions in the Glance
community about how we can implement rolling upgrades, and it seems like Glance's
case is very similar to keystone's.
case is very similar to keystone's.

I'll make sure folks in the glance community are aware of this thread and reach
out.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-30 Thread Zhenyu Zheng
Dear all,

Thanks all for the replies. I have read the etherpad notes, and there seems to
be no working BP/SPEC right now.
So I have updated a BP/SPEC that my colleague put up for Mitaka, with a
microversion implementation, for the Ocata release:
BP:
https://blueprints.launchpad.net/nova/+spec/add-volume-type-when-boot-instances
SPEC: https://review.openstack.org/#/c/362698/

I'm aiming to implement this useful feature for the O release :-)
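
For illustration only, the resulting boot request might look roughly like the
line below, where volume_type is the proposed new --block-device key from the
spec (hypothetical syntax, not merged):

    nova boot --flavor m1.small \
      --block-device source=image,id=<image-uuid>,dest=volume,size=10,volume_type=ssd,bootindex=0 \
      my-server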

Thanks,

Kevin Zheng

On Tue, Aug 30, 2016 at 3:35 AM, Sean McGinnis 
wrote:

> On Mon, Aug 29, 2016 at 09:29:57AM -0400, Andrew Laski wrote:
> >
> >
> >
> > On Mon, Aug 29, 2016, at 09:06 AM, Jordan Pittier wrote:
> > >
> > >
> > > On Mon, Aug 29, 2016 at 8:50 AM, Zhenyu Zheng
> > >  wrote:
> > >> Hi, all
> > >>
> > >> Currently we have customer demand for adding a parameter
> > >> "volume_type" to --block-device, to support specifying the
> > >> storage backend when booting an instance. And I found a newly drafted
> > >> Blueprint that aims to address the same feature:
> > >> https://blueprints.launchpad.net/nova/+spec/support-boot-
> instance-set-store-type
> > >> ;
> > >>
> > >> As I know, this is kind of a "proxy" feature for cinder and we don't
> > >> like it in general, but since the boot-from-volume functionality is
> > >> already there, maybe it is OK to support another parameter?
> > >>
> > >> So, my question is that what are your opinions about this in general?
> > >> Do you like it or it will not be able to got approved at all?
> > >>
> > >> Thanks,
> > >>
> > >> Kevin Zheng
> > >
> > > Hi,
> > > I think it's not a great idea. Not only for the reason you mention,
> > > but also because the "nova boot" command is already way too complicated
> > > with way too many options. IMO we should only add support for new
> > > features, not "features" we can have by other means, just for
> > > convenience.
> >
> > I completely agree with this. However I have some memory of us
> > saying (in Austin?) that adding volume_type would be acceptable since
> > it's a clear oversight in the list of parameters for specifying a block
> > device. So while I greatly dislike Nova creating volumes and would
> > rather users pass in pre-created volume ids I would support adding this
> > parameter. I do not support continuing to add parameters if Cinder adds
> > parameters though.
> >
>
> FWIW, I get asked the question on the Cinder side of how to specify
> which volume type to use when booting from a Cinder volume on a fairly
> regular basis.
>
> I agree with the approach of not adding more proxy functionality in
> Nova, but since this is an existing feature that is missing expected
> functionality, I would like to see this get in.
>
> Just my $0.02.
>
> Sean
>
> >
> > >
> > >
> > > -
> > > 
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-
> > > requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE request for Manila CephFS Native backend integration

2016-08-30 Thread Erno Kuvaja
On Tue, Aug 30, 2016 at 9:09 AM, Steven Hardy  wrote:
> On Tue, Aug 30, 2016 at 06:32:16AM +0100, Erno Kuvaja wrote:
>> On Fri, Aug 19, 2016 at 9:53 AM, Erno Kuvaja  wrote:
>> > Hi all,
>> >
>> > I'm still working on getting all the pieces together for the Manila CephFS
>> > driver integration, and realizing that we have about a week of busy
>> > gating left until FF and the changes are not reviewed yet, I'd like to
>> > ask the community to consider the feature for a Feature Freeze Exception.
>> >
>> > I'm confident that I will get all the bits together over the next week or
>> > so, but I'm far from confident that we will have them merged in time.
>> > I would still like to see this feature make Newton.
>> >
>> > Best,
>> > Erno (jokke) Kuvaja
>>
>> The last commit for this feature is in review [0], pending the
>> decision on how (and whether) we split these backends in THT.
>>
>> [0] https://review.openstack.org/#/c/358525/
>
> I'm fine with a FFE for this since it's low risk and disabled by default.
>
> Can you please create a blueprint in launchpad if there's not one already,
> as it seems like this work has been done without any associated BP, and
> this makes it hard to track around release time.
>
> Thanks,
>
> Steve
>

Sure Steve,

I get it done today.

Thanks,
Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-30 Thread Chris Dent

On Mon, 29 Aug 2016, Matt Riedemann wrote:

2. Chris is going to cleanup the devstack change that adds the placement 
service:


https://review.openstack.org/#/c/342362/

The main issue is there isn't a separate placement database, at least not by 
default, so Chris has to take that into account. In Newton, by default, the 
Nova API DB will be used for the placement service. You can optionally 
configure a separate placement database with the API DB schema, but we're not 
going to test with that as the default in devstack in Newton since that's 
most likely not what deployers would be doing in Newton as the placement 
service is still optional.


A new version of this was just pushed up, making it default to the API
db. For reference, I do simple testing of it using:

https://gist.github.com/cdent/a9590764fbc7402d450fa36df14f35e0

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Creating VM error: Insufficient compute resources

2016-08-30 Thread Stephen Finucane
On Thu, 2016-08-25 at 14:11 +0400, Fawaz Mohammed wrote:
> Have you enabled hugepages at the host level?
> Do you have enough vm.nr_hugepages?
> As per your requirements, you need a host with 512 hugepages (1 GB of RAM in
> 2 MB pages). Check your host's /etc/sysctl.conf file and see the
> vm.nr_hugepages value.

As a more immediate test, you can unset the 'hw:mem_page_size'
parameter and see if it boots. If it does, it's the hugepages as
suggested above.
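
As a quick way to see whether the pages are actually there: the flavor above
asks for 1024 MB backed by 2 MB pages, i.e. 512 free hugepages on a single
NUMA node. A small sketch using the standard sysfs layout (assuming 2M pages):

    # prints the free 2M hugepages per NUMA node; the flavor above needs
    # 512 of them available on one node
    import glob

    paths = glob.glob('/sys/devices/system/node/node*/'
                      'hugepages/hugepages-2048kB/free_hugepages')
    for path in sorted(paths):
        node = path.split('/')[5]
        print(node, open(path).read().strip())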

Stephen

> On Aug 25, 2016 1:15 PM, "zhi"  wrote:
> > hi, all
> > 
> >     I plan to create a VM with huge pages. And I created a new flavor
> > like this:
> > 
> > $ nova flavor-show ed8dccd2-adbe-44ee-9e4f-391d045d3653
> > +----------------------------+------------------------------------------+
> > | Property                   | Value                                    |
> > +----------------------------+------------------------------------------+
> > | OS-FLV-DISABLED:disabled   | False                                    |
> > | OS-FLV-EXT-DATA:ephemeral  | 0                                        |
> > | disk                       | 30                                       |
> > | extra_specs                | {"aggregate_instance_extra_specs:pinned": "true", "hw:cpu_policy": "dedicated", "hw:mem_page_size": "2048"} |
> > | id                         | ed8dccd2-adbe-44ee-9e4f-391d045d3653     |
> > | name                       | m1.vm_2                                  |
> > | os-flavor-access:is_public | True                                     |
> > | ram                        | 1024                                     |
> > | rxtx_factor                | 1.0                                      |
> > | swap                       |                                          |
> > | vcpus                      | 4                                        |
> > +----------------------------+------------------------------------------+
> > 
> > Then I create a VM using this flavor and the creation fails. The
> > error message is:
> > "
> > {"message": "Build of instance ada7ac22-1052-44e1-b4a5-c21221dbab87 
> > was re-scheduled: Insufficient compute resources: Requested
> > instance NUMA topology cannot fit the given
> >  host NUMA topology.", "code": 500, "details": "  File
> > \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line
> > 1905, in _do_build_and_run_instance
> > "
> > 
> > And my compute node's NUMA info is:
> > 
> > $ numactl --hardware
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
> > node 0 size: 32543 MB
> > node 0 free: 28307 MB
> > node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39
> > node 1 size: 32768 MB
> > node 1 free: 29970 MB
> > node distances:
> > node   0   1
> >   0:  10  21
> >   1:  21  10
> > 
> > Qemu version is "QEMU emulator version 2.1.2 (qemu-kvm-ev-2.1.2-
> > 23.el7.1)". And libvirtd version is "1.2.17". 
> > 
> > 
> > Has anyone met the same error as me?
> > 
> > 
> > 
> > B.R.
> > 
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] FFE request for Ironic composable services

2016-08-30 Thread Dmitry Tantsur

Hi all!

Bare metal provisioning is a hot topic right now. These services are 
also required for Dan's heat-driven undercloud work. The majority of 
changes have landed already, but there are a few changes waiting on
puppet-ironic changes.


The feature is low-impact as it's disabled by default and mostly merged 
anyway. The blueprint is 
https://blueprints.launchpad.net/tripleo/+spec/ironic-integration (it
is missing a few links; I probably forgot to tag the patches - see below).


The missing bits are:

1. (i)PXE configuration

puppet-tripleo: https://review.openstack.org/#/c/361109/
THT: https://review.openstack.org/362148
blocked by the puppet-ironic patch https://review.openstack.org/354125, which
passes CI and is currently under review.


2. Potential problems with networking

Dan proposed a fix https://review.openstack.org/#/c/361459/
I'm currently trying to test it locally, then we can merge it.

3. Documentation

The patch is https://review.openstack.org/354016
I'm keeping it WIP to see which of the above changes actually land.

Thanks for considering it!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Steven Hardy
On Mon, Aug 29, 2016 at 04:03:27PM +0200, Thomas Herve wrote:
> On Mon, Aug 29, 2016 at 3:16 PM, Steven Hardy  wrote:
> > On Mon, Aug 29, 2016 at 07:07:09AM +0200, Thomas Herve wrote:
> >> dict($.groupBy($.keys().toList()[0], $.values().toList()[0][0]))
> >>
> >> ought to work, I believe?
> >
> > So, as it turns out, my example above was bad, and groupBy only works if
> > you have a list of maps with exactly one key; we actually need this:
> >
> >   # Example of tripleo format
> >   # We need an output of
> >   # "gnocchi_metricd_node_names": ["overcloud-controller-0"]
> >   # "tripleo_packages_node_names": ["overcloud-controller-0", 
> > "overcloud-compute-0"]
> >   # "nova_compute_node_names": ["overcloud-compute-0"]
> >   debug_tripleo:
> > value:
> >   yaql:
> > expression: dict($.data.l.groupBy($.keys().toList()[0], 
> > $.values().toList()[0][0]))
> > data:
> >   l:
> > - "gnocchi_metricd_node_names": ["overcloud-controller-0"]
> >   "tripleo_packages_node_names": ["overcloud-controller-0"]
> > - "nova_compute_node_names": ["overcloud-compute-0"]
> >   "tripleo_packages_node_names": ["overcloud-compute-0"]
> >
> > So, I'm back to wondering how we make the intermediate assignment of
> > tripleo_packages_node_names
> 
> Well I didn't know all the constraints :).

Heh, thanks for the help, I appreciate it!

> $.selectMany($.items()).groupBy($[0], $[1][0])
> 
> is another attempt. It won't work if you have more than one value per
> key in the original data, but I think it will handle multiple keys.

Yeah, that gets us closer, but we do need to handle more than one value
(list entry) per key, e.g:

 data:
   l:
 - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
   "tripleo_packages_node_names": ["a0", "a1", "a2"]
 - "nova_compute_node_names": ["b0"]
   "tripleo_packages_node_names": ["b0"]

Output needs to be like:

 "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
 "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
 "nova_compute_node_names": ["b0"]

I'm almost tempted to just write a map_deep_merge patch for heat, but I'm
guessing we won't be able to land it for newton at this point anyway.
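
For reference, the merge semantics wanted here are easy to state in plain
Python; this is only a sketch of what a hypothetical map_deep_merge function
would have to do, not an existing heat or yaql API:

    # sketch: merge a list of {key: [values]} maps, concatenating the
    # lists per key while preserving order and dropping duplicates
    def deep_merge_node_names(maps):
        merged = {}
        for m in maps:
            for key, values in m.items():
                bucket = merged.setdefault(key, [])
                bucket.extend(v for v in values if v not in bucket)
        return merged

    data = [
        {"gnocchi_metricd_node_names": ["a0", "a1", "a2"],
         "tripleo_packages_node_names": ["a0", "a1", "a2"]},
        {"nova_compute_node_names": ["b0"],
         "tripleo_packages_node_names": ["b0"]},
    ]
    # -> tripleo_packages_node_names: ["a0", "a1", "a2", "b0"], etc.
    print(deep_merge_node_names(data))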

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Requesting FFE for improved Swift deployments

2016-08-30 Thread Steven Hardy
On Mon, Aug 29, 2016 at 09:32:09PM +0200, Christian Schwede wrote:
> Hello,
> 
> kindly asking for a FFE for a required setting to improve Swift-based
> TripleO deployments:
> 
> https://review.openstack.org/#/c/358643/

This looks like it's associated with a bug, and if it's a bugfix then no
FFE is required.

The only question is do we target this bug at rc1?

https://bugs.launchpad.net/tripleo/+bug/1609421

Or would it be better to raise another one specific to that fix (as the bug
above appears to reference a number of interrelated issues)?

> This is required to land the last patch in a series of TripleO-doc patches:
> 
> https://review.openstack.org/#/c/293311/
> https://review.openstack.org/#/c/360353/
> https://review.openstack.org/#/c/361032/
> 
> Current idea is to automate the described manual actions for Ocata.
> There was some discussion on the ML as well:

+1, landing the docs improvements for newton then automating in ocata
sounds good to me.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][kolla] Live migration, VNC / SPICE address?

2016-08-30 Thread Dave Walker
Hi,

In Kolla, we are having an issue with Nova's VNC / SPICE address and live
migration.  Currently, we are declaring the IP address for vncserver_listen
on each node (via ansible).  However, when a live migration is performed,
it fails due to this address not being available.

The hack is to switch the vncserver_listen to be 0.0.0.0, but this is
horribly insecure and breaks the network isolation that kolla supports.

Looking at the relevant code, this looks like it should be functional
via VIR_DOMAIN_XML_MIGRATABLE, but it doesn't seem to be working.
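
For context, the desired end state is roughly the sketch below: each node
keeps its own management IP rather than 0.0.0.0, and live migration rewrites
the <graphics> listen address for the destination via the
VIR_DOMAIN_XML_MIGRATABLE flag. The addresses are made up; the option names
are the Newton-era [vnc] section of nova.conf:

    [vnc]
    enabled = true
    # this node's management IP (example value), instead of 0.0.0.0
    vncserver_listen = 10.0.0.11
    vncserver_proxyclient_address = 10.0.0.11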

Could someone from Nova help us determine the cause?  We are tracking this
as bug 1596724

https://github.com/openstack/nova/blob/04cef3b5d03be3db7efab6896de867fc2cbbd03a/nova/virt/libvirt/driver.py#L5393

https://github.com/openstack/nova/blob/04cef3b5d03be3db7efab6896de867fc2cbbd03a/nova/virt/libvirt/driver.py#L5754

Thanks

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE request for Manila CephFS Native backend integration

2016-08-30 Thread Steven Hardy
On Tue, Aug 30, 2016 at 06:32:16AM +0100, Erno Kuvaja wrote:
> On Fri, Aug 19, 2016 at 9:53 AM, Erno Kuvaja  wrote:
> > Hi all,
> >
> > I'm still working on getting all the pieces together for the Manila CephFS
> > driver integration, and realizing that we have about a week of busy
> > gating left until FF and the changes are not reviewed yet, I'd like to
> > ask the community to consider the feature for a Feature Freeze Exception.
> >
> > I'm confident that I will get all the bits together over the next week or
> > so, but I'm far from confident that we will have them merged in time.
> > I would still like to see this feature make Newton.
> >
> > Best,
> > Erno (jokke) Kuvaja
> 
> The last commit for this feature is in review [0], pending the
> decision on how (and whether) we split these backends in THT.
> 
> [0] https://review.openstack.org/#/c/358525/

I'm fine with a FFE for this since it's low risk and disabled by default.

Can you please create a blueprint in launchpad if there's not one already,
as it seems like this work has been done without any associated BP, and
this makes it hard to track around release time.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-30 Thread Rosensweig, Elisha (Nokia - IL)
Yes. Please add it to the file

/workspace/dev/vitrage/vitrage/tests/resources/mock_configurations/driver/driver_switch_snapshot_dynamic.json

under the "relationships" section, just like in your commit.

If you need help understanding how to work with the mock_sync, let me know.

Elisha

From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Tuesday, August 30, 2016 9:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

I added a new key 'is_source' to the static physical configuration [1], and the
test currently fails.

Not sure we need to update mock_sync or not.

[1] 
https://review.openstack.org/#/c/362525/1/vitrage/tests/resources/static_datasources/switch_to_host_datasource.yaml

On Tue, Aug 30, 2016 at 2:53 PM Rosensweig, Elisha (Nokia - IL) 
> wrote:
What is the problem you are running into with mock_sync?
Elisha

From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Tuesday, August 30, 2016 5:09 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

Patch work in progress [1] but local test fails [2].

It seems to be caused by the mock_sync.

I'm still looking into it. Any help would be appreciated.

[1] https://review.openstack.org/#/c/362525
[2] http://pastebin.com/iepqxUAP


On Mon, Aug 29, 2016 at 4:59 PM Yujun Zhang 
> wrote:
Thanks, Alexey. Point 1 and 3 are pretty clear.

As for point 2, if I understand it correctly, you are suggesting to modify the 
static_physical.yaml as following

entities:
  - type: switch
    name: switch-1
    id: switch-1 # should be same as name
    state: available
    relationships:
      - type: nova.host
        name: host-1
        id: host-1 # should be same as name
        is_source: true # entity is `source` in this relationship
        relation_type: attached
      - type: switch
        name: switch-2
        id: switch-2 # should be same as name
        is_source: false # entity is `target` in this relationship
        relation_type: backup
But I wonder why the static physical configuration file uses a different format
from the vitrage template definitions [1].

[1] 
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst

On Sun, Aug 28, 2016 at 4:14 PM Weyl, Alexey (Nokia - IL) 
> wrote:
Hi Yujun,

In order for the static_physical to work for different datasources without the 
restrictions, you need to do the following changes:
Go to the static_physical transformer:

1.   Remove the methods: _register_relations_direction, 
_find_relation_direction_source.

2.   Add to the static_physical.yaml, for each definition, a direction field
that indicates the source and the destination between the
datasources.

3.   In the _create_neighbor method, remove the usage of
_find_relation_direction_source, and use the new definition from the yaml file
to decide the edge direction.

Is it ok?
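
A rough sketch of the idea in step 3, purely illustrative and not the actual
vitrage transformer code: the per-relationship flag declared in the yaml
replaces the hard-coded direction lookup.

    # hypothetical helper: pick the edge endpoints from the yaml-declared
    # 'is_source' flag instead of _find_relation_direction_source()
    def edge_endpoints(entity_id, neighbor_id, relationship):
        if relationship.get('is_source', True):
            return entity_id, neighbor_id  # edge points entity -> neighbor
        return neighbor_id, entity_id      # edge points neighbor -> entity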


From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Friday, August 26, 2016 4:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

Lost in the code... It seems the datasource just constructs the entities and sends
them over the event bus to the entity graph processor. I need to dig further to find
out the exact point where the "backup" relationship is filtered.

I think we should somehow keep the validation of the relationship type. It is so
easy to make a typo when creating the template manually (I did this quite
often...).

My idea is to delegate the validation to the datasource instead of enumerating all
constants in the evaluator. I think this will introduce better extensibility.
Any comments?

On Thu, Aug 25, 2016 at 1:32 PM Weyl, Alexey (Nokia - IL) 
> wrote:
Hi Yujun,

You can find the names of the labels in the constants.py file.

In addition, the restriction on the physical_static datasource is done in its
driver.py.

Alexey

From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Thursday, August 25, 2016 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

Hi, Ifat,

I searched for edge_labels in the project. It seems it is validated only in 
`vitrage/evaluator/template_validation/template_syntax_validator.py`. Where is 
such a restriction applied in static_datasources?

--
Yujun

On Wed, Aug 24, 2016 at 3:19 PM Afek, Ifat (Nokia - IL) 
> wrote:
Hi Yujun,


[openstack-dev] [vitrage] inspecting external openstack environment

2016-08-30 Thread Yujun Zhang
My purpose is to inspect an **existing** openstack environment with vitrage.

Do I have to install vitrage on the target environment, or can it be done by
proper configuration?

--
Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-30 Thread Yujun Zhang
I added a new key 'is_source' to the static physical configuration [1], and the
test currently fails.

Not sure we need to update mock_sync or not.

[1]
https://review.openstack.org/#/c/362525/1/vitrage/tests/resources/static_datasources/switch_to_host_datasource.yaml

On Tue, Aug 30, 2016 at 2:53 PM Rosensweig, Elisha (Nokia - IL) <
elisha.rosensw...@nokia.com> wrote:

> What is the problem you are running into with mock_sync?
>
> Elisha
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Tuesday, August 30, 2016 5:09 AM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
> static_datasources
>
>
>
> Patch work in progress [1] but local test fails [2].
>
>
>
> It seems to be caused by the mock_sync.
>
>
>
> I'm still looking into it. Any help would be appreciated.
>
>
>
> [1] https://review.openstack.org/#/c/362525
>
> [2] http://pastebin.com/iepqxUAP
>
>
>
>
>
> On Mon, Aug 29, 2016 at 4:59 PM Yujun Zhang 
> wrote:
>
> Thanks, Alexey. Point 1 and 3 are pretty clear.
>
>
>
> As for point 2, if I understand it correctly, you are suggesting to modify
> the static_physical.yaml as following
>
> entities:
>   - type: switch
>     name: switch-1
>     id: switch-1 # should be same as name
>     state: available
>     relationships:
>       - type: nova.host
>         name: host-1
>         id: host-1 # should be same as name
>         is_source: true # entity is `source` in this relationship
>         relation_type: attached
>       - type: switch
>         name: switch-2
>         id: switch-2 # should be same as name
>         is_source: false # entity is `target` in this relationship
>         relation_type: backup
>
> But I wonder why the static physical configuration file uses a different
> format from the vitrage template definitions [1].
>
>
>
> [1]
> https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst
>
>
>
> On Sun, Aug 28, 2016 at 4:14 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
>
> Hi Yujun,
>
>
>
> In order for the static_physical to work for different datasources without
> the restrictions, you need to do the following changes:
>
> Go to the static_physical transformer:
>
> 1.   Remove the methods: _register_relations_direction,
> _find_relation_direction_source.
>
> 2.   Add to the static_physical.yaml for each definition also a field
> for direction which will indicate the source and the destination between
> the datasources.
>
> 3.   In method: _create_neighbor, remove the usage of method
> _find_relation_direction_source, and use the new definition from the yaml
> file here to decide the edge direction.
>
>
>
> Is it ok?
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Friday, August 26, 2016 4:22 AM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
> static_datasources
>
>
>
> Lost in the code... It seems the datasource just constructs the entities and
> sends them over the event bus to the entity graph processor. I need to dig
> further to find out the exact point where the "backup" relationship is filtered.
>
>
>
> I think we should somehow keep the validation of the relationship type. It is
> so easy to make a typo when creating the template manually (I did this quite
> often...).
>
>
>
> My idea is to delegate the validation to the datasource instead of enumerating
> all constants in the evaluator. I think this will introduce better
> extensibility. Any comments?
>
>
>
> On Thu, Aug 25, 2016 at 1:32 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
>
> Hi Yujun,
>
>
>
> You can find the names of the labels in the constants.py file.
>
>
>
> In addition, the restriction on the physical_static datasource is done in
> its driver.py.
>
>
>
> Alexey
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Thursday, August 25, 2016 4:50 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
> static_datasources
>
>
>
> Hi, Ifat,
>
>
>
> I searched for edge_labels in the project. It seems it is validated only
> in `vitrage/evaluator/template_validation/template_syntax_validator.py`.
> Where is such a restriction applied in static_datasources?
>
>
>
> --
>
> Yujun
>
>
>
> On Wed, Aug 24, 2016 at 3:19 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
> Hi Yujun,
>
>
>
> Indeed, we have some restrictions on the relationship types that can be
> used in the static datasources. I think we should remove these
> restrictions, and allow any kind of relationship type.
>
>
>
> Best regards,
>
> Ifat.
>
>
>
> *From: *Yujun Zhang
> *Date: *Monday, 22 August 2016 at 08:37
>
> I'm following the sample configuration in docs [1] to verify how static
> datasources work.
>
>
>
> It seems `backup` relationship is not displayed in 

Re: [openstack-dev] [nova] no nova bugs team meeting today

2016-08-30 Thread Markus Zoeller
Yeah, no meeting again, same reason :/

-- 
Regards, Markus Zoeller (markus_z)


On 23.08.2016 14:21, Markus Zoeller wrote:
> I got dragged into some internal stuff recently and didn't have time to
> prepare anything or host the meeting.
> 
> One noteworthy thing though, please tag any bug report, which
> potentially blocks the RC in a few weeks, with "newton-rc-potential":
> 
> https://bugs.launchpad.net/nova/+bugs?field.tag=newton-rc-potential
> 
> There's less than 3 weeks left until RC1 target week starts.
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-30 Thread Rosensweig, Elisha (Nokia - IL)
What is the problem you are running into with mock_sync?
Elisha

From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Tuesday, August 30, 2016 5:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

Patch work in progress [1] but local test fails [2].

It seems to be caused by the mock_sync.

I'm still looking into it. Any help would be appreciated.

[1] https://review.openstack.org/#/c/362525
[2] http://pastebin.com/iepqxUAP


On Mon, Aug 29, 2016 at 4:59 PM Yujun Zhang 
> wrote:
Thanks, Alexey. Point 1 and 3 are pretty clear.

As for point 2, if I understand it correctly, you are suggesting to modify the 
static_physical.yaml as following

entities:
  - type: switch
    name: switch-1
    id: switch-1 # should be same as name
    state: available
    relationships:
      - type: nova.host
        name: host-1
        id: host-1 # should be same as name
        is_source: true # entity is `source` in this relationship
        relation_type: attached
      - type: switch
        name: switch-2
        id: switch-2 # should be same as name
        is_source: false # entity is `target` in this relationship
        relation_type: backup
But I wonder why the static physical configuration file uses a different format
from the vitrage template definitions [1].

[1] 
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst

On Sun, Aug 28, 2016 at 4:14 PM Weyl, Alexey (Nokia - IL) 
> wrote:
Hi Yujun,

In order for the static_physical to work for different datasources without the 
restrictions, you need to do the following changes:
Go to the static_physical transformer:

1.   Remove the methods: _register_relations_direction, 
_find_relation_direction_source.

2.   Add to the static_physical.yaml for each definition also a field for 
direction which will indicate the source and the destination between the 
datasources.

3.   In method: _create_neighbor, remove the usage of method 
_find_relation_direction_source, and use the new definition from the yaml file 
here to decide the edge direction.

Is it ok?


From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Friday, August 26, 2016 4:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

Lost in the code... It seems the datasource just constructs the entities and sends
them over the event bus to the entity graph processor. I need to dig further to find
out the exact point where the "backup" relationship is filtered.

I think we should somehow keep the validation of the relationship type. It is so
easy to make a typo when creating the template manually (I did this quite
often...).

My idea is to delegate the validation to the datasource instead of enumerating all
constants in the evaluator. I think this will introduce better extensibility.
Any comments?

On Thu, Aug 25, 2016 at 1:32 PM Weyl, Alexey (Nokia - IL) 
> wrote:
Hi Yujun,

You can find the names of the labels in the constants.py file.

In addition, the restriction on the physical_static datasource is done in its
driver.py.

Alexey

From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Thursday, August 25, 2016 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] relationship_type in static_datasources

Hi, Ifat,

I searched for edge_labels in the project. It seems it is validated only in 
`vitrage/evaluator/template_validation/template_syntax_validator.py`. Where is 
such a restriction applied in static_datasources?

--
Yujun

On Wed, Aug 24, 2016 at 3:19 PM Afek, Ifat (Nokia - IL) 
> wrote:
Hi Yujun,

Indeed, we have some restrictions on the relationship types that can be used in 
the static datasources. I think we should remove these restrictions, and allow 
any kind of relationship type.

Best regards,
Ifat.

From: Yujun Zhang
Date: Monday, 22 August 2016 at 08:37
I'm following the sample configuration in the docs [1] to verify how static
datasources work.

It seems the `backup` relationship is not displayed in the entity graph view,
nor is it included in the topology show.

There is an enumeration for edge labels [2]. Should relationships in static
datasources be limited to it?

[1] 
https://github.com/openstack/vitrage/blob/master/doc/source/static-physical-config.rst
[2] 
https://github.com/openstack/vitrage/blob/master/vitrage/common/constants.py#L49
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: