Re: [openstack-dev] [neutron] change in argument type for allocate_partially_specified_segment

2017-01-25 Thread Anna Taraday
OK, this makes sense. Thanks for the clarification!

On Wed, Jan 25, 2017 at 10:50 PM Ihar Hrachyshka 
wrote:

> On Tue, Jan 24, 2017 at 10:29 PM, Anna Taraday
>  wrote:
> > Thanks for bringing this up!
> >
> > I was assuming that from Ocata on, everyone should switch from using the
> > 'old' TunnelTypeDriver to the updated one.
>
> I am not sure. We haven't marked the 'old' one with any deprecation
> warnings, have we? For Ocata at least, both classes will be available
> for use. In Pike, we can look at cleaning up the old class (either
> through deprecation warning and removal in Q; or using
> NeutronLibImpact process).
>
> >
> > Reverting both back to session means reverting the whole refactor; this
> > is not in line with the enginefacade work, and as I remember some of the
> > OVO patches are waiting on this refactor too.
> >
> > I think we can either duplicate the methods, or check the type of the
> > argument (session or context) and proceed differently. I will push a
> > patch for this ASAP.
> >
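A minimal sketch of the type-check approach described above (illustrative
only; the real import path, signature and helper names in the patch may
differ):

    from neutron_lib import context as n_context

    def allocate_partially_specified_segment(self, session_or_ctx, **filters):
        # Accept either the old-style session argument or the new-style
        # context, and normalize to a session before proceeding.
        if isinstance(session_or_ctx, n_context.Context):
            session = session_or_ctx.session
        else:
            session = session_or_ctx
        # ... continue with the existing allocation logic using `session`
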
>
> I reviewed the patch, I think it's a good path forward, thanks.
>
> Ihar
>
-- 
Regards,
Ann Taraday
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-25 Thread Michael Still
I think #3 is the right call for now. The person we had working on privsep
has left the company, and I don't have anyone I could get to work on this
right now. Oh, and we're out of time.

Michael

On Thu, Jan 26, 2017 at 3:49 PM, Matt Riedemann  wrote:

> The patch to add support for ephemeral storage with the Virtuozzo config
> is using the privsep helper from os-brick to run a new ploop command as
> root:
>
> https://review.openstack.org/#/c/312488/
>
> I've objected to this because I'm pretty sure this is not how we intended
> to be using privsep in Nova. The privsep helper in os-brick should be for
> privileged commands that os-brick itself needs to run, and was for things
> that used to have to be carried in both nova and cinder rootwrap filters.
>
> I know we also want new things in nova that require root access to run
> commands via privsep, but we haven't had anything do that yet, and we've
> said we'd like an example before making it a hard rule. But we're finding
> it hard to put our foot down on the first one (I remember we allowed
> something in with rootwrap in Newton because we didn't want to block on
> privsep).
>
> With feature freeze coming up tomorrow, however, I'm now torn on how to
> handle this. The options I see are:
>
> 1. Block this until it's properly using privsep in Nova, effectively
> killing its chances to make Ocata.
>
> 2. Allow the patch as-is with how it's re-using the privsep helper from
> os-brick.
>
> 3. Change the patch to just use rootwrap with a new compute.filters entry,
> no privsep at all - basically how we used to always do this stuff.
>
> In the interest of time, and not seeing anyone standing up to lead the
> charge on privsep conversion in Nova in the immediate future, I'm leaning
> toward just doing #3 but wanted to get other opinions.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-25 Thread Matt Riedemann
The patch to add support for ephemeral storage with the Virtuozzo config 
is using the privsep helper from os-brick to run a new ploop command as 
root:


https://review.openstack.org/#/c/312488/

I've objected to this because I'm pretty sure this is not how we 
intended to be using privsep in Nova. The privsep helper in os-brick 
should be for privileged commands that os-brick itself needs to run, and 
was for things that used to have to be carried in both nova and cinder 
rootwrap filters.


I know we also want new things in nova that require root access to run 
commands via privsep, but we haven't had anything do that yet, and we've 
said we'd like an example before making it a hard rule. 
But we're finding it hard to put our foot down on the first one (I 
remember we allowed something in with rootwrap in Newton because we 
didn't want to block on privsep).


With feature freeze coming up tomorrow, however, I'm now torn on how to 
handle this. The options I see are:


1. Block this until it's properly using privsep in Nova, effectively 
killing its chances to make Ocata.


2. Allow the patch as-is with how it's re-using the privsep helper from 
os-brick.


3. Change the patch to just use rootwrap with a new compute.filters 
entry, no privsep at all - basically how we used to always do this stuff.
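For reference, option #3 amounts to adding something like this to
etc/nova/rootwrap.d/compute.filters (the exact filter line for ploop here
is illustrative, not the actual patch):

    [Filters]
    # Allow nova-compute to run the ploop image tool as root
    ploop: CommandFilter, ploop, root
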


In the interest of time, and not seeing anyone standing up to lead the 
charge on privsep conversion in Nova in the immediate future, I'm 
leaning toward just doing #3 but wanted to get other opinions.


--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-25 Thread Matt Riedemann

This is my public hand off to Sylvain for the work done tonight.

Starting with the multinode grenade failure in the nova patch to 
integrate placement with the filter scheduler:


https://review.openstack.org/#/c/417961/

The test_schedule_to_all_nodes tempest test was failing in there because 
that test explicitly forces hosts using AZs to build two instances. 
Because we didn't have nova.conf on the Newton subnode in the multinode 
grenade job configured to talk to placement, there was no resource 
provider for that Newton subnode when we started running smoke tests 
after the upgrade to Ocata, so that test failed since the request to the 
subnode had a NoValidHost (because no resource provider was checking in 
from the Newton node).


Grenade is not topology aware, so it doesn't know anything about the 
subnode. The subnode is stacked via a post-stack hook script that 
devstack-gate writes into the grenade run: after stacking the primary 
Newton node, it uses Ansible to ssh into the subnode and stack Newton 
there too:


https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L629

logs.openstack.org/61/417961/26/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/15545e4/logs/grenade.sh.txt.gz#_2017-01-26_00_26_59_296

And placement was optional in Newton so, you know, problems.
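For reference, the kind of nova.conf stanza the Newton subnode was missing
looks roughly like this (values are illustrative; the real config is
generated by devstack):

    [placement]
    os_region_name = RegionOne
    auth_type = password
    auth_url = http://PRIMARY_NODE_IP/identity
    project_name = service
    username = placement
    password = secret
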

Some options came to mind:

1. Change the test to not be a smoke test which would exclude it from 
running during grenade. QA would barf on this.


2. Hack some kind of pre-upgrade callback from d-g into grenade just for 
configuring placement on the compute subnode. This would probably 
require adding a script to devstack just so d-g has something to call so 
we could keep branch logic out of d-g, like what we did for the 
discover_hosts stuff for cells v2. This is more complicated than what I 
wanted to deal with tonight with limited time on my hands.


3. Change the nova filter scheduler patch to fallback to get all compute 
nodes if there are no resource providers. We've already talked about 
this a few times already in other threads and I consider it a safety net 
we'd like to avoid if all else fails. If we did this, we could 
potentially restrict it to just the forced-host case...


4. Set up the Newton subnode in the grenade run to configure placement, 
which I think we can do from d-g using the features yaml file. That's 
what I opted to go with and the patch is here:


https://review.openstack.org/#/c/425524/

I've made the nova patch dependent on that *and* the other grenade patch 
to install and configure placement on the primary node when upgrading 
from Newton to Ocata.


--

That's where we're at right now. If #4 fails, I think we are stuck with 
adding a workaround for #3 into Ocata and then removing it in Pike when 
we know/expect computes to be running placement (they would be in our 
grenade runs from ocata->pike at least).


--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] API models [was: PTL candidacy]

2017-01-25 Thread Kevin Benton
>This is why it should be easy to do.  The easier it is, the more Neutron
work you'll have time for, in fact.

I agree that writing them should be easy. But I don't want to enable
operators to just tweak config files to alter APIs. So +1000 to developer
productivity. I just don't want it to circumvent sane API reviews.

>You have a team of core devs that can examine those proposed APIs - which
are in a simple DSL - and decide if they should be added to a repository,
which you can look at as an approval step when you decide they're worthy.

This sounds good. What I want to ensure is that step of review between the
experimentation bit and operators using it bit.

>Making it hard to write an API is putting a barrier to entry and
experimentation in the way.  Making it hard to standardise that API - by,
for instance, putting technical requirements on it that it be supportable
and maintainable, generally usable and of no impact to people who don't use
it - that's far more useful.

Agree.

>You refer to it as 'configuration defined' but the API description is not
a configuration language to be changed on a whim by the end user - it's a
DSL, it's code (of a sort), it's a part of the thing you ship.

It's configuration defined in the sense that two people using gluon source
from different YAML configurations can end up with APIs that are completely
different and incompatible. That's problematic from an API user's
perspective because "this operator uses gluon" is sort of a useless
statement for understanding how to interact with the system.


>Again, I agree with this.  But again - one way of making standards is to
have a documented standard and two independent implementations.  We don't
see that today because that would be a huge effort.

Requiring two implementations is definitely something I don't want to
require. I would rather agree upon and accept experimental APIs that only
have one implementation than have a vendor do it on their own. It's usually
pretty easy to recognize when things are getting too vendor-specific even
if you don't have the implementation to test with.

On Wed, Jan 25, 2017 at 7:22 PM, Ian Wells  wrote:

> On 25 January 2017 at 18:07, Kevin Benton  wrote:
>
>> >Setting aside all the above talk about how we might do things for a
>> moment: to take one specific feature example, it actually took several
>> /years/ to add VLAN-aware ports to OpenStack.  This is an example of a
>> feature that doesn't affect or interest the vast majority of the user
>> community
>>
>> Now that it has been standardized on, other services can actually use it
>> to build things (e.g. Kuryr, Octavia). If each vendor just went and wrote
>> up their own extension for this, no project could reliably build anything
>> on top of it that would work with multiple backend vendors.
>>
>
> In the weirdest way, I think we're agreeing.  The point is that the API
> definition is what matters and the API definition is what we have to
> agree.  That means that we should make it easy to develop the API
> definition so that we can standardise it promptly, and it also means that
> we would like to have the opportunity to agree that extensions are
> 'standard' (versus today - write an extension attribute that works for you,
> call the job done).  It certainly doesn't remove that standardisation step,
> which is why I say that governance here is key - choosing wisely the
> things we agree are standards.
>
> To draw an IETF parallel for a moment, it's easy to write and implement a
> networking protocol - you need no help or permission - and easy to submit a
> draft that describes that protocol.  It's considerably harder to turn that
> draft into an accepted standard.
>
> >and perhaps we need a governance model around ensuring that they are sane
>> and reusable.
>> >Gluon, specifically, is about solving exclusively the technical question
>> of how to introduce independent APIs
>>
>> Doesn't that make Gluon support the opposite of having a governance model
>> and standardization for APIs?
>>
>
> No.  If you want an in-house API then off you go, write your own.  If you
> want to experiment in code, off you go, experimentation is good.  This is
> why it should be easy to do.  The easier it is, the more Neutron work
> you'll have time for, in fact.
>
> But I can't unilaterally make something standard by saying it is, and
> that's where the community can and should get involved.  You have a team of
> core devs that can examine those proposed APIs - which are in a simple DSL
> - and decide if they should be added to a repository, which you can look at
> as an approval step when you decide they're worthy.
>
> Making it hard to write an API is putting a barrier to entry and
> experimentation in the way.  Making it hard to standardise that API - by,
> for instance, putting technical requirements on it that it be supportable
> and maintainable, generally usable and of no impact to people who don't use
> 

[openstack-dev] [daisycloud-core] Weekly meeting 0800UTC Jan. 27 & Feb. 3 cancelled

2017-01-25 Thread hu.zhijiang
Hi Team,




Weekly meetings on Jan. 27 & Feb. 3 have been cancelled for Chinese New Year 
vacation.














B. R.,

Zhijiang__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Performance] Gathering quota usage data in Horizon

2017-01-25 Thread Lingxian Kong
Hi, guys,

Sorry for reviving this thread after a year, but we are currently
suffering from a poor performance issue in our public cloud.

As our customers' usage keeps growing, we are at a stage where we should
seriously pay more attention to Horizon's performance problems, so Google
led me to this email after a lot of searching.

Currently, when loading a page that may contain some buttons for
creating/allocating resources (e.g. 'Access & Security'), Horizon first
checks quota usage to see whether a specific button should be disabled,
and the checks happen *in sequence*, which makes things even worse.

What's more, the quota usage query in Horizon is contained in a single
function[1]; it invokes Nova, Cinder and Neutron (perhaps more in future)
APIs to get the usage of a whole bunch of resources, rather than just the
resource the page is rendering, which is another flaw IMHO. I know this
function call is already cached, but most of our customers' pain comes
from the first click.

So, I have a few questions:

1. Does Horizon support some config option that could disable the quota
check? For a public cloud, it doesn't make much sense for usage to be
limited, and we have a monitoring tool that automatically increases quotas
when a customer's usage approaches the quota limit. So, getting rid of
that check would save our customers an appreciable amount of waiting time.

2. Another option is to support getting quota usage for a specific
resource rather than all resources, e.g. when loading the floating IP tab,
Horizon would only get floating IP quota usage from Neutron, which takes
only 2 API calls (a sketch follows after this list).

3. I found this FFE[2], which is great (I also replied), but splitting
tabs is not the end; more effort should be put into performance
improvement.

4. Some other trivial improvements, like this one:
https://review.openstack.org/425494
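
As referenced in option 2 above, a rough sketch of a per-resource check
with python-neutronclient (the Horizon wiring is hypothetical; the two
client calls are the point):

    from neutronclient.v2_0 import client as neutron_client

    def floating_ip_quota_usage(neutron, tenant_id):
        # `neutron` is an instantiated neutron_client.Client.
        limit = neutron.show_quota(tenant_id)['quota']['floatingip']
        fips = neutron.list_floatingips(tenant_id=tenant_id)['floatingips']
        return {'limit': limit, 'used': len(fips)}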

[1]: https://github.com/openstack/horizon/blob/master/openstack_dashboard/usage/quotas.py#L396
[2]:
http://openstack.markmail.org/thread/ra3brm6voo4ouxtx#query:+page:1+mid:oata2tifthnhy5b7+state:results


Cheers,
Lingxian Kong (Larry)

On Wed, Dec 23, 2015 at 9:50 PM, Timur Sufiev  wrote:

> Duncan,
>
> Thank you for the suggestion, will do.
>
> On Wed, 23 Dec 2015 at 10:55, Duncan Thomas 
> wrote:
>
>> On a cloud with a large number of tenants, this is going to involve a
>> large number of API calls. I'd suggest you put a spec into cinder to add an
>> API call for getting the totals straight out of the DB - it should be easy
>> enough to add.
>>
>> On 18 December 2015 at 20:35, Timur Sufiev  wrote:
>>
>>> Matt,
>>>
>>> actually Ivan (Ivan, thanks a lot!) showed me the exact cinderclient
>>> call that I needed. Now I know how to retrieve Cinder quota usage info
>>> per-tenant, seems that to retrieve the same info cloud-wide I should sum up
>>> all the available tenant usages.
>>>
>>> With Cinder quota usages being sorted out, my next goal is Nova and
>>> Neutron. As for Neutron, there are plenty of quota-related calls I'm going
>>> to play with next week, perhaps there is something suitable for my use
>>> case. But as for Nova, I haven't found something similar to 'usage' of
>>> cinderclient call, so help from someone familiar with Nova is very
>>> appreciated :).
>>>
>>> [0] https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/quotas.py#L36
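For reference, a minimal sketch of the cinderclient call referenced in [0]
(client construction details are assumed):

    from cinderclient.v2 import client

    cinder = client.Client(username, api_key, project_id, auth_url)
    quota_set = cinder.quotas.get(tenant_id, usage=True)
    # With usage=True each resource is a dict, e.g.:
    # quota_set.volumes -> {'limit': 10, 'in_use': 3, 'reserved': 0}
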
>>>
>>> On Fri, Dec 18, 2015 at 5:17 PM Matt Riedemann <
>>> mrie...@linux.vnet.ibm.com> wrote:
>>>


 On 12/17/2015 2:40 PM, Ivan Kolodyazhny wrote:
 > Hi Timur,
 >
 > Did you try this Cinder API [1]?  Here [2] is cinderclient output.
 >
 >
 >
 > [1]
 > https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/quotas.py#L33
 > [2] http://paste.openstack.org/show/482225/
 >
 > Regards,
 > Ivan Kolodyazhny,
 > http://blog.e0ne.info/
 >
 > On Thu, Dec 17, 2015 at 8:41 PM, Timur Sufiev wrote:
 >
 > Hello, folks!
 >
 > I'd like to initiate a discussion of the feature request I'm going
 > to make on behalf of Horizon to every core OpenStack service which
 > supports Quota feature, namely Cinder, Nova and Neutron.
 >
 > Although all three services' APIs support special calls to get
 > current quota limitations (Nova and Cinder allow getting and updating
 > both per-tenant and default cloud-wide limitations; Neutron allows
 > it only for per-tenant limitations), there is no special call in any
 > of these services to get current per-tenant quota usage. Because of
 > that, Horizon needs to get, say for the 'volumes' quota, a list of
 > Cinder volumes in the current tenant and then just calculate its
 > length [1]. When there are really a lot of entities in a tenant -
 > 

Re: [openstack-dev] [neutron] API models [was: PTL candidacy]

2017-01-25 Thread Ian Wells
On 25 January 2017 at 18:07, Kevin Benton  wrote:

> >Setting aside all the above talk about how we might do things for a
> moment: to take one specific feature example, it actually took several
> /years/ to add VLAN-aware ports to OpenStack.  This is an example of a
> feature that doesn't affect or interest the vast majority of the user
> community
>
> Now that it has been standardized on, other services can actually use it
> to build things (e.g. Kuryr, Octavia). If each vendor just went and wrote
> up their own extension for this, no project could reliably build anything
> on top of it that would work with multiple backend vendors.
>

In the weirdest way, I think we're agreeing.  The point is that the API
definition is what matters and the API definition is what we have to
agree.  That means that we should make it easy to develop the API
definition so that we can standardise it promptly, and it also means that
we would like to have the opportunity to agree that extensions are
'standard' (versus today - write an extension attribute that works for you,
call the job done).  It certainly doesn't remove that standardisation step,
which is why I say that governance here is key - choosing wisely the
things we agree are standards.

To draw an IETF parallel for a moment, it's easy to write and implement a
networking protocol - you need no help or permission - and easy to submit a
draft that describes that protocol.  It's considerably harder to turn that
draft into an accepted standard.

>and perhaps we need a governance model around ensuring that they are sane
> and reusable.
> >Gluon, specifically, is about solving exclusively the technical question
> of how to introduce independent APIs
>
> Doesn't that make Gluon support the opposite of having a governance model
> and standardization for APIs?
>

No.  If you want an in-house API then off you go, write your own.  If you
want to experiment in code, off you go, experimentation is good.  This is
why it should be easy to do.  The easier it is, the more Neutron work
you'll have time for, in fact.

But I can't unilaterally make something standard by saying it is, and
that's where the community can and should get involved.  You have a team of
core devs that can examine those proposed APIs - which are in a simple DSL
- and decide if they should be added to a repository, which you can look at
as an approval step when you decide they're worthy.

Making it hard to write an API is putting a barrier to entry and
experimentation in the way.  Making it hard to standardise that API - by,
for instance, putting technical requirements on it that it be supportable
and maintainable, generally usable and of no impact to people who don't use
it - that's far more useful.

We can't stop people writing bad APIs.  What we have in place today doesn't
stop that, as the Contrail point made previously shows.  But that has no
relevance to the difficulty of writing APIs, which slows down both good and
bad implementations equally.


> >I do not care if we make Neutron extensible in some different way to
> permit this, if we start a new project or whatever, I just want it to
> happen.
>
> I would like to promote experimentation with APIs with Neutron as well,
> but I don't think it's ever going to be as flexible as the
> configuration-defined behavior Gluon allows.
>

You refer to it as 'configuration defined' but the API description is not a
configuration language to be changed on a whim by the end user - it's a
DSL, it's code (of a sort), it's a part of the thing you ship.


> My goal is getting new features that extend beyond one backend and without
> some community agreement on new APIs, I don't see how that's possible.
>

Again, I agree with this.  But again - one way of making standards is to
have a documented standard and two independent implementations.  We don't
see that today because that would be a huge effort.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-25 Thread Takashi Yamamoto
hi,

On Sat, Jan 14, 2017 at 2:17 AM, Doug Hellmann  wrote:
> Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
>> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
>> > hi,
>> >
>> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
>> >> Hi
>> >>
>> >> As of today, the project neutron-vpnaas is no longer part of the neutron
>> >> governance. This was a decision reached after the project saw a dramatic
>> >> drop in active development over a prolonged period of time.
>> >>
>> >> What does this mean in practice?
>> >>
>> >> From a visibility point of view, release notes and documentation will no
>> >> longer appear on openstack.org as of Ocata going forward.
>> >> No more releases will be published by the neutron release team.
>> >> The neutron team will stop proposing fixes for the upstream CI, if not
>> >> solely on a voluntary basis (e.g. I still felt like proposing [2]).
>> >>
>> >> How does it affect you, the user or the deployer?
>> >>
>> >> You can continue to use vpnaas and its CLI via the python-neutronclient 
>> >> and
>> >> expect it to work with neutron up until the newton
>> >> release/python-neutronclient 6.0.0. After this point, if you want a 
>> >> release
>> >> that works for Ocata or newer, you need to proactively request a release
>> >> [5], and reach out to a member of the neutron release team [3] for 
>> >> approval.
>> >
>> > I want to make an Ocata release (and, more importantly, the stable
>> > branch, for the benefit of consuming subprojects).
>> > For that purpose, the next step would be ocata-3, right?
>>
>> Hey Takashi,
>> If you want to release new version of neutron-vpnaas, please look at [1].
>> This is the place, which you need to update and based on provided
>> details, tags and branches will be cut.
>>
>> [1] 
>> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
>
> Unfortunately, since vpnaas is no longer part of an official project,
> we won't be using the releases repository to manage and publish
> information about the releases. It'll need to be done by hand.

Who can/should do it by hand?

>
> Doug
>
>>
>> BR, Dariusz
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Kevin Benton
>There is no such thing as an arbitrary API. It is like one person's
treasure is another person's trash; nobody loves to create arbitrary
APIs - there are genuine needs.

Arbitrary in the sense that new APIs can be developed on-demand to solve
one specific use case that leads to fragmentation and incoherency of the
larger API.

>Sometimes we fail to understand requirements, other times the
requirements are not articulated clearly, which could lead to impressions
that arbitrary things are being added.

These are the reasons I don't want it to be so easy to define new APIs out
of tree. We can end up with 3 vendors trying to solve the same problem for
their customers and each will come up with a completely different
API/abstraction level that's tied to the vendor implementation.

If we have community discussions around adding new APIs in Neutron lib,
there is a documented review that has all of the requirements and use cases
articulated clearly.

>As newer workloads/technologies evolve, the need to orchestrate them
requires flexibility in the API.

I disagree, I think it means we need to add new Neutron APIs to orchestrate
them. Allowing each vendor to bypass the Neutron API entirely and come up
with their own API for orchestrating these workloads will just lead to
fragmentation.

>As I read/understand more about Gluon, that is being pushed by both
Operators/Users and Vendors.

I haven't seen many users pushing for a networking API that is different on
every cloud they use...

On Wed, Jan 25, 2017 at 10:06 AM, Sukhdev Kapur 
wrote:

> Folks, this is a great discussion. I hope this leads us to some good
> consensus and direction :-)
> I would suggest that we discuss this in upcoming PTG meeting as well.
>
>
> On Wed, Jan 25, 2017 at 5:20 AM, Kevin Benton  wrote:
>
>> >So I'm not sure that Kevin and Thierry's answers address Sukhdev's
>> point.
>>
>> I stated that I am happy to develop new APIs in Neutron. "So I'm all for
>> developing new APIs *as a community*"...
>>
>
> +1
>
>
>>
>> The important distinction I am making is that we can make new APIs (and
>> we do with routed networks as you mentioned, VLAN aware VMs, etc), but I
>> don't want to see the project just become a framework to make it even
>> easier than it is to define an arbitrary networking API.
>>
>
> There is no such thing as an arbitrary API. It is like one person's
> treasure is another person's trash; nobody loves to create arbitrary
> APIs - there are genuine needs. Sometimes we fail to understand
> requirements, other times the requirements are not articulated clearly,
> which could lead to impressions that arbitrary things are being added.
>
>
>
>> >But I think that the point that Sukhdev raised - about other networking
>> projects being suggested because of Neutron being perceived as not flexible
>> enough
>>
>> I'm explicitly stating that if someone wants Neutron to become more
>> flexible to develop arbitrary APIs that diverge across deployments even
>> more, that's not something I'm going to support. However, making it
>> flexible for operators/users by adding new vendor-agnostic APIs is
>> something I will encourage.
>>
>
>> The reason I am stressing that distinction is because we have vendors
>> that want something like Gluon that allows them to define new arbitrary
>> APIs without upstreaming anything or working with the community to
>> standardize anything.
>>
> I understand that may be useful for some artisanal NFV workloads, but
>> that's not the direction I want to take Neutron.
>>
>
> The OpenStack community consists of vendors and operators/users, to
> facilitate the adoption of newer technologies as they evolve. As newer
> workloads/technologies evolve, the need to orchestrate them requires
> flexibility in the API. Labeling them as an arbitrary API (just because
> they do not fall into the traditional L2/L3 networking model) is a harsh
> characterization.
>
>
>
>> Flexibility for operators/users = GOOD
>> Flexibility for vendor API injection = BAD
>>
>
> As I read/understand more about Gluon, that is being pushed by both
> Operators/Users and Vendors. I wonder which one is GOOD and which one is
> BAD :-):-)
>
> cheers..
> -Sukhdev
>
>
>
>
>>
>> On Wed, Jan 25, 2017 at 4:55 AM, Neil Jerram  wrote:
>>
>>> On Wed, Jan 25, 2017 at 10:20 AM Thierry Carrez 
>>> wrote:
>>>
 Kevin Benton wrote:
 > [...]
 > The Neutron API is already very extensible and that's problematic. Right
 > now a vendor can write an out-of-tree service plugin or driver that adds
 > arbitrary fields and endpoints to the API that results in whatever
 > behavior they want. This is great for vendors because they can directly
 > expose features without having to make them vendor agnostic. However,
 > it's terrible for operators because it leads to lock-in and terrible for
 > the users because it leads to cross-cloud compatibility 

[openstack-dev] [QA] Meeting Thursday Jan 26th at 9:00 UTC

2017-01-25 Thread Ghanshyam Mann
Hello everyone,

Just a reminder that the weekly OpenStack QA team IRC meeting will be on
Thursday, Jan 26th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:

* https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_January_26th_2017_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the
next meeting will be at:

04:00 EST
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT

-gmann
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-25 Thread Adrian Turjak
I've posted some comments on the API Access patch.


The blueprint says that 'API Access' should sit at the top (Project)
level, but the way panel groups worked meant that setting the 'default'
panel group didn't work when that dashboard already had panel groups,
since the default panel group was annoyingly hidden away because of
somewhat odd template logic.
I submitted a bug report here:
https://bugs.launchpad.net/horizon/+bug/1659456

And proposed a fix for that here:
https://review.openstack.org/#/c/425486

With that change the default group panels are not hidden, and displayed
at the same level as the other panel groups. This then allows us to move
API Access to the top level where the blueprint says. This makes much
more sense since API Access isn't a compute only thing.

On 26/01/17 12:02, Fox, Kevin M wrote:
> Big Thanks! from me too. The old UI here was very unintuitive, so I
> had to field a lot of questions related to it. This is great. :)
>
> Kevin
> 
> *From:* Lingxian Kong [anlin.k...@gmail.com]
> *Sent:* Wednesday, January 25, 2017 2:23 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [horizon] FFE Request
>
> Hi, Rob,
>
> First, thanks for your work!
>
> What's your plan for the other two tabs (security group, floatingip)?
> I could see the split is very helpful no matter from performance
> perspective and both useful from end user's perspective.
>
> BTW, a huge +1 for this FFE!
>
>
>
>
> Cheers,
> Lingxian Kong (Larry)
>
> On Thu, Jan 26, 2017 at 9:01 AM, Adrian Turjak wrote:
>
> +1
>
> We very much need this as the performance of that panel is awful.
> This solves that problem while being a fairly minor code change
> which also provides much better UX.
>
>
> On 26/01/2017 8:07 AM, Rob Cresswell wrote:
>
> o/ 
>
> I'd like to request an FFE on
> https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security.
> This blueprint splits up the access and security tabs into 4
> distinct panels. The first two patches are
> https://review.openstack.org/#/c/408247 and
> https://review.openstack.org/#/c/425345/
>
> It's low risk; no API layer changes, mostly just moving code
> around.
>
> Rob
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] PTL Candidacy

2017-01-25 Thread Kevin Benton
I think we may also want to bring this up in a cross-project session at the
PTG. We are definitely to blame for some of the instability, but towards
the end of this cycle I have noticed lots of issues with HTTPS connection
errors, etc that don't seem to be related to Neutron at all. The squad team
will have to have members from other projects. :)

On Wed, Jan 25, 2017 at 10:29 AM, Ihar Hrachyshka 
wrote:

> On Tue, Jan 24, 2017 at 12:26 PM, Morales, Victor
>  wrote:
> > Given the latest issues related to memory consumption [1] in CI jobs,
> > I'm just wondering if you have a plan to deal with and/or improve it in
> > Neutron.
>
> AFAIU the root cause is still not clear, and we don't know if it's
> Neutron or job setup that triggers the OOM. I think we all see that
> the gate is not healthy lately (it's not just tempest, functional
> failure rate is also pretty bad). We need a squad team with clear
> ownership for failure tracking to get back to normal.
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] API models [was: PTL candidacy]

2017-01-25 Thread Kevin Benton
>Obviously this is close to my interests, and I see Kevin's raised Gluon as
the bogeyman (which it isn't trying to be).

No, I was pointing out that it's not what I want Neutron to become. Its
entire purpose is to make defining new APIs as simple as writing YAML (i.e.
networking APIs as a service). I suspect there will always be deployments
where features need to be redefined at runtime for solving very specific
use cases, but I don't want that to be the default model of networking in
OpenStack.
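
To make the contrast concrete, here is a purely hypothetical example of
the kind of YAML-defined API object being described (this is not Gluon's
actual schema; every name below is illustrative):

    WanCircuit:
      api:
        name: wan_circuits
      attributes:
        id: {type: uuid, required: true}
        port_id: {type: uuid, required: true}
        vlan_id: {type: integer, min: 1, max: 4094}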

>Setting aside all the above talk about how we might do things for a
moment: to take one specific feature example, it actually took several
/years/ to add VLAN-aware ports to OpenStack.  This is an example of a
feature that doesn't affect or interest the vast majority of the user
community

Now that it has been standardized on, other services can actually use it to
build things (e.g. Kuryr, Octavia). If each vendor just went and wrote up
their own extension for this, no project could reliably build anything on
top of it that would work with multiple backend vendors.

>and perhaps we need a governance model around ensuring that they are sane
and reusable.
>Gluon, specifically, is about solving exclusively the technical question
of how to introduce independent APIs

Doesn't that make Gluon support the opposite of having a governance model
and standardization for APIs?

>I do not care if we make Neutron extensible in some different way to
permit this, if we start a new project or whatever, I just want it to
happen.

I would like to promote experimentation with APIs with Neutron as well, but
I don't think it's ever going to be as flexible as the
configuration-defined behavior Gluon allows. My goal is getting new
features that extend beyond one backend and without some community
agreement on new APIs, I don't see how that's possible.



On Jan 25, 2017 12:44, "Ian Wells"  wrote:

I would certainly be interested in dicussing this, though I'm not currently
signed up for the PTG.  Obviously this is close to my interests, and I see
Kevin's raised Gluon as the bogeyman (which it isn't trying to be).

Setting aside all the above talk about how we might do things for a moment:
to take one specific feature example, it actually took several /years/ to
add VLAN-aware ports to OpenStack.  This is an example of a feature that
doesn't affect or interest the vast majority of the user community, and
which is almost certainly not installed in the cloud you're currently
working on, and you probably have no intention of ever programming, and
which even had cross-vendor support.  It's useful and there are times that
you can't do without it; there were people ready to write the code.  So why
was it so very hard?


I hope we will all agree on these points:

- Neutron's current API of networks, subnets and ports is fine for what it
does.  We like it, we write apps using it, it doesn't need to change
- The backward compatibility and common understanding of Neutron's API is
paramount - applications should work everywhere, and should continue to
work as Neutron evolves
- Some people want to do different things with networks, and that doesn't
make them bad people
- What is important about APIs is that they are *not* tied to an
implementation or reinvented by every backend provider, but commonly agreed

This says we find pragmatic ways to introduce sane, consumable APIs for any
new thing we want to do, and perhaps we need a governance model around
ensuring that they are sane and reusable.  None of this says that every API
should fit neatly into the network/port/subnet model we have - it was
designed for, and is good at describing, L2 broadcast domains.  (Gluon,
specifically, is about solving exclusively the technical question of how to
introduce independent APIs, and not the governance one of how to avoid
proliferation.)

For any new feature, I would suggest that we fold it in to the current API
if it's widely useful and closely compatible with the current model.  There
are clearly cases where changing the current API in complex ways to serve
1% of the audience is not necessary and not helpful, and I think this is
where our problems arise.  And by 'the current API' I mean the existing
network/port/subnet model that is currently the only way to describe how
traffic moves from one port to another.  I do not mean 'we must start
another project' or 'we must have another endpoint'.

However, we should have a way to avoid affecting this API if it makes no
sense to put it there.  We should find a way of offering new forwarding
features without finding increasingly odd ways of making networks, ports
and subnets serve the purpose.  They were created to describe L2 overlays,
which is still mostly how they are used - to the point that the most
widely used plugin by far is the modular *L2* plugin.  It's hardly a
surprise that these APIs don't map to every possible networking setup in
the world.  My argument is that it's 

Re: [openstack-dev] [kolla][kolla-ansible] How to restart working on a patch started before repo split?

2017-01-25 Thread Hiroki Ito

Hi Paul,

> To my knowledge it's not possible to change the target repository for an
> open patch. What we've done so far is to abandon the in progress one,
> cherry-pick to the correct repository and start a new review (adding the
> original author as a co-author of course).

Thanks for sharing your knowledge. I'll try it.
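
Roughly, I expect the workflow to look like this (the patchset ref below
is a placeholder for the latest patchset of change 391420):

    git clone https://git.openstack.org/openstack/kolla-ansible
    cd kolla-ansible
    # fetch the original change from the old repo and cherry-pick it
    git fetch https://git.openstack.org/openstack/kolla refs/changes/20/391420/8
    git cherry-pick FETCH_HEAD
    # add "Co-Authored-By: ..." to the commit message, then:
    git review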

Hiroki


On 2017/01/25 19:31, Paul Bourke wrote:

Hi Hiroki,

To my knowledge it's not possible to change the target repository for an
open patch. What we've done so far is to abandon the in progress one,
cherry-pick to the correct repository and start a new review (adding the
original author as a co-author of course).

Hope that helps,
-Paul

On 25/01/17 09:02, Hiroki Ito wrote:

Hi Kolla,

I would like to restart working on the following BP[0] and patch[1]. To
do so, I have to amend the patch and send it to kolla-ansible, since the
repository has been split into kolla and kolla-ansible, right?

[0] https://blueprints.launchpad.net/kolla-ansible/+spec/graceful-shutdown
[1] https://review.openstack.org/#/c/391420/

Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Hiroki Ito
NTT Software Innovation Center
TEL: +81-422-59-4180
E-mail: ito.hir...@lab.ntt.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][networking-vpp]networking-vpp for VPP 17.01 is available

2017-01-25 Thread Ian Wells
In conjunction with the release of VPP 17.01, I'd like to invite you all to
try out networking-vpp for VPP 17.01.  VPP is a fast userspace forwarder
based on the DPDK toolkit, and uses vector packet processing algorithms to
minimise the CPU time spent on each packet and maximise throughput.
networking-vpp is a ML2 mechanism driver that controls VPP on your control
and compute hosts to provide fast L2 forwarding under Neutron.

The latest version has been updated to work with the new features of VPP
17.01, including security group support based on VPP's ACL functionality.

The README [1] explains how you can try out VPP using devstack, which is
now pleasantly simple; the devstack plugin will deploy the mechanism driver
and VPP itself and should give you a working system with a minimum of
hassle.
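
For the impatient, the devstack setup boils down to a local.conf fragment
along these lines (see the README [1] for the authoritative settings; the
mechanism driver name here is an assumption):

    [[local|localrc]]
    enable_plugin networking-vpp https://github.com/openstack/networking-vpp
    Q_PLUGIN=ml2
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=vpp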

We plan on continuing development between now and VPP's 17.04 release in
April.  There are several features we're planning to work on (you'll find a
list in our RFE bugs at [2]), and we welcome anyone who would like to come
help us.

Everyone is welcome to join our new biweekly IRC meetings, Monday 0800 PST
= 1600 GMT, due to start next Monday.
-- 
Ian.

[1]https://github.com/openstack/networking-vpp/blob/17.01/README.rst
[2]
https://bugs.launchpad.net/networking-vpp/+bugs?orderby=milestone_name=0
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][barbican][octavia]certs don't get deregistered in barbican after lbaas listener delete

2017-01-25 Thread Jiahao Liang (Frankie)
Thanks rm_work.

I also noticed something that needs to be handled properly.

For barbican, delete_cert() only deregisters the cert without actually
deleting it. That's safe for us to call during
delete_listener()/delete_loadbalancer().

But if the user happens to use another cert_manager, say the
local_cert_manager, the same delete_cert() method will do a real delete of
the cert.

Probably we need to implement register_consumer()/remove_consumer()
methods for the cert_manager interface and call them in neutron_lbaas as
well.
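
A rough sketch of what that interface addition could look like (method
names are the ones proposed here, not existing neutron-lbaas code):

    class CertManager(object):
        """Hypothetical additions to the cert_manager interface."""

        def register_consumer(self, cert_ref, lb_id, service_name='lbaas'):
            # Barbican impl: add a consumer record; local impl: no-op.
            raise NotImplementedError()

        def remove_consumer(self, cert_ref, lb_id, service_name='lbaas'):
            # Remove the consumer record without deleting the cert itself.
            raise NotImplementedError()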


On Wed, Jan 25, 2017 at 10:48 Adam Harwell  wrote:

> I've got this on my list of things to look at -- I don't know if it was
> you I was talking with on IRC the other day about this issue, but I'm
> definitely aware of it. As soon as we are past the Ocata feature freeze
> crunch, I'll take a closer look.
>
> My gut says that we should be calling the delete (which is not a real
> delete) when the LB is deleted, and not doing so is a bug, but I'll need to
> double check the logic as it has been a while since I touched this.
>
> --Adam (rm_work)
>
> On Mon, Jan 23, 2017, 18:38 Jiahao Liang (Frankie) <
> gzliangjia...@gmail.com> wrote:
>
> Hi community,
>
> I created a loadbalancer with a listener whose protocol is
> "TERMINATED_HTTPS" and specified --default-tls-container-ref with a ref
> to a secret container from Barbican.
> However, after I deleted the listener, lbaas wasn't removed from the
> Barbican container consumer list.
>
> $ openstack secret container get http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4
>
> | Field          | Value                                                                         |
> | Container href | http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4 |
> | Name           | tls_container2                                                                |
> | Created        | 2017-01-19 12:44:07+00:00                                                     |
> | Status         | ACTIVE                                                                        |
> | Type           | certificate                                                                   |
> | Certificate    | http://192.168.20.24:9311/v1/secrets/bfc2bf01-0f23-4105-bf09-c75839b6b4cb    |
> | Intermediates  | None                                                                          |
> | Private Key    | http://192.168.20.24:9311/v1/secrets/c85d150e-ec84-42e0-a65f-9c9ec19767e1    |
> | PK Passphrase  | None                                                                          |
> | Consumers      | {u'URL': u'lbaas://RegionOne/loadbalancer/5e7768b9-7aa9-4146-8a71-6291353b447e', u'name': u'lbaas'} |
>
>
> I went through the neutron-lbaas code base. We do register a consumer
> during the creation of a "TERMINATED_HTTPS" listener in [1], where
> get_cert() registers lbaas as a consumer with the Barbican cert_manager
> (https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L177).
> But we somehow don't deregister it during the deletion in [2]; we
> probably need to call delete_cert() from the Barbican cert_manager to
> remove the consumer
> (https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L187).
>
> [1]: https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L642
> [2]: https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L805
>
>
> My questions are:
> 1. Is that a bug?
> 2. Or is it an intentional design choice, letting the vendor driver
> handle it?
>
> It looks more like a bug to me.
>
> Any thoughts?
>
>
> Best,
> Jiahao
> --
>
> *梁嘉豪/Jiahao LIANG (Frankie) *
> Email: gzliangjia...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Pike PTG Etherpad

2017-01-25 Thread Jeremy Stanley
Just a heads-up I've been meaning to send for a while... as
discussed in the last month of Infra meetings we've got a pad here
for people to pitch ideas of things they want to collaborate on in
the Infra team space at the PTG on Monday and Tuesday:

https://etherpad.openstack.org/p/infra-ptg-pike

It's pretty much a free-for-all; as long as there are at least two
people who want to work together on a topic and it's Infra-related
we'll do our best to accommodate. It's also listed with all the
others so you don't need to remember the pad name:

https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

I'm looking forward to seeing lots of you in a few weeks! I and a
number of other Infra team members will be around for the full week
so if there's any related discussions you want to have with your own
teams just give us a heads up and we can try to have someone with
some Infra-sense pop in and help.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] [ptl] PTL Candidacy for Pike

2017-01-25 Thread Serg Melikyan
Felipe,

thank you so much for all your efforts so far, I am sure that you will
be a great leader for the Murano!

On Tue, Jan 24, 2017 at 10:58 AM, MONTEIRO, FELIPE C  wrote:
> Hi,
>
> My name is Felipe Monteiro. I have decided to run for Murano PTL for the
> Pike release cycle. I would like to continue to grow Murano in order to
> make the project more stable, mature and performant, while exploring
> avenues to evolve the project.
>
> I've been working on the project for the past couple of months. Working
> with Murano is probably the most exciting thing I do professionally. When
> I first joined OpenStack, I was overwhelmed by the sheer volume of code,
> as well as the complexity involved in how the various services interact.
> Thanks to the welcoming culture surrounding Murano, I was immediately
> given help, advice and guidance, which, after a couple of months of
> persistence and hard work, has enabled me to feel like a solid
> contributor within OpenStack.
>
> Over the past couple of months, I've been heavily invested in improving
> Murano's testing framework, in order to make the project more stable and
> more mature. This has resulted in Murano and Murano Dashboard's unit test
> coverage approximately doubling, as well as various gaps in Murano
> Dashboard's functional testing being filled via Selenium tests. I also
> regularly review other developers' code and take on the arduous task of
> editing Murano's documentation, because, as a native speaker of English,
> I really have no excuse but to help out in this regard.
>
> I'm really excited to work with Murano further, and to grow the community
> surrounding it. The more companies that contribute to Murano, the
> stronger the community around it will be, capable of maintaining and
> enhancing Murano for years to come.
>
> The main goals I'd like to focus on for Murano during Pike are:
>
> * Significantly improve Murano Tempest testing. This includes integrating
>   Murano with a new OpenStack project called Patrole. Patrole, a Tempest
>   plugin responsible for testing Role-Based Access Control, was
>   spearheaded and founded by AT&T developers and contributors. Its
>   purpose is to help harden the security underpinning various OpenStack
>   services - Murano included.
> * Significantly improve Murano documentation.
> * Improve Murano's (particularly Murano Dashboard's) integration with
>   GLARE - there are a number of areas in the UI where GLARE is not
>   supported. These areas should be fixed to provide users who use Murano
>   and GLARE a better user experience.
> * Focus on addressing a number of outstanding bugs in Murano. These
>   include adding SIGHUP signal support to Murano, supporting multiple
>   rabbit hosts in RabbitMQ for Murano Agent, and addressing other UI
>   bugs, like abandon environment hanging issues.
>
> My personal ambition leads me to want to also tackle the following items:
>
> * Possibly integrate Murano with other OpenStack projects, including
>   Ceilometer, Searchlight, or Kolla-Kubernetes.
> * Investigate multi-cloud support for Murano, so that it works with
>   different cloud services including VMWare and Amazon.
> * Increase developer contribution in Murano through my own coding
>   efforts, as well as through AT&T's resources.
>
> Thanks for taking the time to read through this roadmap and to consider
> my candidacy. Regardless of the outcome, I'm excited to continue
> contributing to Murano, be it code or code reviews.
>
> Thank You,
>
> Felipe Monteiro
> Associate-Applications Developer
> TDP - Emerging Technologies Program
> Centralized Development
> Office: +1 (678) 917-1767
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-25 Thread Fox, Kevin M
Big Thanks! from me too. The old UI here was very unintuitive, so I had to 
field a lot of questions related to it. This is great. :)

Kevin

From: Lingxian Kong [anlin.k...@gmail.com]
Sent: Wednesday, January 25, 2017 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon] FFE Request

Hi, Rob,

First, thanks for your work!

What's your plan for the other two tabs (security group, floatingip)? I could 
see the split is very helpful no matter from performance perspective and both 
useful from end user's perspective.

BTW, a huge +1 for this FFE!




Cheers,
Lingxian Kong (Larry)

On Thu, Jan 26, 2017 at 9:01 AM, Adrian Turjak wrote:
+1

We very much need this as the performance of that panel is awful. This solves 
that problem while being a fairly minor code change which also provides much 
better UX.


On 26/01/2017 8:07 AM, Rob Cresswell wrote:
o/

I'd like to request an FFE on 
https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security. 
This blueprint splits up the access and security tabs into 4 distinct panels. 
The first two patches are https://review.openstack.org/#/c/408247 and 
https://review.openstack.org/#/c/425345/

It's low risk; no API layer changes, mostly just moving code around.

Rob


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-25 Thread Serg Melikyan
I would like to comment a little bit regarding usage of Glare in
Murano and Mirantis OpenStack:

> How much have these projects adopted Glare?
Glare is preferred backend for storing murano packages, which provides
versioning capabilities (they are not available without it)

>Is Glare being deployed already?
Mirantis OpenStack 9.0 by default is deploying Murano with Glare used
as a backend.

>What projects are the main consumers of Glare?
Murano

On Wed, Jan 25, 2017 at 2:56 PM, Mike Perez  wrote:
> On 18:16 Jan 24, Mikhail Fedosin wrote:
>> Hey, Flavio :) Thanks for your questions!
>>
>> As you said, currently only Nokia is adopting Glare for its own platform;
>> within OpenStack, I believe Mistral will start to use it soon.
>> In my opinion Glare's adoption is low because the project is not included
>> under the Big Tent. I think that will change soon: I'm now finishing the
>> Glare v1 API proposal, and when it's done we will apply under the Big
>> Tent.
>>
>> About the Glance v2 API - as I said, they are more or less the same, with
>> several cosmetic differences (in Glance the status is called 'queued', in
>> Glare we renamed it to 'drafted', and so on). For this reason I'm going to
>> implement a middleware that will provide a full Image API v2 for Glare
>> (even with the unnecessary '/v2' prefix), and glance clients will be able
>> to communicate with it as with Glance. It's definitely doable and we can
>> discuss it in more detail during the PTG.
>
> Both Flavio and Doug asked you to expand on the issues with the Glance API. 
> Can
> you please expand on that?
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Ian Wells
On 25 January 2017 at 14:17, Monty Taylor  wrote:

> > Adding an additional networking project to try to solve this will only
> > make things work. We need one API. If it needs to grow features, it
> > needs to grow features - but they should be features that all of
> > OpenStack users get.
>
> WORSE - will make things WORSE - not work. Sorry for potentially
> completely misleading typo.
>

I should perhaps make clear that whenever I talk about 'other networking
APIs' I am not saying 'I think we should throw Neutron away' or 'we should
invent a shiny new API and compete with Neutron'.  I am saying that there's
value in keeping the API of new features from intersecting the old when we add
things that are logically well separated from what we currently have.  As
it is, when you extend Neutron using current techniques, you have to touch
several elements of the existing API and you have no separation -
effectively you have to build outcroppings onto an API monolith.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Planning for the Pike PTG

2017-01-25 Thread Lance Bragstad
I think the keystone team is in the same spot.

We have an etherpad [0] for jotting down ideas, but we haven't parsed it or
grouped it into topics yet. I think we were going to start working on
that next week since we're still in the middle of wrapping up the last few
bits for ocata-3. I was just talking to Steve about this the other day,
trying to figure the cross-project stuff out since the keystone team has a
couple things we want to bounce off other projects (specifically nova and
cinder).

[0] https://etherpad.openstack.org/p/keystone-pike-ptg

On Wed, Jan 25, 2017 at 4:43 PM, Ken'ichi Ohmichi 
wrote:

> I am preparing for PTG sessions.
> How much capacity is in each room? 30 people or more?
>
> We might want to have different discussions or coding meetups in
> parallel in the same room, because each developer concentrates on
> different working topics (Tempest, Devstack, Grenade, Patrole, etc
> under QA project).
> So it would be nice to make a few circles of desks and chairs for each topic.
> Maybe this is a common thing in the other projects and I'd like to
> know how many topics can run in parallel from the room capacity.
>
> Thanks
> Ken Ohmichi
>
> ---
>
>
>
> 2017-01-04 2:16 GMT-08:00 Thierry Carrez :
> > Matt Riedemann wrote:
> >> I haven't been living under a rock but I'm not aware of any major
> >> announcements regarding session planning for the PTG - has that happened
> >> somewhere and I'm just in the dark?
> >>
> >> I'm mostly wondering about the Monday and Tuesday cross-project sessions
> >> - are those time-boxed sessions like at the summit and will have a
> >> schedule? Or are we just playing fast and loose and hoping someone will
> >> lead us out of a hallway and into a room for Major Synergy (tm)?
> >
> > There are no "cross-project sessions" on Monday-Tuesday. There are a
> > number of horizontal team meetings (Infrastructure, QA, Documentation,
> > Security, Oslo...), transverse team meetings (Horizon, Kolla,
> > OpenStackClient, AppCatalog, RPM packaging...), and workgroup meetings
> > (Architecture WG, Stewardship WG, Interop WG...). All of those are full
> > days (or full 2-days) in a room owned by a given team (and PTL or chair)
> > and they are free to organize in whatever way they see fit (there are no
> > time-boxed sessions, so we expect most teams to use an etherpad-based
> > open agenda).
> >
> > We'll also have a room (or two) dedicated to the Pike goals (currently
> > under discussion) -- whoever wants to meet and make quick progress on
> > the Pike goals during the PTG should be able to find facilitators there.
> > We are still waiting on the final list of goals to formally make
> > progress on that front.
> >
> > Additionally from Monday to Thursday we'll have one openly-scheduled
> > fishbowl room, in case we need to have specific discussions. Think a
> > Cinder/Nova/os-brick discussion outside of the Nova and Cinder-specific
> > rooms, but for which you'd rather not all stand in the hallway. For that
> > room I thought we could set up an etherpad with time slots and let
> > people schedule topics there on the spot... But I'm happy to take
> > suggestions.
> >
> >> I see project teams are working on getting etherpads together for
> >> topics, including myself, which got me thinking about how to plan the
> >> Wed-Friday sessions which for a midcycle meetup would normally be a list
> >> of topics that we'd go through in order (or by priority) but not
> >> time-boxed or scheduled. But then I got thinking about how the PTG is
> >> right before we start working on Pike, so I'm now thinking we need more
> >> structure than what we did at the midcycles, and more like what we do at
> >> the design summit with respect to scheduled discussions about things
> >> that are going to be worked on in the upcoming release, figuring out
> >> goals, determining review priorities, etc - which is actually a lot more
> >> work to plan and schedule ahead of time, especially when we consider
> >> (vertical) cross-project sessions like between nova/cinder or
> nova/neutron.
> >
> > One of the goals of splitting the Design Summit into PTG & Forum is to
> > separate the "feedback/requirements gathering" phase (at the Forum) from
> > the "let's plan and bootstrap the actual work" phase (at the PTG). The
> > Pike PTG is arguably a transition PTG, since we won't have had a "Forum"
> > in the months before. The PTG is still happening at a point where most
> > people already started working on Pike though (rather than "right before
> > we start working on Pike"), and ideally should be focused on
> > implementation plans, review priorities and getting things done, without
> > the constraints of time-boxed slots.
> >
> > That said, you should definitely take advantage of having everyone
> > around (and with less scheduling constraints compared to Summit) to
> > discuss inter-project questions (think Nova/Neutron or Nova/Cinder). You
> > can hold those 

Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-25 Thread Mike Perez
On 18:16 Jan 24, Mikhail Fedosin wrote:
> Hey, Flavio :) Thanks for your questions!
> 
> As you said, currently only Nokia is adopting Glare for its own platform;
> within OpenStack, I believe Mistral will start to use it soon.
> In my opinion Glare's adoption is low because the project is not included
> under the Big Tent. I think that will change soon: I'm now finishing the
> Glare v1 API proposal, and when it's done we will apply under the Big
> Tent.
> 
> About the Glance v2 API - as I said, they are more or less the same, with
> several cosmetic differences (in Glance the status is called 'queued', in
> Glare we renamed it to 'drafted', and so on). For this reason I'm going to
> implement a middleware that will provide a full Image API v2 for Glare
> (even with the unnecessary '/v2' prefix), and glance clients will be able
> to communicate with it as with Glance. It's definitely doable and we can
> discuss it in more detail during the PTG.

Both Flavio and Doug asked you to expand on the issues with the Glance API. Can
you please expand on that?

-- 
Mike Perez


pgp07yDGWQQEK.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Planning for the Pike PTG

2017-01-25 Thread Ken'ichi Ohmichi
I am preparing for PTG sessions.
How much capacity is in each room? 30 people or more?

We might want to have different discussions or coding meetups in
parallel in the same room, because each developer concentrates on
different working topics (Tempest, Devstack, Grenade, Patrole, etc
under QA project).
So it would be nice to make a few circles of desks and chairs for each topic.
Maybe this is a common thing in the other projects and I'd like to
know how many topics can run in parallel from the room capacity.

Thanks
Ken Ohmichi

---



2017-01-04 2:16 GMT-08:00 Thierry Carrez :
> Matt Riedemann wrote:
>> I haven't been living under a rock but I'm not aware of any major
>> announcements regarding session planning for the PTG - has that happened
>> somewhere and I'm just in the dark?
>>
>> I'm mostly wondering about the Monday and Tuesday cross-project sessions
>> - are those time-boxed sessions like at the summit and will have a
>> schedule? Or are we just playing fast and loose and hoping someone will
>> lead us out of a hallway and into a room for Major Synergy (tm)?
>
> There are no "cross-project sessions" on Monday-Tuesday. There are a
> number of horizontal team meetings (Infrastructure, QA, Documentation,
> Security, Oslo...), transverse team meetings (Horizon, Kolla,
> OpenStackClient, AppCatalog, RPM packaging...), and workgroup meetings
> (Architecture WG, Stewardship WG, Interop WG...). All of those are full
> days (or full 2-days) in a room owned by a given team (and PTL or chair)
> and they are free to organize in whatever way they see fit (there are no
> time-boxed sessions, so we expect most teams to use an etherpad-based
> open agenda).
>
> We'll also have a room (or two) dedicated to the Pike goals (currently
> under discussion) -- whoever wants to meet and make quick progress on
> the Pike goals during the PTG should be able to find facilitators there.
> We are still waiting on the final list of goals to formally make
> progress on that front.
>
> Additionally from Monday to Thursday we'll have one openly-scheduled
> fishbowl room, in case we need to have specific discussions. Think a
> Cinder/Nova/os-brick discussion outside of the Nova and Cinder-specific
> rooms, but for which you'd rather not all stand in the hallway. For that
> room I thought we could set up an etherpad with time slots and let
> people schedule topics there on the spot... But I'm happy to take
> suggestions.
>
>> I see project teams are working on getting etherpads together for
>> topics, including myself, which got me thinking about how to plan the
>> Wed-Friday sessions which for a midcycle meetup would normally be a list
>> of topics that we'd go through in order (or by priority) but not
>> time-boxed or scheduled. But then I got thinking about how the PTG is
>> right before we start working on Pike, so I'm now thinking we need more
>> structure than what we did at the midcycles, and more like what we do at
>> the design summit with respect to scheduled discussions about things
>> that are going to be worked on in the upcoming release, figuring out
>> goals, determining review priorities, etc - which is actually a lot more
>> work to plan and schedule ahead of time, especially when we consider
>> (vertical) cross-project sessions like between nova/cinder or nova/neutron.
>
> One of the goals of splitting the Design Summit into PTG & Forum is to
> separate the "feedback/requirements gathering" phase (at the Forum) from
> the "let's plan and bootstrap the actual work" phase (at the PTG). The
> Pike PTG is arguably a transition PTG, since we won't have had a "Forum"
> in the months before. The PTG is still happening at a point where most
> people already started working on Pike though (rather than "right before
> we start working on Pike"), and ideally should be focused on
> implementation plans, review priorities and getting things done, without
> the constraints of time-boxed slots.
>
> That said, you should definitely take advantage of having everyone
> around (and with less scheduling constraints compared to Summit) to
> discuss inter-project questions (think Nova/Neutron or Nova/Cinder). You
> can hold those within your room if you think all team members should
> follow them, or take advantage of the aforementioned extra fishbowl room
> to hold those.
>
>> In other words, the fact I haven't had anxiety yet about planning the
>> PTG makes me anxious that I'm falling way behind already.
>
> I don't think you are way behind. Now is a good time to brainstorm on an
> etherpad what your team needs to discuss and do during those days. If
> you identify inter-project discussions, there is still time to reach out
> to those other teams to make sure it's on their radar as well, and
> arrange a common time for the discussion. I like to think we can achieve
> that without the stress and constraints of strict centralized
> scheduling, using a more peer-to-peer/unconference approach to magically
> make 

Re: [openstack-dev] [horizon] FFE Request

2017-01-25 Thread Richard Jones
Hi Rob, FFE granted for those two in-flight patches.

On 26 January 2017 at 09:23, Lingxian Kong  wrote:
> Hi, Rob,
>
> First, thanks for your work!
>
> What's your plan for the other two tabs (security group, floating IP)? I
> could see the split being very helpful, both from a performance perspective
> and from an end user's perspective.
>
> BTW, a huge +1 for this FFE!
>
>
>
>
> Cheers,
> Lingxian Kong (Larry)
>
> On Thu, Jan 26, 2017 at 9:01 AM, Adrian Turjak 
> wrote:
>>
>> +1
>>
>> We very much need this as the performance of that panel is awful. This
>> solves that problem while being a fairly minor code change which also
>> provides much better UX.
>>
>>
>> On 26/01/2017 8:07 AM, Rob Cresswell  wrote:
>>
>> o/
>>
>> I'd like to request an FFE on
>> https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security.
>> This blueprint splits up the access and security tabs into 4 distinct
>> panels. The first two patches are https://review.openstack.org/#/c/408247
>> and https://review.openstack.org/#/c/425345/
>>
>> It's low risk; no API layer changes, mostly just moving code around.
>>
>> Rob
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-25 Thread Lingxian Kong
Hi, Rob,

First, thanks for your work!

What's your plan for the other two tabs (security group, floating IP)? I
could see the split being very helpful, both from a performance perspective
and from an end user's perspective.

BTW, a huge +1 for this FFE!




Cheers,
Lingxian Kong (Larry)

On Thu, Jan 26, 2017 at 9:01 AM, Adrian Turjak 
wrote:

> +1
>
> We very much need this as the performance of that panel is awful. This
> solves that problem while being a fairly minor code change which also
> provides much better UX.
>
>
> On 26/01/2017 8:07 AM, Rob Cresswell  wrote:
>
> o/
>
> I'd like to request an FFE on https://blueprints.
> launchpad.net/horizon/+spec/reorganise-access-and-security. This
> blueprint splits up the access and security tabs into 4 distinct panels.
> The first two patches are https://review.openstack.org/#/c/408247 and
> https://review.openstack.org/#/c/425345/
>
> It's low risk; no API layer changes, mostly just moving code around.
>
> Rob
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Monty Taylor
On 01/25/2017 08:16 AM, Monty Taylor wrote:
> On 01/24/2017 06:42 PM, Sukhdev Kapur wrote:
>>
>> Ihar and Kevin, 
>>
>> As our potential future PTLs, I would like to draw your attention to one
>> of the critical issues regarding Neutron as "the" networking service in
>> OpenStack. 
>>
>> I keep hearing off and on that Neutron is not flexible enough to address many
>> networking use cases and hence a new (or additional) networking project
>> is needed. For example, to address the use cases around NFV (rigid
>> resource inter-dependencies).  Another one that keeps popping up is that it
>> is very hard/difficult to add/enhance Neutron API - hence, I picked this
>> one goal called out in Ihar's candidacy. 
> 
> Adding an additional networking project to try to solve this will only
> make things work. We need one API. If it needs to grow features, it
> needs to grow features - but they should be features that all of
> OpenStack users get.

WORSE - will make things WORSE - not work. Sorry for potentially
completely misleading typo.

>> I would really like us to discuss this issue head-on and see what is
>> missing in Neutron APIs and what would take to make them extensible so
>> that vendors do not run around trying to figure out alternative
>> solutions
> 
> +100
> 
>> cheers..
>> -Sukhdev
>>
>>
>>  
>>
>> * Explore alternative ways to evolve Neutron API.  Piling up
>> extensions and allowing third parties to completely change core
>> resource behaviour is not ideal, and now that api-ref and API
>> consolidation effort in neutron-lib are closer to completion, we may
>> have better answers to outstanding questions than during previous
>> attempts to crack the nut. I would like to restart the discussion some
>> time during Pike.
>>
>>
>>
>>  
>>
>>
>> Thanks for attention,
>> Ihar
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Running experimental OVB and not OVB jobs separately.

2017-01-25 Thread Emilien Macchi
On Wed, Jan 25, 2017 at 3:42 PM, Sagi Shnaidman  wrote:
> HI, all
>
> I'd like to propose a somewhat different approach to running experimental
> jobs in TripleO CI.
> As you know, we have OVB and non-OVB jobs, and separate pipelines for
> running these two types.
>
> The current flow:
> if you need to run experimental jobs, you leave a comment with "check
> experimental" and all types of jobs will run - both OVB and non-OVB.
>
> The proposal:
> to run OVB jobs only, you'll need to leave the comment "check
> experimental-tripleo"; to run non-OVB jobs only, you'll still write
> "check experimental".
> To run all experimental jobs, OVB and non-OVB, just leave two comments:
> check experimental-tripleo
> check experimental
>
> From what I have observed, people usually want to run one or two
> experimental jobs, and usually only one type of them, so this more explicit
> trigger can save us expensive OVB resources.
> If that's not the case and you prefer to run all the experimental jobs we
> have at once, please provide feedback and I'll take this back.
>
> Patch about the topic: https://review.openstack.org/#/c/425184/

If infra is ok, I'm fine with this change.
FWIW, infra rejected the proposal of having recheck-tripleo (to run
tripleo jobs running in RH1 cloud only).
Let's see how it goes now.

> Thanks
> --
> Best regards
> Sagi Shnaidman
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI] Running experimental OVB and not OVB jobs separately.

2017-01-25 Thread Sagi Shnaidman
HI, all

I'd like to propose a somewhat different approach to running experimental
jobs in TripleO CI.
As you know, we have OVB and non-OVB jobs, and separate pipelines for
running these two types.

The current flow:
if you need to run experimental jobs, you leave a comment with "check
experimental" and all types of jobs will run - both OVB and non-OVB.

The proposal:
to run OVB jobs only, you'll need to leave the comment "check
experimental-tripleo"; to run non-OVB jobs only, you'll still write
"check experimental".
To run all experimental jobs, OVB and non-OVB, just leave two comments:
check experimental-tripleo
check experimental

From what I have observed, people usually want to run one or two
experimental jobs, and usually only one type of them, so this more explicit
trigger can save us expensive OVB resources.
If that's not the case and you prefer to run all the experimental jobs we
have at once, please provide feedback and I'll take this back.

Patch about the topic: https://review.openstack.org/#/c/425184/

Thanks
-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] API models [was: PTL candidacy]

2017-01-25 Thread Ian Wells
I would certainly be interested in discussing this, though I'm not currently
signed up for the PTG.  Obviously this is close to my interests, and I see
Kevin's raised Gluon as the bogeyman (which it isn't trying to be).

Setting aside all the above talk about how we might do things for a moment:
to take one specific feature example, it actually took several /years/ to
add VLAN-aware ports to OpenStack.  This is an example of a feature that
doesn't affect or interest the vast majority of the user community, and
which is almost certainly not installed in the cloud you're currently
working on, and you probably have no intention of ever programming, and
which even had cross-vendor support.  It's useful and there are times that
you can't do without it; there were people ready to write the code.  So why
was it so very hard?


I hope we will all agree on these points:

- Neutron's current API of networks, subnets and ports is fine for what it
does.  We like it, we write apps using it, it doesn't need to change
- The backward compatibility and common understanding of Neutron's API is
paramount - applications should work everywhere, and should continue to
work as Neutron evolves
- Some people want to do different things with networks, and that doesn't
make them bad people
- What is important about APIs is that they are *not* tied to an
implementation or reinvented by every backend provider, but commonly agreed

This says we find pragmatic ways to introduce sane, consumable APIs for any
new thing we want to do, and perhaps we need a governance model around
ensuring that they are sane and reusable.  None of this says that every API
should fit neatly into the network/port/subnet model we have - it was
designed for, and is good at describing, L2 broadcast domains.  (Gluon,
specifically, is about solving exclusively the technical question of how to
introduce independent APIs, and not the governance one of how to avoid
proliferation.)

For any new feature, I would suggest that we fold it into the current API
if it's widely useful and closely compatible with the current model.  There
are clearly cases where changing the current API in complex ways to serve
1% of the audience is not necessary and not helpful, and I think this is
where our problems arise.  And by 'the current API' I mean the existing
network/port/subnet model that is currently the only way to describe how
traffic moves from one port to another.  I do not mean 'we must start
another project' or 'we must have another endpoint'.

However, we should have a way to avoid affecting this API if it makes no
sense to put it there.  We should find a way of offering new forwarding
features without finding increasingly odd ways of making networks, ports
and subnets serve the purpose.  They were created to describe L2 overlays,
which is still mostly what they are used to do - to the point that the most
widely used plugin by far is the modular *L2* plugin.  It's hardly a
surprise that these APIs don't map to every possible networking setup in
the world.  My argument is that it's *sane* to want to invent quite
different APIs, it's not competition or dilution, and should we choose to
do so it's even a reasonable thing to do within Neutron's current framework.

I do not care if we make Neutron extensible in some different way to permit
this, if we start a new project or whatever, I just want it to happen. If
you think that the Gluon approach to this is the wrong way to make it
happen, and I'm seeing general consensus here that this could be done
within Neutron, then I welcome alternative suggestions.  But I do honestly
believe that we make our own pain by insisting on one API that must be
rigidly backward- and cross-compatible and simultaneously insisting that
all novel ideas be folded into it.
-- 
Ian.


On 25 January 2017 at 10:19, Sukhdev Kapur  wrote:

> Folks,
>
> This thread has gotten too long and hard to follow.
> It is clear that we should discuss/address this.
> My suggestion is that we organize a session at the Atlanta PTG meeting and
> discuss this.
>
> I am going to add this to the Neutron etherpad - should this be included
> in any other session as well?
>
> -Sukhdev
>
>
>
>
> On Tue, Jan 24, 2017 at 12:33 AM, Ihar Hrachyshka 
> wrote:
>
>> Hi team,
>>
>> I would like to propose my PTL candidacy for Pike.
>>
>> Some of you already know me. If not, here is my brief OpenStack bio. I
>> joined the community back in Havana, and managed to stick around till
>> now. During that time, I filled several project roles, like serving as a
>> Neutron liaison of different kinds (stable, oslo, release), fulfilling
>> my Neutron core reviewer duties, taking part in delivering some
>> longstanding features, leading QoS and upgrades subteams, as well as
>> being part of Neutron Drivers team. I also took part in miscellaneous
>> cross project efforts.
>>
>> I think my experience gives me a broad perspective on how the OpenStack
>> community and 

Re: [openstack-dev] Is the gate stuck?

2017-01-25 Thread Neil Jerram
Thanks for the explanation.  I agree that it has indeed moved now!


On Wed, Jan 25, 2017 at 8:18 PM Jeremy Stanley  wrote:

> On 2017-01-25 20:05:04 + (+), Neil Jerram wrote:
> > I'm not experienced in reading these things, but it seems that nothing is
> > currently getting through the integrated gate, and from [1] it appears
> this
> > is because the gate-tempest-dsvm-neutron-full-ubuntu-xenial job of [2]
> has
> > hung.
> >
> > [1] http://status.openstack.org/zuul/
> > [2] https://review.openstack.org/422924
>
> It seems to be moving to me, you probably caught it while Zuul was
> calculating a large number of merge-check jobs from a previous
> merged change or similar. The very tiny "queue lengths" numbers in
> the top-left indicate if Zuul has an outstanding backlog of events
> or results to process which are not yet indicated in the displayed
> pipeline contents. They're usually around 0 but if you see them
> spike up (possibly into the thousands) it's going to take a bit of
> time to burn those down first and update the states of various
> active changes/pipelines.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the gate stuck?

2017-01-25 Thread Jeremy Stanley
On 2017-01-25 20:05:04 + (+), Neil Jerram wrote:
> I'm not experienced in reading these things, but it seems that nothing is
> currently getting through the integrated gate, and from [1] it appears this
> is because the gate-tempest-dsvm-neutron-full-ubuntu-xenial job of [2] has
> hung.
> 
> [1] http://status.openstack.org/zuul/
> [2] https://review.openstack.org/422924

It seems to be moving to me, you probably caught it while Zuul was
calculating a large number of merge-check jobs from a previous
merged change or similar. The very tiny "queue lengths" numbers in
the top-left indicate if Zuul has an outstanding backlog of events
or results to process which are not yet indicated in the displayed
pipeline contents. They're usually around 0 but if you see them
spike up (possibly into the thousands) it's going to take a bit of
time to burn those down first and update the states of various
active changes/pipelines.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is the gate stuck?

2017-01-25 Thread Neil Jerram
I'm not experienced in reading these things, but it seems that nothing is
currently getting through the integrated gate, and from [1] it appears this
is because the gate-tempest-dsvm-neutron-full-ubuntu-xenial job of [2] has
hung.

[1] http://status.openstack.org/zuul/
[2] https://review.openstack.org/422924
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-25 Thread Adrian Turjak
+1

We very much need this as the performance of that panel is awful. This solves that problem while being a fairly minor code change which also provides much better UX.

On 26/01/2017 8:07 AM, Rob Cresswell  wrote:




o/ 


I'd like to request an FFE on https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security. This blueprint splits up the access and security tabs
 into 4 distinct panels. The first two patches are https://review.openstack.org/#/c/408247 and https://review.openstack.org/#/c/425345/ 


It's low risk; no API layer changes, mostly just moving code around.


Rob



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ovo] unhashable type error

2017-01-25 Thread Ihar Hrachyshka
Looking at the code, I don't see a clear case for even using the set()
type there. A list would seem to work just fine. Should we try converting
to lists there?

Nevertheless, we can look into extending the object base class for
hashing. I wonder though if it's something to tackle in Neutron scope.
Sounds more like an oslo.versionedobjects library feature request.
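
If we did push this down into a base class, a generic version might look
roughly like the following. This is a sketch only, assuming every object
declares a primary_keys list (as our DB objects do); it is not actual
oslo.versionedobjects code:

    # Sketch: derive __eq__/__hash__ from the declared primary keys so
    # instances can be used as set members or dict keys.
    class HashablePKMixin(object):
        primary_keys = []

        def _pk_tuple(self):
            return tuple(getattr(self, key) for key in self.primary_keys)

        def __eq__(self, other):
            return (type(other) is type(self) and
                    self._pk_tuple() == other._pk_tuple())

        def __ne__(self, other):
            # Python 2 does not derive __ne__ from __eq__.
            return not self.__eq__(other)

        def __hash__(self):
            # Hash only the primary keys, which should never mutate.
            return hash((type(self).__name__,) + self._pk_tuple())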

Ihar

On Wed, Jan 25, 2017 at 11:53 AM, Das, Anindita  wrote:
> Hi Ihar,
>
>
>
> While doing the integration for vlanallocation [1] I found that the OVO
> associated with VlanAllocation throws an “unhashable type” error with py35.
>
> The associated stack trace is here [2].
>
>
>
> To resolve this issue I added equality and hash methods to the
> vlanallocation OVO [3].
>
> My understanding is we will face the same problem with other OVO objects as
> well when we do similar operations on the object as in [1].
>
> Should we make all the OVO objects hashable or deal with it case by case?
> Thoughts?
>
>
>
>
>
> [1]
> https://review.openstack.org/#/c/367810/28/neutron/plugins/ml2/drivers/type_vlan.py@77
>
> [2] http://paste.openstack.org/show/596281/
>
> [3]
> https://review.openstack.org/#/c/367810/28/neutron/objects/plugins/ml2/vlanallocation.py@38-55
>
>
>
> Thanks,
>
> Anindita (irc: dasanind)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ovo] unhashable type error

2017-01-25 Thread Das, Anindita
Hi Ihar,

While doing the integration for vlanallocation [1] I found that the OVO associated 
with VlanAllocation throws an “unhashable type” error with py35.
The associated stack trace is here [2].

To resolve this issue I added equality and hash methods to the vlanallocation 
OVO [3].
My understanding is we will face the same problem with other OVO objects as 
well when we do similar operations on the object as in [1].
Should we make all the OVO objects hashable or deal with it case by case? Thoughts?
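
For illustration, the shape of the fix is roughly the following standalone
sketch (field names assumed; see [3] for the actual patch):

    # Standalone illustration, not the actual Neutron code: define
    # __eq__ and __hash__ over the primary-key fields so instances
    # can be members of a set().
    class VlanAllocation(object):
        def __init__(self, physical_network, vlan_id):
            self.physical_network = physical_network
            self.vlan_id = vlan_id

        def __eq__(self, other):
            return (isinstance(other, VlanAllocation) and
                    self.physical_network == other.physical_network and
                    self.vlan_id == other.vlan_id)

        def __hash__(self):
            return hash((self.physical_network, self.vlan_id))

    allocations = {VlanAllocation('physnet1', 100)}
    assert VlanAllocation('physnet1', 100) in allocations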


[1] 
https://review.openstack.org/#/c/367810/28/neutron/plugins/ml2/drivers/type_vlan.py@77
[2] http://paste.openstack.org/show/596281/
[3] 
https://review.openstack.org/#/c/367810/28/neutron/objects/plugins/ml2/vlanallocation.py@38-55

Thanks,
Anindita (irc: dasanind)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Dan Sneddon
On 01/23/2017 11:03 AM, Emilien Macchi wrote:
> Greeting folks,
> 
> I would like to propose some changes in our core members:
> 
> - Remove Jay Dobies who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Backer on os-collect-config and also docker bits in
> tripleo-common and tripleo-heat-templates.
> 
> Indeed, both Flavio and Steve have been involved in deploying TripleO
> in containers, their contributions are very valuable. I would like to
> encourage them to keep doing more reviews in and out container bits.
> 
> As usual, core members are welcome to vote on the changes.
> 
> Thanks,
> 

+1, thanks for all the work you did in the past, Jay!

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][docs] updating the minimum version of sphinx

2017-01-25 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2017-01-25 08:40:16 +0100:
> On 2017-01-25 08:24, Andreas Jaeger  wrote:
> > On 2017-01-25 05:03, Matthew Thode  wrote:
> >> On 01/24/2017 09:57 PM, Matthew Thode wrote:
> >>> Basically I'd like to ask the docs people if they are fine with updating
> >>> the minimum version of sphinx from sphinx>=1.2.1,!=1.3b1,<1.4 to
> >>> sphinx>=1.5.1.  This change seems fairly major, especially given that
> >>> there is no overlay between the 'before' and 'after' versions.
> >>>
> >>> I'd appreciate docs team reviews on this.  We plan on having a meeting
> >>> soon at 10:00 UTC in #openstack-meeting-alt if you care to join.
> >>>
> >>> https://review.openstack.org/418772
> >>>
> >>
> >> lesigh, was watching John's talk and mistyped...
> >> https://www.youtube.com/watch?v=HRRbogFZEcU
> >>
> >> retitled...
> > 
> > 
> > Docs is not part of requirements sync and has already switched - so this
> > is fine for documentation team.
> > 
> > Also, translations work fine...
> > 
> > So, go ahead - I gave my +1 on the review again,
> 
> I remember one thing when we switched: Doc team Sphinx Builds error out
> on warnings - and sphinx 1.5 gives a warning about :option: usage
> without a corresponding ".. option::" declaration.
> 
> Looking at
> http://codesearch.openstack.org/?q=%3Aoption%3A&i=nope&files=&repos=
> 
> We have quite a few projects that use :option: - and if those have
> warnings enabled, they will fail.
> 
> So, might be better to do this after the Ocata release,
> 
> Andreas

Yes, let's please wait for this. It's going to be a bit of work to make
sure all of the various doc builds (API, developer, release notes) work
for all projects.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] No cells v2 meeting today 25-Jan

2017-01-25 Thread melanie witt

Hi all,

Apologies for the late notice, but there won't be a cells meeting this 
afternoon being that everyone is busy scrambling for the feature freeze 
deadline this week. Let's catch up after FF.


Thanks,
-melanie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] FFE Request

2017-01-25 Thread Rob Cresswell
o/

I'd like to request an FFE on 
https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security. 
This blueprint splits up the access and security tabs into 4 distinct panels. 
The first two patches are https://review.openstack.org/#/c/408247 and 
https://review.openstack.org/#/c/425345/

It's low risk; no API layer changes, mostly just moving code around.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Sukhdev Kapur
On Tue, Jan 24, 2017 at 5:04 PM, Kevin Benton  wrote:

> >I would really like us to discuss this issue head-on and see what is
> missing in Neutron APIs and what would take to make them extensible so that
> vendors do not run around trying to figure out alternative solutions
>
> The Neutron API is already very extensible and that's problematic. Right
> now a vendor can write an out-of-tree service plugin or driver that adds
> arbitrary fields and endpoints to the API that results in whatever behavior
> they want. This is great for vendors because they can directly expose
> features without having to make them vendor agnostic. However, it's
> terrible for operators because it leads to lock-in and terrible for the
> users because it leads to cross-cloud compatibility issues.
>
> For a concrete example of what I mean, take a look at this extension here:
> [1]. This is directly exposing vendor-specific things onto Neutron network
> API payloads. Nobody can build any tooling that depends on those fields
> without being locked into a specific vendor.
>

I do not believe this is a good example. I believe this is for a monolithic
plugin (and probably from the pre-ML2 time frame). If memory serves me right,
there were no specific guidelines at the time. I am sure there are many
other such examples relating to monolithic plugins.
However, I do get your point. Hence the need to have a good look at the
API so that it can provide a way for vendors and operators to expose the
strengths/features of their back-ends in a vendor-agnostic fashion.
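
For readers following along: the extension mechanism being debated lets an
out-of-tree plugin graft arbitrary fields onto core resources with little
more than an attribute map, roughly like this (an illustrative sketch with
made-up vendor names, not the actual Contrail code):

    # Illustrative sketch of a Neutron API extension adding hypothetical
    # vendor-specific attributes to the 'networks' resource via the
    # standard extension attribute map.
    EXTENDED_ATTRIBUTES_2_0 = {
        'networks': {
            'acme:fabric_policy': {'allow_post': True, 'allow_put': True,
                                   'default': None, 'is_visible': True},
            'acme:switch_profile': {'allow_post': True, 'allow_put': False,
                                    'default': None, 'is_visible': True},
        }
    }

Any tooling written against such fields works only on clouds running that
one backend, which is exactly the lock-in concern raised above.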

-Sukhdev



> So what I would like to encourage is bringing API extension work into
> Neutron-lib where we can ensure that the relevant abstractions are in place
> and it's not just a pass-through to a bunch of vendor-specific features. I
> would rather relax our constraint around requiring a reference
> implementation for new extensions in neutron-lib than continue to encourage
> developers to do expose whatever they want with the the existing extension
> framework.
>
> So I'm all for developing new APIs *as a community* to enable NFV use
> cases not supported by the current APIs. However, I don't want to encourage
> or make it easier for vendors to just build arbitrary extensions on top of
> Neutron that expose backend details.
>
> In my view, Neutron should provide a unified API for networking across
> OpenStack clouds, not a platform for writing deployment-specific networking
> APIs.
>
> 1. https://github.com/Juniper/contrail-neutron-plugin/blob/1
> 9ad4bcee4c1ff3bf2d2093e14727866412a694a/neutron_plugin_contr
> ail/extensions/contrail.py#L9-L22
>
> Cheers,
> Kevin Benton
>
> On Tue, Jan 24, 2017 at 3:42 PM, Sukhdev Kapur 
> wrote:
>
>>
>> Ihar and Kevin,
>>
>> As our potential future PTLs, I would like to draw your attention to one
>> of the critical issues regarding Neutron as "the" networking service in
>> OpenStack.
>>
>> I keep hearing off and on that Neutron is not flexible enough to address many
>> networking use cases and hence a new (or additional) networking project is
>> needed. For example, to address the use cases around NFV (rigid resource
>> inter-dependencies).  Another one that keeps popping up is that it is very
>> hard/difficult to add/enhance Neutron API - hence, I picked this one goal
>> called out in Ihar's candidacy.
>>
>> I would really like us to discuss this issue head-on and see what is
>> missing in Neutron APIs and what would take to make them extensible so that
>> vendors do not run around trying to figure out alternative solutions
>>
>> cheers..
>> -Sukhdev
>>
>>
>>
>>
>>> * Explore alternative ways to evolve Neutron API.  Piling up
>>> extensions and allowing third parties to completely change core
>>> resource behaviour is not ideal, and now that api-ref and API
>>> consolidation effort in neutron-lib are closer to completion, we may
>>> have better answers to outstanding questions than during previous
>>> attempts to crack the nut. I would like to restart the discussion some
>>> time during Pike.
>>>
>>
>>
>>
>>
>>>
>>> Thanks for attention,
>>> Ihar
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [neutron] change in argument type for allocate_partially_specified_segment

2017-01-25 Thread Ihar Hrachyshka
On Tue, Jan 24, 2017 at 10:29 PM, Anna Taraday
 wrote:
> Thanks for bringing this up!
>
> I was assuming that from Ocata everyone should switch from usage 'old'
> TunnelTypeDriver to updated one.

I am not sure. We haven't marked the 'old' one with any deprecation
warnings, did we? For Ocata at least, both classes will be available
for use. In Pike, we can look at cleaning up the old class (either
through deprecation warning and removal in Q; or using
NeutronLibImpact process).

>
> Reverting both back to session means reverting the whole refactor; this is
> not in line with the enginefacade work, and as I remember some of the OVO
> patches were waiting for this refactor too.
>
> I think we can duplicate methods, or we can check the type of the argument
> (session or context) and proceed differently. I will push a patch for this
> ASAP.
>

I reviewed the patch, I think it's a good path forward, thanks.
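
For the archives, the duck-typing shim discussed here boils down to
something like the following (a sketch only, not the merged patch):

    def _get_session(context_or_session):
        # New-style callers pass a context that carries a .session
        # attribute; legacy callers still pass the SQLAlchemy session.
        return getattr(context_or_session, 'session', context_or_session)

Each affected method can then accept either argument type for the duration
of the deprecation window.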

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][barbican][octavia]certs don't get deregistered in barbican after lbaas listener delete

2017-01-25 Thread Adam Harwell
I've got this on my list of things to look at -- I don't know if it was you
I was talking with on IRC the other day about this issue, but I'm
definitely aware of it. As soon as we are past the Ocata feature freeze
crunch, I'll take a closer look.

My gut says that we should be calling the delete (which is not a real
delete) when the LB is deleted, and not doing so is a bug, but I'll need to
double check the logic as it has been a while since I touched this.
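
If it does turn out to be a bug, the fix would presumably mirror the
registration on the delete path, along these lines (a sketch with assumed
names and signatures from the mitaka tree, untested):

    # Sketch only: deregister the 'lbaas' consumer that get_cert()
    # registered, once the TLS-terminating listener goes away.
    def _deregister_cert_consumer(cert_manager, listener):
        if listener.default_tls_container_id:
            cert_manager.delete_cert(
                project_id=listener.tenant_id,
                cert_ref=listener.default_tls_container_id,
                resource_ref=cert_manager.get_service_url(
                    listener.loadbalancer_id))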

--Adam (rm_work)

On Mon, Jan 23, 2017, 18:38 Jiahao Liang (Frankie) 
wrote:

> Hi community,
>
> I created a loadbalancer with a listener with the protocol set to
> "TERMINATED_HTTPS" and specified --default-tls-container-ref with a ref of
> a secret container from Barbican.
> However, after I deleted the listener, lbaas wasn't removed from the
> barbican container's consumer list.
>
> $openstack secret container get
> http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4
>
> ++-+
> | Field  | Value
> |
>
> ++-+
> | Container href |
> http://192.168.20.24:9311/v1/containers/453e8905-d42b-43bd-9947-50e3acf499f4
>|
> | Name   | tls_container2
>  |
> | Created| 2017-01-19 12:44:07+00:00
> |
> | Status | ACTIVE
>  |
> | Type   | certificate
> |
> | Certificate|
> http://192.168.20.24:9311/v1/secrets/bfc2bf01-0f23-4105-bf09-c75839b6b4cb
>   |
> | Intermediates  | None
>  |
> | Private Key|
> http://192.168.20.24:9311/v1/secrets/c85d150e-ec84-42e0-a65f-9c9ec19767e1
>   |
> | PK Passphrase  | None
>  |
> | *Consumers  | {u'URL':
> u'lbaas://RegionOne/loadbalancer/5e7768b9-7aa9-4146-8a71-6291353b447e',
> u'name': u'lbaas'}*
>
>
> I went through the neutron-lbaas code base. We did register consumer
> during the creation of a "TERMINATED_HTTPS" listener in [1]. But we somehow
> don't deregister it during the deletion in [2].
> [1]:
> https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L642
> get_cert() registers lbaas as a consumer for the barbican cert_manager.  (
> https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L177
> )
> [2]:
> https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L805
> we probably need to call delete_cert from barbican cert_manager to remove
> the consumer. (
> https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/cert_manager/barbican_cert_manager.py#L187
> )
>
>
> My questions are:
> 1. is that a bug?
> 2. or is it an intentional design, letting the vendor driver handle it?
>
> It looks more like a bug to me.
>
> Any thoughts?
>
>
> Best,
> Jiahao
> --
>
> *梁嘉豪/Jiahao LIANG (Frankie) *
> Email: gzliangjia...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] PTL Candidacy

2017-01-25 Thread Ihar Hrachyshka
On Tue, Jan 24, 2017 at 12:26 PM, Morales, Victor
 wrote:
> Given the latest issues related to memory consumption [1] in CI jobs,
> I’m just wondering if you have a plan to deal with and/or improve it in Neutron.

AFAIU the root cause is still not clear, and we don't know if it's
Neutron or job setup that triggers the OOM. I think we all see that
the gate is not healthy lately (it's not just tempest, functional
failure rate is also pretty bad). We need a squad team with clear
ownership for failure tracking to get back to normal.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Core team changes / proposing Dharmendra Kushwaha

2017-01-25 Thread Sripriya Seetharam
+1 to Dharmendra, great addition.

Thank you very much Stephen for your contributions.

-Sripriya

On 1/24/17, 4:58 PM, "Sridhar Ramaswamy"  wrote:

Tackers,

I'd like to propose following changes to the Tacker core team.

Stephen Wong

After being associated with Tacker project from its genesis, Stephen
Wong (irc: s3wong) has decided to step down from the core-team. I
would like to thank Stephen for his contribution to Tacker,
particularly for his help navigating the initial days splitting off
Neutron and in re-launching this project in Vancouver summit for
TOSCA-based NFV Orchestration. His recent effort in writing the SFC
driver to support VNF Forwarding Graph is much appreciated. Thanks
Stephen!

Dharmendra Kushwaha

It gives me great pleasure to propose Dharmendra (irc:  dkushwaha) to
join the Tacker core team. Dharmendra's contributions to tacker
started in Jan 2016. He is an active contributor across the board [1]
in bug fixes, code cleanups and, most recently, as a lead author of
the Network Services Descriptor blueprint.

Also, Dharmendra recently stepped up to take care of bug triage for
Tacker. There is an uptick in deployment issues reported through LP
[2] and in irc - which itself is a good healthy thing. Now we need to
respond by fixing the issues reported promptly. Dharmendra’s help will
be immensely valuable here.

Existing cores - please vote +1 / -1.

thanks,
Sridhar

[1] http://stackalytics.com/?module=tacker-group&user_id=dharmendra-kushwaha&metric=marks
[2] https://answers.launchpad.net/tacker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Sukhdev Kapur
Folks,

This thread has gotten too long and hard to follow.
It is clear that we should discuss/address this.
My suggestion is that we organize a session at the Atlanta PTG meeting and
discuss this.

I am going to add this to the Neutron etherpad - should this be included in
any other session as well?

-Sukhdev




On Tue, Jan 24, 2017 at 12:33 AM, Ihar Hrachyshka 
wrote:

> Hi team,
>
> I would like to propose my PTL candidacy for Pike.
>
> Some of you already know me. If not, here is my brief OpenStack bio. I
> joined the community back in Havana, and managed to stick around till
> now. During that time, I filled several project roles, like serving as a
> Neutron liaison of different kinds (stable, oslo, release), fulfilling
> my Neutron core reviewer duties, taking part in delivering some
> longstanding features, leading QoS and upgrades subteams, as well as
> being part of Neutron Drivers team. I also took part in miscellaneous
> cross project efforts.
>
> I think my experience gives me a broad perspective on how the OpenStack
> community and more specifically Networking works, and what is expected
> from its PTL.
>
> First, let me describe my vision of PTL role.
>
> * It's not only about technical intricacies. It's also about people
> and procedures that make the project run smoothly day by day. The role
> of PTL is in empowering other team members, keeping track of any
> roadblocks and inconveniences that the team have, and working on
> streamlining workflow.
>
> * It's about delegation. We should make it a habit to look at the list
> of potential candidates for core membership and other leadership and
> administrative positions not just in times we are short on existing
> members.
>
> * PTL should be transparent and replaceable. I will work with outgoing
> PTL to capture oral wisdom and state of affairs into a publicly
> accessible project dashboard, and I will continue sharing such
> information with the team on constant basis. The project dashboard
> will give you answers to questions like: what's the project direction?
> what are current priorities, and where does each stand?  what's on PTL
> and Drivers' mind? which cross-project and governance initiatives are
> currently worked on? etc.
>
> As PTL, I'd like to focus on the following things:
>
> * Speed up the RFE/spec process. We should manage RFE/spec review
> pipeline in such a way so that initiatives that are given a green
> light on writing up design details get timely feedback and can get
> past spec process in reasonable time.  At the same time we should be
> honest to the community not to accept proposals that have high risk to
> fall through cracks due to lack of manpower. I will encourage usage of
> reviewday and other tools to keep focus of the team. We will mull over
> the idea of virtual code sprints for high demand topics.
>
> * Continue Stadium program without drastic changes of direction.
> Thanks to Armando, we filled the Stadium with tangible meaning. I want
> us to build on top of that foundation to drive consistency and high
> quality standards between participating projects.
>
> * Support Marketplace rework. With huge number of drivers and plugins
> available for Neutron, it became hard for operators to pick the right
> one and for vendors to advertise their features. There is strong
> vendor interest to improve situation there. We should boost Feature
> Classification work, and integrate it with how third party CI systems
> report test results so that they become useful for Marketplace.
>
> * Introduce Gate Keeper role. We've recently made significant progress
> in how we deal with the gate: we expanded coverage to new types of jobs,
> we utilize Grafana and other community tools, and we review gate-failure
> tagged bugs during weekly meetings. Sadly, sometimes the gate becomes
> unstable, and it slows down work progress for everyone.  In such
> cases, it's all about addressing the issue in a timely manner. For that matter, I
> will work with the team on defining a new Gate Keeper role that would
> help prioritize current gate issues, as well as proactively work
> with the team on gate instability vectors. I believe clear ownership
> is the key.
>
> * Make centralized L3 legacy indeed. We have had DVR and HA in tree for
> quite some time. Both technologies are now mature enough, and are no
> longer mutually exclusive. Sadly, they are still second-class citizens
> in our own gate.  My belief is we should give users a clear signal
> that old L3 is indeed legacy by switching our jobs to DVR+HA where
> applicable.  I am sure that will surface some previously unknown
> issues, but we'll be ready to tackle them.  While it's never black or
> white, I suggest we prioritize this work over adding new major L3
> features.
>
> * Support existing maintenance initiatives. I'd like us to close the
> neutron-lib saga in Pike, and consider making the neutron-lib switch a
> requirement for all Stadium projects starting from Queens. We should
> also close OSC transition 

Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Ihar Hrachyshka
On Wed, Jan 25, 2017 at 7:45 AM, Kevin Benton  wrote:
> LBaaS is a little special since Octavia will have its own API endpoint
> entirely, which they will evolve on their own. The other spun-out projects
> (e.g. VPNaaS) will have the API defined in neutron-lib[1].

In a way, VPNaaS is also special, because it's an out-of-stadium repo
that nevertheless brings its own full blown API definition. We will
need to sort that out.

I see that you suggest we should work on bringing VPNaaS back into the
stadium during Pike. While I am not confident that it can happen in a
single cycle (plenty of issues with the repo were identified
during the latest stadium assessment), I agree that, assuming we want that
API to exist in the future, it makes sense to bring it back in.
Especially if in the future we want to programmatically enforce single
source of API truth.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Sukhdev Kapur
Folks, this is a great discussion. I hope this leads us to some good
consensus and direction :-)
I would suggest that we discuss this in the upcoming PTG meeting as well.


On Wed, Jan 25, 2017 at 5:20 AM, Kevin Benton  wrote:

> >So I'm not sure that Kevin and Thierry's answers address Sukhdev's point.
>
> I stated that I am happy to develop new APIs in Neutron. "So I'm all for
> developing new APIs *as a community*"...
>

+1


>
> The important distinction I am making is that we can make new APIs (and we
> do with routed networks as you mentioned, VLAN aware VMs, etc), but I don't
> want to see the project just become a framework to make it even easier than
> it is to define an arbitrary networking API.
>

There is no such thing as an arbitrary API. Much like one person's trash
being another person's treasure, nobody loves to create arbitrary APIs - there
are genuine needs behind them. Sometimes we fail to understand requirements, other
times the requirements are not articulated clearly, which can lead to
the impression that arbitrary things are being added.



> >But I think that the point that Sukhdev raised - about other networking
> projects being suggested because of Neutron being perceived as not flexible
> enough
>
> I'm explicitly stating that if someone wants Neutron to become more
> flexible to develop arbitrary APIs that diverge across deployments even
> more, that's not something I'm going to support. However, making it
> flexible for operators/users by adding new vendor-agnostic APIs is
> something I will encourage.
>

> The reason I am stressing that distinction is because we have vendors that
> want something like Gluon that allows them to define new arbitrary APIs
> without upstreaming anything or working with the community to standardize
> anything.
>
> I understand that may be useful for some artisanal NFV workloads, but
> that's not the direction I want to take Neutron.
>

The OpenStack community consists of vendors and operators/users working
together to facilitate the adoption of newer technologies as they evolve. As newer
workloads/technologies evolve, the need to orchestrate them requires
flexibility in the API. Labeling them as arbitrary APIs (just because
they do not fall into the traditional L2/L3 networking model) is a harsh
characterization.



> Flexibility for operators/users = GOOD
> Flexibility for vendor API injection = BAD
>

As I read/understand more about Gluon, it is being pushed by both
Operators/Users and Vendors. I wonder which one is GOOD and which one is
BAD :-):-)

cheers..
-Sukhdev




>
> On Wed, Jan 25, 2017 at 4:55 AM, Neil Jerram  wrote:
>
>> On Wed, Jan 25, 2017 at 10:20 AM Thierry Carrez 
>> wrote:
>>
>>> Kevin Benton wrote:
>>> > [...]
>>> > The Neutron API is already very extensible and that's problematic.
>>> Right
>>> > now a vendor can write an out-of-tree service plugin or driver that
>>> adds
>>> > arbitrary fields and endpoints to the API that results in whatever
>>> > behavior they want. This is great for vendors because they can directly
>>> > expose features without having to make them vendor agnostic. However,
>>> > it's terrible for operators because it leads to lock-in and terrible
>>> for
>>> > the users because it leads to cross-cloud compatibility issues.
>>>
>>> +1000 on this being a major problem in Neutron. Happy to see that you
>>> want to work on trying to reduce it.
>>
>>
>> The Neutron API is a model of what forms of connectivity can be
>> expressed, between instances and the outside world.  Once that model is
>> chosen, it is inevitably (and simultaneously):
>>
>> (a) overconstraining - in other words, there will be forms of
>> connectivity that someone could reasonably want, but that are not allowed
>> by the model
>>
>> (b) underconstraining - in other words, there will be nuances of
>> behaviour, delivered by a particular implementation, that are arguably
>> within what the model allows, but (as we're talking about semantics) it
>> would really be better to revise the API so that it can explicitly express
>> those nuances.
>>
>> Sometimes - since the semantics of the Neutron API are not precisely
>> documented - it's not clear which of these situations one is in.  But I
>> think that the point that Sukhdev raised - about other networking projects
>> being suggested because of Neutron being perceived as not flexible enough -
>> is to do with (a); whereas the points that Kevin and Thierry responded with
>> - ways that the API is already _too_ flexible - are to do with (b).  So I'm
>> not sure that Kevin and Thierry's answers address Sukhdev's point.
>>
>> It's possible for an API to have (a) and (b) problems simultaneously, and
>> to make progress on addressing them both.  In Neutron's case, a major
>> example of (a) has been the routed networks work, which (among other
>> things) generalized Neutron's network concept from being something that
>> always provides L2 adjacency between its ports, to something that 

Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Ihar Hrachyshka
Catching up on the thread, lots of good thoughts.

I don't think there is disagreement here around how Networking API
should evolve in terms of vendor extensions. As Kevin suggested, we
don't want to advertise API extensibility without Neutron team
supervision.

One of the reasons behind the current api-ref effort - which includes moving
API definitions from all stadium projects into neutron-lib, and
vouching for new API changes within the scope of this single repo - is to
transform Neutron from a shallow API controller that allows plugging in
opaque API modules into a single API with consistent behavior, review
practices and standards.
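To make that more concrete, an API definition in neutron-lib is just a
reviewed Python module describing resources and their attributes. A
minimal sketch (the alias, resource and attribute names below are
invented for illustration; real definitions live under
neutron_lib/api/definitions and carry a few more required constants):

    # Hypothetical API definition sketch; names are illustrative only.
    ALIAS = 'port-hint'
    IS_SHIM_EXTENSION = False
    NAME = 'Port hint'
    DESCRIPTION = 'Adds an illustrative hint attribute to ports.'
    RESOURCE_ATTRIBUTE_MAP = {
        'ports': {
            'hint': {'allow_post': True,
                     'allow_put': True,
                     'default': '',
                     'validate': {'type:string': 255},
                     'is_visible': True},
        },
    }

Reviewing dictionaries like this in one repo is what makes consistent
behavior and naming enforceable.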

I would go as far as to say that once that effort is complete, we
should start looking for ways to discourage (if not forbid) API
pluggability not proxied through the proper in-neutron-lib review process.
It's one thing to allow plugging in different networking backends that
all implement the same Networking API behavior; it's completely
different to allow those plugins to change the Networking API to the point
where you can't even tell if it's an open API guaranteed to work in other
environments.

Yes, there is concern that the Networking API doesn't move as quickly as
some vendors may want. We can address that by streamlining the review
process for new API definitions. As long as an API proposal makes
sense not just for one specific backend, and is defined in general
enough terms, I think we can go with expedited review procedure.
Indeed we should probably reconsider how we enforce reference
implementation completion before an extension is allowed in.

Ihar

On Tue, Jan 24, 2017 at 3:42 PM, Sukhdev Kapur  wrote:
>
> Ihar and Kevin,
>
> As our potential future PTLs, I would like to draw your attention to one of
> the critical issue regarding Neutron as "the" networking service in
> OpenStack.
>
> I keep hearing off and on that Neutron is not flexible enough to address many
> networking use cases and hence a new (or additional) networking project is
> needed. For example, to address the use cases around NFV (rigid resource
> inter-dependencies).  Another one keeps popping up is that it is very
> hard/difficult to add/enhance Neutron API - hence, I picked this one goal
> called out in Ihar's candidacy.
>
> I would really like us to discuss this issue head-on and see what is missing
> in Neutron APIs and what it would take to make them extensible so that vendors
> do not run around trying to figure out alternative solutions
>
> cheers..
> -Sukhdev
>
>
>
>>
>> * Explore alternative ways to evolve Neutron API.  Piling up
>> extensions and allowing third parties to completely change core
>> resource behaviour is not ideal, and now that api-ref and API
>> consolidation effort in neutron-lib are closer to completion, we may
>> have better answers to outstanding questions than during previous
>> attempts to crack the nut. I would like to restart the discussion some
>> time during Pike.
>
>
>
>
>>
>>
>> Thanks for attention,
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Propose Dharini Chandrasekar for Glance core

2017-01-25 Thread Chandrasekar, Dharini
My sincere thanks to every Glance core. You all have inspired
me right from day one. I hope to perform my duties as a Glance
Core to the best of my abilities and help make Glance great.

Thanks,
Dharini Chandrasekar.






On 1/24/17, 07:36, "Brian Rosmaita"  wrote:

>I'd like to propose Dharini Chandrasekar (dharinic on IRC) for Glance
>core.  She has been an active reviewer and contributor to the Glance
>project during the Newton and Ocata cycles, has contributed to other
>OpenStack projects, and has represented Glance in some interactions with
>other project teams.  Additionally, she recently jumped in and saw
>through to completion a high priority feature for Newton when the
>original developer was unable to continue working on it.  Plus, she's
>willing to argue with me (and the other cores) about points of software
>engineering.  She will be a great addition to the Glance core reviewers
>team.
>
>If you have any concerns, please let me know.  I plan to add Dharini to
>the core list after this week's Glance meeting.
>
>thanks,
>brian
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Core team changes / proposing Dharmendra Kushwaha

2017-01-25 Thread Bharath Thiruveedula
+1 for both.

Regards

Bharath T
Imaginea Technologies Inc.

On Wed, Jan 25, 2017 at 8:19 PM, HADDLETON, Robert W (Bob) <
bob.haddle...@nokia.com> wrote:

> +1
>
> Thanks Stephen!  And welcome aboard Dharmendra!
>
> Bob
>
>
> On 1/24/2017 6:58 PM, Sridhar Ramaswamy wrote:
>
>> Tackers,
>>
>> I'd like to propose following changes to the Tacker core team.
>>
>> Stephen Wong
>>
>> After being associated with the Tacker project since its genesis, Stephen
>> Wong (irc: s3wong) has decided to step down from the core-team. I
>> would like to thank Stephen for his contribution to Tacker,
>> particularly for his help navigating the initial days splitting off
>> Neutron and in re-launching this project in Vancouver summit for
>> TOSCA-based NFV Orchestration. His recent effort in writing the SFC
>> driver to support VNF Forwarding Graph is much appreciated. Thanks
>> Stephen!
>>
>> Dharmendra Kushwaha
>>
>> It gives me great pleasure to propose Dharmendra (irc:  dkushwaha) to
>> join the Tacker core team. Dharmendra's contributions to tacker
>> started in Jan 2016. He is an active contributor across the board [1]
>> in bug fixes, code cleanups and, most recently, as a lead author of
>> the Network Services Descriptor blueprint.
>>
>> Also, Dharmendra recently stepped up to take care of bug triage for
>> Tacker. There is an uptick in deployment issues reported through LP
>> [2] and in irc - which itself is a good healthy thing. Now we need to
>> respond by fixing the issues reported promptly. Dharmendra’s help will
>> be immensely valuable here.
>>
>> Existing cores - please vote +1 / -1.
>>
>> thanks,
>> Sridhar
>>
>> [1] http://stackalytics.com/?module=tacker-group&user_id=dharmendra-kushwaha&metric=marks
>> [2] https://answers.launchpad.net/tacker
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo-heat-templates, vendor plugins and the new hiera hook

2017-01-25 Thread Giulio Fidente
On 01/25/2017 04:32 PM, Steven Hardy wrote:
> On Wed, Jan 25, 2017 at 02:59:42PM +0200, Marios Andreou wrote:
>> Hi, as part of the composable upgrades workflow shaping up for Newton to
>> Ocata, we need to install the new hiera hook that was first added with
>> [1] and disable the old hook and data as part of the upgrade
>> initialization [2]. Most of the existing hieradata was ported to use the
>> new hook in [3]. The deletion of the old hiera data is necessary for the
>> Ocata upgrade, but it also means it will break any plugins still using
>> the 'old' os-apply-config hiera hook.
>>
>> In order to be able to upgrade to Ocata any templates that define hiera
>> data need to be using the new hiera hook and then the overcloud nodes
>> need to have the new hook installed (installing is done in [2] as a
>> matter of necessity, and that is what prompted this email in the first
>> place). I've had a go at updating all the plugin templates that are
>> still using the old hiera data with a review at [4] which I have -1 for now.
>>
>> I'll try and reach out to some individuals more directly as well but
>> wanted to get the review at [4] and this email out as a first step,
> 
> Thanks for raising this marios, and yeah it's unfortunate as we've had to
> do a switch from the old to the new hiera hook this release without a
> transition period where both work.
> 
> I think we probably need to do the following:
> 
> 1. Convert anything in t-h-t refering to the old hook to the new (seems you
> have this in progress, we need to ensure it all lands before ocata)

was working on this as well today for Ceph

https://review.openstack.org/#/c/425288/

thanks marios
-- 
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Preparations for Ocata release in RDO

2017-01-25 Thread Emilien Macchi
On Wed, Jan 25, 2017 at 12:13 PM, Javier Pena  wrote:
>> Hi,
>>
>> In RDO we are preparing for the upcoming Ocata release. This means
>> we'll create a new RDO Trunk builder "centos-ocata" in the next few
>> days (It will be ready for next week). This builder will get content
>> from stable/ocata branches of projects as they become available and
>> fallback to master for those not branched yet. The repos created by
>> this worker will be published under
>> https://trunk.rdoproject.org/centos7-ocata/ (which will no longer be
>> a link to https://trunk.rdoproject.org/centos7-master/).
>>
>
> Hi all,
>
> The centos-ocata builder has been successfully bootstrapped, and its 
> repositories are available at https://trunk.rdoproject.org/centos7-ocata/

Ok, so here's the plan:

- release tripleoclient on Thursday afternoon (US east-coast)
- in the meantime, ask infra to land https://review.openstack.org/#/c/424622/
- once we have stable/ocata on tripleoclient & the infra patch merged,
run some tests in TripleO CI against stable/ocata.

From there, evaluate what doesn't work (if anything), and communicate
with RDO folks so we can fix it during Friday.
I'll take care of all items in the plan, just keep in mind I'll ping
you when I have updates ;-)

> Regards,
> Javier
>
>> According to the feedback received during the last release cycle, we
>> are changing the way we do the transition this time so that
>> repositories content will be more accurate to what is expected at any
>> point in the cycle. Repos under centos-master will allways follow
>> master branch and centos-ocata will get stable/ocata as soon as repos
>> get the branch created.
>>
>> This has some implications from the upstream projects point of view:
>>
>> - Projects using repositories under
>> https://trunk.rdoproject.org/centos7-ocata/ will start getting
>> packages for content delivered in stable/ocata branches instead of
>> master. Repos in https://trunk.rdoproject.org/centos7-master/ will
>> keep getting packages for content in master branch.
>> - Anyone currently pinning to a specific hash repo under
>> https://trunk.rdoproject.org/centos7-ocata/ should move it to use
>> https://trunk.rdoproject.org/centos7-master/ as soon as possible. Note
>> that pinning to a specific hash is not recommended practice.
>> - With current job definitions, link current-tripleo promoted by
>> upstream TripleoCI will promote repos with content from master
>> branches and not stable/ocata after creating the ocata builder. We
>> think this is not the desired situation during RC period and we must
>> keep promotion of current-tripleo link for stable/ocata until
>> cycle-trailing projects are tagged. This will require some changes in
>> both Tripleo-CI and RDO-CI, please let us know your thoughts so that we
>> can agree on the best way to implement this and start working on it.
>>
>> Best regards,
>>
>> Alfredo
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Officially support Python 3.5 in Pike

2017-01-25 Thread Emilien Macchi
On Tue, Jan 24, 2017 at 4:39 PM, Emilien Macchi  wrote:
> OpenStack community decided to officially support Python 3.5 by the
> end of Pike cycle:
> https://governance.openstack.org/tc/goals/pike/python35.html
>
> To track this work in TripleO, I created a blueprint:
> https://blueprints.launchpad.net/tripleo/+spec/support-python-35
>
> I'm also tracking the work completion in the governance repository:
> https://review.openstack.org/#/c/424857/
>
> I'll start the evaluation of work that needs to be done and document
> it in the previous link.
> Anyone volunteer to help on $topic is very welcome, ping me on IRC or
> here by e-mail.
>

I had a chat with Victor Stinner, who has been involved in porting
OpenStack to Python 3 for years now.
Here's the list of actions:

- we need to evaluate if all TripleO projects (written in Python) have
unit test jobs for Python 3.
- for those that do not, create them (non-voting) and see how they
work; if there are blockers, fix them (see the tox sketch after this list).
- investigate how to run integration tests in Python3 (probably for
tripleoclient & tripleo-common as a start).
- investigate if all TripleO projects are packaged in RDO with Python 3.
- update https://wiki.openstack.org/wiki/Python3 with TripleO projects
& their status in Python 3.
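For the unit test piece, the per-project change is typically just a tox
environment plus a non-voting check job in project-config. A minimal
sketch of the tox side, assuming the common pbr/testr-based layout of
our projects (the exact commands may differ per repo, and deps are
inherited from the base [testenv]):

    [testenv:py35]
    basepython = python3.5
    commands = python setup.py test --slowest --testr-args='{posargs}'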

Because this is some work, I propose we create a squad:
https://review.openstack.org/425294

Anyone is welcome to join & participate in this effort.
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Preparations for Ocata release in RDO

2017-01-25 Thread Javier Pena
> Hi,
> 
> In RDO we are preparing for the upcoming Ocata release. This means
> we'll create a new RDO Trunk builder "centos-ocata" in the next few
> days (It will be ready for next week). This builder will get content
> from stable/ocata branches of projects as they become available and
> fallback to master for those not branched yet. The repos created by
> this worker will be published under
> https://trunk.rdoproject.org/centos7-ocata/ (which will no longer be
> a link to https://trunk.rdoproject.org/centos7-master/).
> 

Hi all,

The centos-ocata builder has been successfully bootstrapped, and its 
repositories are available at https://trunk.rdoproject.org/centos7-ocata/

Regards,
Javier

> According to the feedback received during the last release cycle, we
> are changing the way we do the transition this time so that
> repository content will more accurately reflect what is expected at any
> point in the cycle. Repos under centos-master will always follow the
> master branch and centos-ocata will get stable/ocata as soon as repos
> get the branch created.
> 
> This has some implications from the upstream projects point of view:
> 
> - Projects using repositories under
> https://trunk.rdoproject.org/centos7-ocata/ will start getting
> packages for content delivered in stable/ocata branches instead of
> master. Repos in https://trunk.rdoproject.org/centos7-master/ will
> keep getting packages for content in master branch.
> - Anyone currently pinning to a specific hash repo under
> https://trunk.rdoproject.org/centos7-ocata/ should move it to use
> https://trunk.rdoproject.org/centos7-master/ as soon as possible. Note
> that pinning to a specific hash is not recommended practice.
> - With current job definitions, link current-tripleo promoted by
> upstream TripleoCI will promote repos with content from master
> branches and not stable/ocata after creating the ocata builder. We
> think this is not the desired situation during RC period and we must
> keep promotion of current-tripleo link for stable/ocata until
> cycle-trailing projects are tagged. This will require some changes in
> both Tripleo-CI and RDO-CI, please let us know your thoughts so that we
> can agree on the best way to implement this and start working on it.
> 
> Best regards,
> 
> Alfredo
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] placement api request analysis

2017-01-25 Thread Chris Dent


I've started looking into what kind of request load the placement
API can expect when both the scheduler and the resource tracker are
talking to it. I think this is important to do now before we have
things widely relying on this stuff so we can give some reasonable
advice on deployment options and expected traffic.

I'm working with a single node devstack, which should make the math
nice and easy.

Unfortunately, while doing this it really ended up being more of an audit of
where the resource tracker is doing more than it ought to. What
follows ends up being a rambling exploration of areas that _may_ be
wrong.

I've marked paragraphs that have things that maybe ought to change
with #B. It appears that the resource tracker is doing
a lot of extra work that it doesn't need to do (even before the
advent of the placement API). There's already one fix in progress
(for B2) but the others need some discussion as I'm not sure of the
ramifications. I'd like some help deciding what's going on before I
make random bug reports.

Before Servers
==

When the compute node starts it makes two requests to create the
resource provider that represents that compute, at which point it
also requests the aggregates for that resource provider, to update
its local map of aggregate associations.

#B0
It then updates inventory for the resource provider, twice; the
first one is a conflict (probably because the generation is out of
whack[1]).
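For context, inventory updates carry the resource provider generation as
an optimistic concurrency guard, and a stale generation yields a 409
that forces a re-read and retry - which is consistent with the paired
requests. A sketch of the shape of such an update (values illustrative,
not captured from my logs):

    PUT /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories
    {
        "resource_provider_generation": 0,
        "inventories": {
            "VCPU": {"total": 8, "allocation_ratio": 16.0, "max_unit": 8},
            "MEMORY_MB": {"total": 16384, "allocation_ratio": 1.5,
                          "max_unit": 16384},
            "DISK_GB": {"total": 100, "allocation_ratio": 1.0,
                        "max_unit": 100}
        }
    }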

After that every 60s or so, five requests are made:

GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/aggregates
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/allocations
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/aggregates
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories

These requests are returning the same data each time (so far).

The request to get aggregates happens twice on every cycle, because
it happens each time we ensure the resource provider is present in
our local map of resource providers. Aggregates are checked each time
because otherwise there would be no clean way for an operator to
associate aggregates and have them quickly picked up.

The request to inventories is checking if inventory has
changed. This is happening as a result of the regular call to
'update_available_resource' passing through the _update method.

#B1
That same method is also calling _init_compute_node, which will
_also_ think about updating the inventory and thus do the aggregates
check from _ensure_resource_provider. That seems redundant. Perhaps
we should only call update_resource_stats from _update and not from
_init_compute_node as they are both called from the same method in
the resource tracker.

That same method also regularly calls '_update_usage_from_instances'
which calls 'remove_deleted_instances' with a potentially empty list
of instances[2]. That method gets the allocations for this compute
node.

So before we've added any VMs we're at 5000 requests per minute in a
1000 node cluster.

#B2
Adding in the fix at https://review.openstack.org/#/c/424305/
reduces a lot of that churn by avoiding an update from _update when
not necessary, reducing to three requests every 60s when there are
no servers. The remaining requests are from the call to
_init_compute_node at #B1 above.

Creating a Server
=

When we create a server there are seven total requests, with these
involved with the actual instance:

GET /placement/resource_providers?resources=VCPU%3A1%2CMEMORY_MB%3A512%2CDISK_GB%3A1
GET /placement/allocations/717b8dcc-110c-4914-b9c1-c04433267577
PUT /placement/allocations/717b8dcc-110c-4914-b9c1-c04433267577

(allocations are done by comparing with what's there, if anything)
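The PUT body at this point is keyed by the consumer (instance) UUID in
the URL and lists per-provider resources; roughly as follows (the
resource provider UUID below is illustrative, reusing the compute node
from earlier):

    PUT /placement/allocations/717b8dcc-110c-4914-b9c1-c04433267577
    {
        "allocations": [
            {
                "resource_provider":
                    {"uuid": "0e33c6f5-62f3-4522-8f95-39b364aa02b4"},
                "resources": {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 1}
            }
        ]
    }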

The others are what _update does.

After that the three requests grows to four per 60s:

GET /placement/resource_providers/8635a519-eac8-43b2-9bf0-aba848b328a7/aggregates
GET /placement/resource_providers/8635a519-eac8-43b2-9bf0-aba848b328a7/inventories
GET /placement/allocations/c4b73292-3731-4f25-b102-1bd176f4bd9b
GET /placement/resource_providers/8635a519-eac8-43b2-9bf0-aba848b328a7/allocations

#B3
The new GET to /placement/allocations is happening when the
resource tracker calls _update_usage_from_instance, which is always
being called because is_new_instance is always true in that method,
even when the instance is not "new". This is happening because the
tracked_instances dict is _always_ getting cleared before
_update_usage_from_instance is being called. Which is weird because
it appears that it is that method's job to update tracked_instances.
If I remove the clear() the get on /placement/allocations goes away.
But I'm not sure what else this will break. The addition of that line
was a long time ago, in this change (I think):
https://review.openstack.org/#/c/13182/

With the clear() gone the calls in 

Re: [openstack-dev] [vote][kolla] deprecation for Debian distro support

2017-01-25 Thread Christian Berendt
As discussed in today's team meeting [0]:

* the vote is closed
* the deprecation is delayed

[0] http://eavesdrop.openstack.org/meetings/kolla/2017/kolla.2017-01-25-16.01.log.html

Christian.

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move of openstack-salt project

2017-01-25 Thread Davanum Srinivas
Filip,

Thanks for the announcement. Can you please follow the steps below to
retire the projects in the openstack git repos?
http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

Thanks,
Dims

On Wed, Jan 25, 2017 at 11:39 AM, Filip Pytloun  wrote:
> Hello,
>
> I would like to announce migration of openstack-salt to join remaining
> formulas of the ecosystem.
> Since today, openstack-salt and other formulas are living at
> github.com/salt-formulas.
>
> In the past few years this ecosystem has grown to more than 90 formulas
> suitable for deployment of anything from base OS, CI/CD systems, IDM up
> to complex systems like Openstack, Kubernetes and more.
>
> With this growth we believe that unifying the location of all salt formulas
> respecting the same concepts and direction will make development and usage
> simpler. Openstack formulas are only a small subset of this ecosystem, so
> it makes perfect sense to move them to join with the rest under one
> project.
>
> Here are links to project resources, more are going to come soon..
>
> Github: https://github.com/salt-formulas
> Launchpad: https://launchpad.net/salt-formulas
> IRC: #salt-formulas @ irc.freenode.net
>
> Feel free to join and idle/discuss at IRC channel :-)
>
> Filip
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Move of openstack-salt project

2017-01-25 Thread Filip Pytloun
Hello,

I would like to announce migration of openstack-salt to join remaining
formulas of the ecosystem.
Since today, openstack-salt and other formulas are living at
github.com/salt-formulas.

In the past few years this ecosystem has grown to more than 90 formulas
suitable for deployment of anything from base OS, CI/CD systems, IDM up
to complex systems like Openstack, Kubernetes and more.

With this growth we believe that unifying the location of all salt formulas
respecting the same concepts and direction will make development and usage
simpler. Openstack formulas are only a small subset of this ecosystem, so
it makes perfect sense to move them to join with the rest under one
project.

Here are links to project resources, more are going to come soon..

Github: https://github.com/salt-formulas
Launchpad: https://launchpad.net/salt-formulas
IRC: #salt-formulas @ irc.freenode.net

Feel free to join and idle/discuss at IRC channel :-)

Filip


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] Thursday 2nd February - Bug Day!

2017-01-25 Thread James Page
Hi Team

Just a quick reminder that next Thursday marks our second bug day for the
year.

Please focus on triage and resolution of bugs across the openstack charms -
the new bugs URL is in the topic in #openstack-charms on Freenode IRC.

Happy bug hunting!

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-25 Thread Dan Smith
> Update on that agreement: I made the necessary modification in the
> proposal [1] to not verify the filters. We now send a request to the
> Placement API by introspecting the flavor and we get a list of potential
> destinations.

Thanks!

> When I began doing that modification, I knew there was a functional test
> about server groups that needed modifications to match our agreement. I
> consequently made that change in a separate patch [2] as a
> prerequisite for [1].
> 
> I then spotted a problem that we didn't identify when discussing:
> when checking a destination, the legacy filters for CPU, RAM and disk
> don't verify the maximum capacity of the host, they only multiply the
> total size by the allocation ratio, so our proposal works for them.
> Now, when using the placement service, it fails because somewhere in the
> DB call needed for returning the destinations, we also verify a specific
> field named max_unit [3].
> 
> Consequently, the proposal we agreed on does not have feature parity between
> Newton and Ocata. If you follow our instructions, you will still get
> different results from a placement perspective between what was in Newton
> and what will be in Ocata.

To summarize some discussion on IRC:

The max_unit field limits the maximum size of any single allocation and
is not scaled by the allocation_ratio (for good reason). Right now,
computes report a max_unit equal to their total for CPU and RAM
resources. So the different behavior here is that placement will not
choose hosts where the instance would single-handedly overcommit the
entire host. Multiple instances still could, per the rules of the
allocation-ratio.
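In rough pseudo-Python, the check placement does per resource class looks
like the following (a simplified sketch that ignores reserved, min_unit
and step_size):

    def can_allocate(requested, total, allocation_ratio, max_unit, used):
        # A single allocation may never exceed max_unit; max_unit is
        # deliberately not scaled by the allocation ratio.
        if requested > max_unit:
            return False
        # Overcommit across multiple allocations is governed by the ratio.
        return used + requested <= total * allocation_ratio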

The consensus seems to be that this is entirely sane behavior that the
previous core and ram filters weren't considering. If there's a good
reason to allow computes to report that they're willing to take a
larger-than-100% single allocation, then we can make that change later,
but the justification seems lacking at the moment.

> Technically speaking, the functional test is a canary bird, telling you
> that you now get NoValidHosts where it was working previously.

My opinion, which is shared by several other people, is that this test
is broken. It's trying to overcommit the host with a single instance,
and in fact, it's doing it unintentionally for some resources that just
aren't checked before the move to placement. Changing the test to
properly reflect the resources on the host should be the path forward
and Sylvain is working on that now.

The other concern that was raised was that since CoreFilter is not
necessarily enabled on all clouds, cpu_allocation_ratio is not being
honored on those systems today. Moving to placement with ocata will
cause that value to be used, which may be incorrect for certain
overly-committed clouds which had previously ignored it. However, I
think we need not be too concerned as the defaults for these values are
16x overcommit for CPU and 1.5x overcommit for RAM. Those are probably
on the upper limit of sane for most environments, but also large enough
to not cause any sort of immediate panic while people realize (if they
didn't read the release notes) that they may want to tweak them.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Sean Dague

On 01/25/2017 06:16 AM, Monty Taylor wrote:


I have a quibble with the current microversions construct. It's mostly
semantic in nature, and I _think_ it's not valid/useful - but I'm going
to describe it here just so that I've said it and we can all acknowledge
it and move on.

My concern is with the prefix "micro". What gets presented to the user
now is a "major" api version that is essentially useless, and a
monotonically increasing single version number that does not indicate
whether a given version introduced a breaking change or not.

I LIKE the mechanism. It works well - I do not think using it is
burdensome or bad for the user so far. But it's not "micro". It's
_essentially_ "every 'microversion' bump must be treated as a major
version bump"; we just moved it to a construct that doesn't involve
deploying 40 different rest endpoints.

There are ways in which we could use the mechanism while still using
structured content to convey some amount of meaning to a user so that
client consumers don't have to write matrices of "if this cloud has max
microversion of 27, then do this, otherwise do this other thing" for all
of the microversions.

That said - it's WAY better than the other thing - at least so far in
the way I'm seeing nova use it.

So I imagine it's just me quibbling over the word 'micro' and wanting
something more like libtool's version:revision:age construct which
calculates for a given library and consumer whether or not a library can
be expected to be usable in a dynamic linking context. (this is a
different construct from semver, but turns out to be handy when you have a
single client that may need to consume multiple different api providers)


I can definitely understand an issue with the naming. The naming grew 
organically out of the Nova v3 struggles. It was a name that 
distinguished it from major versions, and was far enough away from semver
words to help disabuse people of the notion that this was semver - which
continues to be a struggle.


I'd suggest a new bike shed on names, except we seem to have at
least built context around what we mean by microversions in our
community now, and I'd hate to backslide on 2 years of education there.


It's probably time to build more of a primer back into the api-ref site, 
maybe an area on common concepts.
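For example, the entire negotiation is one header per request, with the
server echoing the version it actually used (hostname and version number
illustrative; the older X-OpenStack-Nova-API-Version spelling also works
for compute):

    GET /v2.1/servers HTTP/1.1
    Host: nova.example.com
    OpenStack-API-Version: compute 2.38

    HTTP/1.1 200 OK
    OpenStack-API-Version: compute 2.38
    Vary: OpenStack-API-Version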




Also, when suppressing or not suppressing which user base is more
important? The users that exist now or the users to come? This may
sound like a snarky or idle question, but it's a real one: Is it
true that we do, as a general rule, base our development on existing
users and not people who have chosen not to use "the product" for
some reason?


We have a GIANT install base - but the set of tools that can work consistently
across that install base is small. If we continue to chase phantom maybe-
users at the expense of the users we have currently, I'm pretty sure
we'll end up where linux on the desktop has. I believe we stopped being
able to legitimately make backwards-incompatible changes around Havana.


Right, I think that has been the constant question. And I agree that 
taking care of our existing users, at the cost of not being able to 
clean everything up, is the right call.





Finding this:

http://docs.openstack.org/developer/nova/api_microversion_history.html

Is hard. I saw it for the first time 3 days ago. Know why? It's in the
nova developer docs, not in the API docs. It's a great doc.


Yeh, we need to get that reflected in api-ref. That's a short-term todo
I can probably bang out before the release. It was always intended to
surface more broadly, but until the new api-ref it really wasn't possible.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Kevin Benton
LBaaS is a little special since Octavia will have its own API endpoint
entirely, which they will evolve on their own. The other spun-out projects
(e.g. VPNaaS) will have the API defined in neutron-lib[1].

The specific DVR issue with roaming IPs being the
target of floating IPs (I think that's what you're referring to) is
something I'm planning to address in Pike with the help of other DVR
contributors, because floating IP translation for unbound addresses blocks
various other use cases.

1.
https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/vpnaas.inc

On Wed, Jan 25, 2017 at 6:40 AM, Hayes, Graham  wrote:

> On 25/01/2017 01:08, Kevin Benton wrote:
> >>I would really like us to discuss this issue head-on and see what is
> > missing in Neutron APIs and what would take to make them extensible so
> > that vendors do not run around trying to figure out alternative
> > solutions
> >
> > The Neutron API is already very extensible and that's problematic. Right
> > now a vendor can write an out-of-tree service plugin or driver that adds
> > arbitrary fields and endpoints to the API that results in whatever
> > behavior they want. This is great for vendors because they can directly
> > expose features without having to make them vendor agnostic. However,
> > it's terrible for operators because it leads to lock-in and terrible for
> > the users because it leads to cross-cloud compatibility issues.
> >
> > For a concrete example of what I mean, take a look at this extension
> > here: [1]. This is directly exposing vendor-specific things onto Neutron
> > network API payloads. Nobody can build any tooling that depends on those
> > fields without being locked into a specific vendor.
> >
> > So what I would like to encourage is bringing API extension work into
> > Neutron-lib where we can ensure that the relevant abstractions are in
> > place and it's not just a pass-through to a bunch of vendor-specific
> > features. I would rather relax our constraint around requiring a
> > reference implementation for new extensions in neutron-lib than continue
> > to encourage developers to expose whatever they want with the
> > existing extension framework.
> >
> > So I'm all for developing new APIs *as a community* to enable NFV use
> > cases not supported by the current APIs. However, I don't want to
> > encourage or make it easier for vendors to just build arbitrary
> > extensions on top of Neutron that expose backend details.
> >
> > In my view, Neutron should provide a unified API for networking across
> > OpenStack clouds, not a platform for writing deployment-specific
> > networking APIs.
>
> How does this tie in with the removal of some of the networking *aaS
> projects from the stadium? I know LBaaS is doing a shim API layer in
> the short term, but long term that will have to move to a separate API.
>
> How do you think this will impact inter service bugs (e.g. Octavia HA +
> Neutron DVR issues that have been around for cycles)
>
> >
> > 1. https://github.com/Juniper/contrail-neutron-plugin/blob/
> 19ad4bcee4c1ff3bf2d2093e14727866412a694a/neutron_plugin_
> contrail/extensions/contrail.py#L9-L22
> >
> > Cheers,
> > Kevin Benton
> >
> > On Tue, Jan 24, 2017 at 3:42 PM, Sukhdev Kapur  wrote:
> >
> >
> > Ihar and Kevin,
> >
> > As our potential future PTLs, I would like to draw your attention to
> > one of the critical issue regarding Neutron as "the" networking
> > service in OpenStack.
> >
> > I keep hearing off and on that Neutron is not flexible enough to address
> > many networking use cases and hence a new (or additional) networking
> > project is needed. For example, to address the use cases around NFV
> > (rigid resource inter-dependencies).  Another one keeps popping up
> > is that it is very hard/difficult to add/enhance Neutron API -
> > hence, I picked this one goal called out in Ihar's candidacy.
> >
> > I would really like us to discuss this issue head-on and see what is
> > missing in Neutron APIs and what it would take to make them extensible
> > so that vendors do not run around trying to figure out alternative
> > solutions
> >
> > cheers..
> > -Sukhdev
> >
> >
> >
> >
> > * Explore alternative ways to evolve Neutron API.  Piling up
> > extensions and allowing third parties to completely change core
> > resource behaviour is not ideal, and now that api-ref and API
> > consolidation effort in neutron-lib are closer to completion, we
> may
> > have better answers to outstanding questions than during previous
> > attempts to crack the nut. I would like to restart the
> > discussion some
> > time during Pike.
> >
> >
> >
> >
> >
> >
> > Thanks for attention,
> > Ihar
> >
> > 

[openstack-dev] [watcher] self-nomination as Watcher PTL

2017-01-25 Thread Чадин Александр
I'm happy to announce my candidacy for Watcher PTL for the Pike release cycle.

I've been working on OpenStack for 2.5 years as an engineer. My contributions
started in February 2016 and are related to the Watcher and Rally
projects. I'm very proud of being one of the core developers of the Watcher project.

Watcher has achieved great company diversity, and its stability has
improved along with its usability. My main goal is to reach production-ready status
for Watcher by focusing on the following tasks:
- provide multi-datasource support to all strategies
  (Ceilometer, Monasca, Gnocchi)
- limit concurrent execution of actions across all ongoing action plans
- work with the Nova team to respect Nova's policies during the optimization process
- make Watcher HA-ready
- take some steps to start integrating the storage/network models

I will continue to propose new blueprints compiled by the Servionica team
and its clients. I appreciate any help coming from our great community.
I would be happy to promote Watcher across OpenStack community and
to welcome new contributors.

Thanks,
Alexander
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo-heat-templates, vendor plugins and the new hiera hook

2017-01-25 Thread Steven Hardy
On Wed, Jan 25, 2017 at 02:59:42PM +0200, Marios Andreou wrote:
> Hi, as part of the composable upgrades workflow shaping up for Newton to
> Ocata, we need to install the new hiera hook that was first added with
> [1] and disable the old hook and data as part of the upgrade
> initialization [2]. Most of the existing hieradata was ported to use the
> new hook in [3]. The deletion of the old hiera data is necessary for the
> Ocata upgrade, but it also means it will break any plugins still using
> the 'old' os-apply-config hiera hook.
> 
> In order to be able to upgrade to Ocata any templates that define hiera
> data need to be using the new hiera hook and then the overcloud nodes
> need to have the new hook installed (installing is done in [2] as a
> matter of necessity, and that is what prompted this email in the first
> place). I've had a go at updating all the plugin templates that are
> still using the old hiera data with a review at [4] which I have -1 for now.
> 
> I'll try and reach out to some individuals more directly as well but
> wanted to get the review at [4] and this email out as a first step,

Thanks for raising this marios, and yeah it's unfortunate as we've had to
do a switch from the old to the new hiera hook this release without a
transition period where both work.

I think we probably need to do the following:

1. Convert anything in t-h-t refering to the old hook to the new (seems you
have this in progress, we need to ensure it all lands before ocata)

2. Write a good release note for t-h-t explaining the change, referencing
docs which show how to convert to use the new hook

3. Figure out a way to make the 99-refresh-completed script signal failure
if anyone tries to deploy with the old hook (vs potentially silently
failing then hanging the deploy, which I think is what will happen atm).

I think ensuring a good error path should mitigate this change, since it's
fairly simple for folks to switch to the new hook provided we can document
it and point to those docs in the error.
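For anyone doing the conversion, the change is roughly as below (a
sketch only - the resource and key names are invented, and the exact
config shape is whatever the heat-agents hiera element defines):

    # old style: hieradata delivered via the os-apply-config hook
    ExampleConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: os-apply-config
        config:
          hiera:
            datafiles:
              example_service:
                mapped_data:
                  example_service::setting: {get_param: ExampleSetting}

    # new style: the same data via the heat-agents hiera hook
    ExampleConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: hiera
        config:
          datafiles:
            example_service:
              example_service::setting: {get_param: ExampleSetting}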

Be good to get input from Dan on this too, as he might have ideas on how we
could maintain both hooks for one release.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui

2017-01-25 Thread Steven Hardy
On Tue, Jan 24, 2017 at 08:52:51AM -0500, Emilien Macchi wrote:
> I have been discussing with the TripleO UI core reviewers and it's pretty
> clear Honza's work has been valuable, so we can propose him as part of
> the TripleO UI core team.
> His quality of code and reviews makes him a good candidate, and it would
> also help the other 2 core reviewers to accelerate the review process
> in the UI component.
> 
> Like usual, this is open for discussion, Tripleo UI core and TripleO
> core, please vote.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-01-25 Thread Steven Hardy
On Tue, Jan 24, 2017 at 07:03:56PM +0200, Juan Antonio Osorio wrote:
>  Sagi (sshnaidm on IRC) has done significant work in TripleO CI (both
>  on the current CI solution and in getting tripleo-quickstart jobs for
>  it), so I would like to propose him as part of the TripleO CI core team.
>  I think he'll make a great addition to the team and will help move CI
>  issues forward quicker.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Steven Hardy
On Mon, Jan 23, 2017 at 02:03:28PM -0500, Emilien Macchi wrote:
> Greetings folks,
> 
> I would like to propose some changes in our core members:
> 
> - Remove Jay Dobies who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Baker on os-collect-config and also docker bits in
> tripleo-common and tripleo-heat-templates.
> 
> Indeed, both Flavio and Steve have been involved in deploying TripleO
> in containers, their contributions are very valuable. I would like to
> encourage them to keep doing more reviews in and out container bits.
> 
> As usual, core members are welcome to vote on the changes.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-25 Thread Julien Danjou
On Wed, Jan 25 2017, Afek, Ifat (Nokia - IL) wrote:

> As we see it, alarms can be generated by different sources – Aodh, Vitrage,
> Nagios, Zabbix, etc.

I think "generated" is the wrong word here. Aodh does not generate any
alarms: it allows users to create them. And then it evaluates them and
triggers them.

Nagios and Zabbix do *exactly* the same thing: users define alarms and
they are evaluated and triggered by Nagios/Zabbix. The particularity of
Aodh is that it does not gather nor store data itself (as Nagios and Zabbix
do) but only handles the definition and evaluation of alarms.

So you can implement what Nagios and Zabbix do in Aodh. And you could
use Nagios instead of Aodh (except that it has no REST API, so…).

Vitrage seems to me to be a middle man, which indeed seems to
*generate* (create) alarms based on things it sees triggered by Nagios,
Zabbix or Aodh, IIUC.

> Each source has its own expertise and internal
> implementation. Nagios and Zabbix can raise alarms about the physical layer,
> Aodh can raise threshold alarms and event alarms, and Vitrage can raise 
> deduced
> alarms (e.g. if there is an alarm on a host, Vitrage will raise alarms on the
> relevant instances and applications). I would prefer that you view Vitrage the
> way you view Zabbix, as a project that has a way of evaluating some kinds of
> problems in the system, and notify about them.

This "specialization" you describe is entirely artificial. Aodh can
triggers alarm on the physical layer. It already does if you monitor
your hardware with e.g. SNMP or IPMI, puts data in Gnocchi and create
alarm rules based on those metrics. And it could be extended to do more
(that'd be cool :)
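For instance, an alarm over a hardware metric stored in Gnocchi is a
one-liner with the client (a sketch only - the metric and resource names
are illustrative, and the flag spellings should be checked against
`aodh alarm create --help`):

    aodh alarm create --name host-cpu-high \
      --type gnocchi_resources_threshold \
      --metric cpu_util --threshold 90 --comparison-operator gt \
      --aggregation-method mean \
      --resource-type generic --resource-id HOST_RESOURCE_UUID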

What Vitrage does is use the existing software that might already be
deployed (Nagios, Zabbix) and consolidate things.

> The question is should there be a central place that provides information 
> about
> *all* alarms gathered in the system, and this includes an API, database,
> notification mechanism and history. We can implement these in Vitrage (as we
> already integrate with different datasources and monitors), but we always had
> in mind that this is part of the Aodh project definition.

I don't see, in the case of Vitrage, why alarms should be stored by Aodh
and not by Nagios, for example. What's the rationale?

To circle back to the original point, the main question that I asked and
started this thread is: why should Aodh store Vitrage alarms? What
are the advantages, for both Aodh and Vitrage?

So far the only answer I read is "well, we thought Aodh would be a central
storage place for alarms". So far it seems it has more drawbacks than
benefits: worse performance for Vitrage, confusion for users and more
complexity in Aodh.

As I already said, I'm trying to be really objective on this. I just
really want someone to explain to me how awesome this will be and why we
should totally go toward this direction. :-)

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Ocata-3 Priorities

2017-01-25 Thread Ian Cordasco
Hi all,

Brian kindly emailed the list last week [1] with our priorities for
python-glanceclient and glance.

The python-glanceclient priorities have all been effectively reviewed
and python-glanceclient has been released. The stable/ocata branch for
it now exists as well.

If you are reviewing Glance changes, *PLEASE* focus on the remaining priorities:

- https://review.openstack.org/#/c/382958/
- https://review.openstack.org/#/c/392993/
- https://review.openstack.org/#/c/397409/

All three of those support rolling upgrades via Alembic in Glance. If
we get those finished, we have a stretch priority of fixing Glance's
compatibility with WebOb 1.7.x:

- https://review.openstack.org/423366

Please do *NOT* approve other changes unless they are on this list and
please focus on reviewing these.

I have provided several procedural -2's on patches that will not be
merged until Ocata-3 or RC-1 is tagged. I've commented on each as to
when those -2's will be lifted. As release liaison and as a fellow
reviewer, I expect you all to be focusing on these priorities.

I appreciate your cooperation in releasing the best version of Glance yet.

[1]: http://lists.openstack.org/pipermail/openstack-dev/2017-January/110617.html
--
Ian Cordasco
Your Friendly, But Stern, Neighborhood Release CPL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [operators] Optional resource asking or not?

2017-01-25 Thread Sylvain Bauza


On 25/01/2017 05:10, Matt Riedemann wrote:
> On 1/24/2017 2:57 PM, Matt Riedemann wrote:
>> On 1/24/2017 2:38 PM, Sylvain Bauza wrote:
>>>
>>> It's litterally 2 days before FeatureFreeze and we ask operators to
>>> change their cloud right now ? Looks difficult to me and like I said in
>>> multiple places by email, we have a ton of assertions saying it's
>>> acceptable to have not all the filters.
>>>
>>> -Sylvain
>>>
>>
>> I'm not sure why feature freeze in two days is going to make a huge
>> amount of difference here. Most large production clouds are probably
>> nowhere near trunk (I'm assuming most are on Mitaka or older at this
>> point just because of how deployments seem to tail the oldest supported
>> stable branch). Or are you mainly worried about deployment tooling
>> projects, like TripleO, needing to deal with this now?
>>
>> Anyone upgrading to Ocata is going to have to read the release notes and
>> assess the upgrade impacts regardless of when we make this change, be
>> that Ocata or Pike.
>>
>> Sylvain, are you suggesting that for Ocata if, for example, the
>> CoreFilter isn't in the list of enabled scheduler filters, we don't make
>> the request for VCPU when filtering resource providers, but we also log
>> a big fat warning in the n-sch logs saying we're going to switch over in
>> Pike and that cpu_allocation_ratio needs to be configured because the
>> CoreFilter is going to be deprecated in Ocata and removed in Pike?
>>
>> [1]
>> https://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-scheduler-db-filters.html#other-deployer-impact
>>
>>
>>
> 
> To recap the discussion we had in IRC today, we're moving forward with
> the original plan of the *filter scheduler* always requesting VCPU,
> MEMORY_MB and DISK_GB* regardless of the enabled filters. The main
> reason being there isn't a clear path forward on straddling releases to
> deprecate or make decisions based on the enabled filters and provide a
> warning that makes sense.
> 
> For example, we can't deprecate the filters (at least yet) because the
> *caching scheduler* is still using them (it's not using placement yet).
> And if we logged a warning if you don't have the CoreFilter in
> CONF.filter_scheduler.enabled_filters, for example, but we don't want
> you to have it in that list, then what are you supposed to do? i.e. the
> goal is to not have the legacy primitive resource filters enabled for
> the filter scheduler in Pike, so you get into this weird situation of
> whether or not you have them enabled or not before Pike, and in what
> cases do you log a warning that makes sense. So we agreed at this point
> it's just simpler to say that if you don't enable these filters today,
> you're going to need to configure the appropriate allocation ratio
> configuration option prior to upgrading to Ocata. That will be in the
> upgrade section of the release notes and we can probably also work it
> into the placement devref as a deployment note. We can also work this
> into the nova-status upgrade check CLI.
> 
> *DISK_GB is special since we might have a flavor that's not specifying
> any disk or a resource provider with no DISK_GB allocations if the
> instances are all booted from volumes.
> 

Update on that agreement: I made the necessary modification in the
proposal [1] for not verifying the filters. We now send a request to the
Placement API by introspecting the flavor and we get a list of potential
destinations.
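
To make the new flow concrete, here is roughly what "introspecting the
flavor" amounts to against the placement REST API - an untested sketch,
not the actual scheduler code, and the flavor keys are assumptions:

  import requests
  from six.moves.urllib import parse

  def get_candidates(placement_url, token, flavor):
      # Resource classes the filter scheduler now always requests.
      resources = {'VCPU': flavor['vcpus'],
                   'MEMORY_MB': flavor['ram'],
                   'DISK_GB': flavor['root_gb'] + flavor['ephemeral_gb']}
      qs = parse.urlencode({'resources': ','.join(
          '%s:%d' % (rc, amount)
          for rc, amount in sorted(resources.items()) if amount)})
      resp = requests.get(
          '%s/resource_providers?%s' % (placement_url, qs),
          headers={'X-Auth-Token': token,
                   # the 'resources' filter needs microversion 1.4
                   'OpenStack-API-Version': 'placement 1.4'})
      return resp.json()['resource_providers']

(DISK_GB is dropped when zero, per the boot-from-volume caveat above.)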

When I began doing that modification, I knew there was a functional test
about server groups that needed modifications to match our agreement. I
consequently made that change located in a separate patch [2] as a
prerequisite for [1].

I then spotted a problem that we didn't identify when discussing: when
checking a destination, the legacy filters for CPU, RAM and disk don't
verify the maximum capacity of the host, they only multiply the total
size by the allocation ratio, so our proposal works for them.
Now, when using the placement service, it fails because somewhere in the
DB call needed for returning the destinations, we also verify a specific
field named max_unit [3].
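
In pseudo-code, the check placement applies per inventory record is
roughly the following (paraphrasing [3]; the field names match the
inventories table):

  def fits(inv, used, requested):
      capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
      return (used + requested <= capacity
              and inv['min_unit'] <= requested <= inv['max_unit']
              and requested % inv['step_size'] == 0)

The legacy filters only ever did (an approximation of) the first
condition, which is where the discrepancy comes from.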

Consequently, the proposal we agreed on does not give feature parity
between Newton and Ocata. If you follow our instructions, you will still
get different results from a placement perspective between what was in
Newton and what will be in Ocata.

Technically speaking, the functional test is a canary bird, telling you
that you get NoValidHosts while it was working previously.

After that I'm stuck. We can be discussing for a while about whether all
of that is sane or not, but the fact is, there is a discrepancy.

Honestly, I don't know what to do, unless we consider that we're now so
close to the Feature Freeze that it's becoming an all-or-none situation.
The only silver bullet I still have would be to consider a placement
failure as a non-blocker and fall back to calling the full list of nodes
for Ocata. I know that sucks, but I don't 

Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Monty Taylor
On 01/25/2017 09:16 AM, Monty Taylor wrote:
> On 01/24/2017 12:39 PM, Chris Dent wrote:
>> On Mon, 23 Jan 2017, Sean Dague wrote:
>>
>>> We all inherited a bunch of odd and poorly defined behaviors in the
>>> system we're using. They were made because at the time they seemed like
>>> reasonable tradeoffs, and a couple of years later we learned more, or
>>> needed to address a different use case that people didn't consider
>>> before.
>>
>> Thanks, as usual, for providing some well considered input Sean. I
>> think it captures well what we could describe as the "nova
>> aspirational model for managing change" which essentially means:
>>
>> * don't change stuff unless you have to
>> * when you do change stuff, anything, use microversions to signal
>>
>> This is a common position and I suspect if we were to use the
>> voices that have spoken up so far to form the new document[1] then
>> it would codify that, including specifying microversions as the
>> technology for managing boundaries.
> 
> I have a quibble with the current microversions construct. It's mostly
> semantic in nature, and I _think_ it's not valid/useful - but I'm going
> to describe it here just so that I've said it and we can all acknowledge
> it and move on.
> 
> My concern is with the prefix "micro". What gets presented to the user
> now is a "major" api version that is essentially useless, and a
> monotonically increasing single version number that does not indicate
> whether a given version introduced a breaking change or not.
> 
> I LIKE the mechanism. It works well - I do not think using it is
> burdensome or bad for the user so far. But it's not "micro". It's
> _essentially_ "every 'microversion' bump must be treated as a major
> version bump, we just moved it to a construct that doesn't involve
> deploying 40 different rest endpoints".
> 
> There are ways in which we could use the mechanism while still using
> structured content to convey some amount of meaning to a user so that
> client consumers don't have to write matrixes of "if this cloud has max
> microversion of 27, then do this, otherwise do this other thing" for all
> of the microversions.
> 
> That said - it's WAY better than the other thing - at least so far in
> the way I'm seeing nova use it.
> 
> So I imagine it's just me quibbling over the word 'micro' and wanting
> something more like libtool's version:revision:age construct which
> calculates for a given library and consumer whether or not a library can
> be expected to be usable in a dynamic linking context. (this is a
> different construct from semver, but turns out is handy when you have a
> single client that may need to consume multiple different api providers)

You know what - forget this part. I just went to try to make a concrete
example of what I'm talking about and got bumpkiss. The single version
numbers are honestly fine.

>> That could very well be fine, but we have evidence that:
>>
>> * some projects don't yet use microversions in their APIs
>> * some projects have no intention of using microversions or at least
>>   have internal conflict about doing so
>> * some projects would like to change things (irrespective of
>>   microversions)
>>
>> What do we do about that? That's what I think we could be working
>> out here, and why I'm persisting in dragging this out. There's no
>> point making rules that a significant portion of the populace have
>> no interest in following.
>>
>> So the options seem to be:
>>
>> * codify the two rules above as the backbone for the
>>   api-compatibility assertion tag and allow several projects to not
>>   assert that, despite an overall OpenStack goal
> 
> I like the two rules above. They serve end users in the way Sean is
> talking about better than any of the alternatives I've heard.
> 
>> * keep hashing things out for a bit longer until either we have
>>   different rules so we have more projects liking the rules or we
>>   justify the rules until we have more projects accepting them
>>
>> More in response to Sean below, not to contradict what he's saying
>> but in the ever-optimistic hope of continuing and expanding the
>> conversation to get real rather than enforced consensus.
>>
>>> If you don't guaruntee that existing applications will work in the
>>> future (for some reasonable window of time), it's a massive turn off to
>>> anyone deciding to use this interface at all. You suppress your user
>>> base.
>>
>> I think "reasonable window of time" is a key phrase here that
>> perhaps we can build into the guidelines somewhat. The problems of
>> course are that some clouds will move forward in time at different
>> rates and as Sean has frequently pointed out, time's arrow is not
>> unidirectional in the universe of many OpenStack clouds.
>>
>> To what extent is the HEAD of OpenStack responsible to OpenStack two
>> or three years back?
> 
> I personally believe the answer to this is "forever". I know that's not
> popular - but if we don't, someone _else_ has to deal with 

Re: [openstack-dev] [tacker] Core team changes / proposing Dharmendra Kushwaha

2017-01-25 Thread HADDLETON, Robert W (Bob)

+1

Thanks Stephen!  And welcome aboard Dharmendra!

Bob

On 1/24/2017 6:58 PM, Sridhar Ramaswamy wrote:

Tackers,

I'd like to propose following changes to the Tacker core team.

Stephen Wong

After being associated with the Tacker project from its genesis, Stephen
Wong (irc: s3wong) has decided to step down from the core team. I
would like to thank Stephen for his contribution to Tacker,
particularly for his help navigating the initial days splitting off
Neutron and in re-launching this project in Vancouver summit for
TOSCA-based NFV Orchestration. His recent effort in writing the SFC
driver to support VNF Forwarding Graph is much appreciated. Thanks
Stephen!

Dharmendra Kushwaha

It gives me great pleasure to propose Dharmendra (irc:  dkushwaha) to
join the Tacker core team. Dharmendra's contributions to tacker
started in Jan 2016. He is an active contributor across the board [1]
in bug fixes, code cleanups and, most recently, as a lead author of
the Network Services Descriptor blueprint.

Also, Dharmendra recently stepped up to take care of bug triage for
Tacker. There is an uptick in deployment issues reported through LP
[2] and in irc - which itself is a good healthy thing. Now we need to
respond by fixing the issues reported promptly. Dharmendra’s help will
be immensely valuable here.

Existing cores - please vote +1 / -1.

thanks,
Sridhar

[1] 
http://stackalytics.com/?module=tacker-group_id=dharmendra-kushwaha=marks
[2] https://answers.launchpad.net/tacker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Live migration issue

2017-01-25 Thread fabrice grelaud
Thanks for reply.

But "a priori" the log says "this error can be safely ignored". And in any
case, that log comes from the live migration that succeeded (compute 2 to compute 1).

The ERROR that puzzles me is (live migration compute 1 to 2), on compute 1:
2017-01-25 11:03:58.475 113231 ERROR nova.virt.libvirt.driver 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Live Migration failure: Requested 
operation is not valid: domain 'instance-0187' is already active
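
One thing worth checking (assuming plain libvirt tooling, and being very
careful not to touch the live source domain) is whether a stale domain
definition is left on the destination:

  # on the destination compute node
  virsh list --all | grep instance-0187
  # if a leftover from a previous failed attempt is found there:
  virsh destroy instance-0187 && virsh undefine instance-0187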



> On 25 Jan 2017, at 13:24, Lenny Verkhovsky  wrote:
> 
> Hi,
> 
> What domain name are you using?
> Check for 'Traceback' and ' ERROR ' in the logs, maybe you will get a hint
> 
> 
> 2017-01-25 11:00:21.215 28309 INFO nova.compute.manager 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] You may see the error "libvirt: QEMU 
> error: Domain not found: no domain with matching name." This error can be 
> safely ignored.
> 
> 
> -Original Message-
> From: fabrice grelaud [mailto:fabrice.grel...@u-bordeaux.fr] 
> Sent: Wednesday, January 25, 2017 1:06 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: [openstack-dev] [openstack-ansible] Live migration issue
> 
> Hi osa team,
> 
> I've got a live migration issue in one direction but not in the other.
> I deploy openstack with OSA, ubuntu trusty, stable/newton branch, 14.0.5 tag.
> 
> My 2 compute node are same host type and have nova-compute and cinder-volume 
> (our ceph cluster as backend) services.
> 
> No problem to live migrate instance from Compute 2 to Compute 1 whereas the 
> reverse is not true.
> See log below:
> 
> Live migration instance Compute 2 to 1: OK
> 
> Compute 2 log
> 2017-01-25 11:00:15.621 28309 INFO nova.virt.libvirt.migration 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Increasing downtime to 46 ms after 0 
> sec elapsed time
> 2017-01-25 11:00:15.787 28309 INFO nova.virt.libvirt.driver 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration running for 0 secs, memory 
> 100% remaining; (bytes processed=0, remaining=0, total=0)
> 2017-01-25 11:00:17.737 28309 INFO nova.compute.manager [-] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] VM Paused (Lifecycle Event)
> 2017-01-25 11:00:17.794 28309 INFO nova.virt.libvirt.driver 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration operation has completed
> 2017-01-25 11:00:17.795 28309 INFO nova.compute.manager 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] _post_live_migration() is started..
> 2017-01-25 11:00:17.815 28309 INFO oslo.privsep.daemon 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] Running privsep helper: ['sudo', 
> 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', 
> '--config-file', '/etc/nova/nova.conf', '--privsep_context', 
> 'os_brick.privileged.default', '--privsep_sock_path', 
> '/tmp/tmpfL96lI/privsep.sock']
> 2017-01-25 11:00:18.387 28309 INFO oslo.privsep.daemon 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] Spawned new privsep daemon via 
> rootwrap
> 2017-01-25 11:00:18.395 28309 INFO oslo.privsep.daemon [-] privsep daemon 
> starting
> 2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
> running with uid/gid: 0/0
> 2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
> running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
> 2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep daemon 
> running as pid 28815
> 2017-01-25 11:00:18.397 28309 INFO nova.compute.manager 
> [req-aa0997d7-bf5f-480f-abc5-beadd2d03409 - - - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] During sync_power_state the instance 
> has a pending task (migrating). Skip.
> 2017-01-25 11:00:18.538 28309 INFO nova.compute.manager 
> [req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Running instance usage 
> audit for host p-oscompute02 from 2017-01-25 09:00:00 to 2017-01-25 10:00:00. 
> 2 instances.
> 2017-01-25 11:00:18.691 28309 INFO nova.compute.resource_tracker 
> [req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Auditing locally 
> available 

Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Hayes, Graham
On 25/01/2017 01:08, Kevin Benton wrote:
>>I would really like us to discuss this issue head-on and see what is
> missing in Neutron APIs and what would take to make them extensible so
> that vendors do not run around trying to figure out alternative
> solutions
>
> The Neutron API is already very extensible and that's problematic. Right
> now a vendor can write an out-of-tree service plugin or driver that adds
> arbitrary fields and endpoints to the API that results in whatever
> behavior they want. This is great for vendors because they can directly
> expose features without having to make them vendor agnostic. However,
> it's terrible for operators because it leads to lock-in and terrible for
> the users because it leads to cross-cloud compatibility issues.
>
> For a concrete example of what I mean, take a look at this extension
> here: [1]. This is directly exposing vendor-specific things onto Neutron
> network API payloads. Nobody can build any tooling that depends on those
> fields without being locked into a specific vendor.
>
> So what I would like to encourage is bringing API extension work into
> Neutron-lib where we can ensure that the relevant abstractions are in
> place and it's not just a pass-through to a bunch of vendor-specific
> features. I would rather relax our constraint around requiring a
> reference implementation for new extensions in neutron-lib than continue
> to encourage developers to do expose whatever they want with the the
> existing extension framework.
>
> So I'm all for developing new APIs *as a community* to enable NFV use
> cases not supported by the current APIs. However, I don't want to
> encourage or make it easier for vendors to just build arbitrary
> extensions on top of Neutron that expose backend details.
>
> In my view, Neutron should provide a unified API for networking across
> OpenStack clouds, not a platform for writing deployment-specific
> networking APIs.

How does this tie in with the removal of some of the networking *aaS
projects from the stadium? I know LBaaS is doing a shim API layer in
the short term, but long term that will have to move to a separate API.

How do you think this will impact inter-service bugs (e.g. the Octavia HA +
Neutron DVR issues that have been around for cycles)?

>
> 1. 
> https://github.com/Juniper/contrail-neutron-plugin/blob/19ad4bcee4c1ff3bf2d2093e14727866412a694a/neutron_plugin_contrail/extensions/contrail.py#L9-L22
>
> Cheers,
> Kevin Benton
>
> On Tue, Jan 24, 2017 at 3:42 PM, Sukhdev Kapur  > wrote:
>
>
> Ihar and Kevin,
>
> As our potential future PTLs, I would like to draw your attention to
> one of the critical issue regarding Neutron as "the" networking
> service in OpenStack.
>
> I keep hearing off and on that Neutron is not flexible to address
> many networking use cases and hence a new (or additional) networking
> project is needed. For example, to address the use cases around NFV
> (rigid resource inter-dependencies).  Another one keeps popping up
> is that it is very hard/difficult to add/enhance Neutron API -
> hence, I picked this one goal called out in Ihar's candidacy.
>
> I would really like us to discuss this issue head-on and see what is
> missing in Neutron APIs and what would take to make them extensible
> so that vendors do not run around trying to figure out alternative
> solutions
>
> cheers..
> -Sukhdev
>
>
>
>
> * Explore alternative ways to evolve Neutron API.  Piling up
> extensions and allowing third parties to completely change core
> resource behaviour is not ideal, and now that api-ref and API
> consolidation effort in neutron-lib are closer to completion, we may
> have better answers to outstanding questions than during previous
> attempts to crack the nut. I would like to restart the
> discussion some
> time during Pike.
>
>
>
>
>
>
> Thanks for attention,
> Ihar
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

[openstack-dev] [I18n] Horizon and Horizon plugins: StringFreeze policies for translation

2017-01-25 Thread Ian Y. Choi

Hello OpenStack developers!

My name is Ian and I am now serving as I18n PTL for the Ocata cycle.

For previous releases, the I18n team set higher priorities for translations
on Horizon and Horizon plugins during the Soft and Hard StringFreezes, to
include user-facing translated strings as much as possible within releases.


On the other hand, since the Ocata cycle is a little bit short, at the last
I18n IRC meeting the I18n team decided to start translations now, although
some Horizon and Horizon plugin projects might not have frozen all their
strings (Soft StringFreeze) yet.
If some projects still have more strings to be frozen, please tell me
through this thread and/or ping me (@ianychoi) on the #openstack-i18n
channel.

Also, since just one week between Soft and Hard StringFreeze is too short
for translators to complete translating and reviewing strings, some
translation contributions might come after RC1.
The I18n team would like to finish translations by the R-2 week (Feb 06 -
Feb 10), and I hope that all translations will be successfully included
into releases via RC2 and further intermediary / final RCs.


What I hope to avoid is the situation where, for instance, more
translations land in a project during the R-2 week but no further release
of that project is made to include them.

Also, it seems that
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n is out of date.
Please fill out the liaison information, or I will ask the PTLs about I18n
liaisons later :)



With many thanks,

/Ian


 Original Message 
Subject:[OpenStack-I18n] [StringFreeze] Soft StringFreeze starts!
Date:   Tue, 24 Jan 2017 02:16:24 +0300
From:   Ilya Alekseyev 
To: 	openstack-i...@lists.openstack.org 





Dear language coordinators & translators!

Only four weeks are left before the Ocata release [1], and I would like to
inform you that the Soft StringFreeze period has come.


Language teams need to focus on Horizon and Horizon-related projects
before all translated strings are packaged for the upcoming release.

According to the dashboard on https://translate.openstack.org, our 
targeted priorities look as follows:


- Dashboard - Horizon (High)
- Dashboard Authorization Page (High)
- neutron-lbaas-dashboard (High)
- Trove Dashboard (Medium)
- Sahara Dashboard (Medium)
- Murano Dashboard (Low)
- Magnum UI (Low)
- Designate Dashboard (Low)

All of those and more additional target projects are grouped in [2] in the
*master* branch, and the PTL (@ianychoi) will check feature freeze status
on the projects listed above.
He will create a stable version, *stable-ocata*, in Zanata for the target
projects this week.


Although the stable version has not been created yet, it is highly
encouraged to start translating those projects now, since the StringFreeze
periods are relatively short during the Ocata cycle.


If you have questions, feel free to ask through the
openstack-i...@lists.openstack.org mailing list or ask I18n
members on the Freenode #openstack-i18n channel.



Thank you all for your contribution!

[1] https://releases.openstack.org/ocata/schedule.html
[2] 
https://translate.openstack.org/version-group/view/ocata-dashboard-translation/


Ilya Alekseyev (IRC: adiantum), StringFreeze Manager in Ocata cycle
Ian Y. Choi (IRC: ianychoi), I18n Ocata PTL
___
OpenStack-I18n mailing list
openstack-i...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Jiri Tomasek

+1


On 23.1.2017 20:03, Emilien Macchi wrote:

Greeting folks,

I would like to propose some changes in our core members:

- Remove Jay Dobies who has not been active in TripleO for a while
(thanks Jay for your hard work!).
- Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
docker bits.
- Add Steve Baker on os-collect-config and also docker bits in
tripleo-common and tripleo-heat-templates.

Indeed, both Flavio and Steve have been involved in deploying TripleO
in containers, their contributions are very valuable. I would like to
encourage them to keep doing more reviews in and out of the container bits.

As usual, core members are welcome to vote on the changes.

Thanks,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?

2017-01-25 Thread Steven Dake (stdake)
Thanks peeps for responding to Kris.  Kris, I had offered a response – do you 
need further information answered?  It looks to me like all the questions have 
been answered by others in the community.  If not, feel free to respond and 
I’ll answer the remainders.

Sean, when you're around and I am too, please ping me – it looks like you have
the same Outlook problem as I do and have sort of figured out how to solve it.
I'd like to know how, so I don't have to top-post or use gmail for the ML.

Regards
-steve


-Original Message-
From: "sean.k.moo...@intel.com" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, January 24, 2017 at 5:01 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?



> -Original Message-
> From: Paul Bourke [mailto:paul.bou...@oracle.com]
> Sent: Tuesday, January 24, 2017 11:49 AM
> To: OpenStack Development Mailing List (not for usage questions) 
 d...@lists.openstack.org>
> Subject: Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this 
wrong?
> 
> Ah, I think you may be misreading what Sean is saying there. What he 
means is
> kolla-ansible provides the bare minimum config templates to make the 
service
> work. To template every possible config option would be too much of a
> maintenance burden on the project.
> 
> Of course, users will want to customise these. But instead of modifying 
the
> templates directly, we recommend you use the "config override"
> mechanism [0]
> 
> This has a number of benefits, the main one being that you can pick up new
> releases of Kolla and not get stuck in merge hell, Ansible will pick up 
the Kolla base
> templates and merge them with user provided overrides.
[Mooney, Sean K] Paul is correct here; I did not intend to suggest that
kolla-ansible should not be used to generate and manage config files. I
simply wanted to point out that where customization of a config is
required, it is preferable to use the config override mechanism when
possible rather than modifying the Ansible templates directly.
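As a concrete example of the override mechanism: to change a single nova
option you would drop a file at /etc/kolla/config/nova.conf (the default
node_custom_config location) containing only the delta, roughly:

  [DEFAULT]
  cpu_allocation_ratio = 16.0

and kolla-ansible merges that over its own nova.conf template at deploy time.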
> 
> Wrt to the fact gathering, I understand your concern, we essentially have 
the same
> problem in our team. It can be raised again for further discussion, I'm 
sure there's
> other ways it can be solved.
[Mooney, Sean K] I believe you are intended to be able to use the Ansible
--limit and --tags flags to restrict the plays executed and the nodes
processed by a deploy or upgrade command.
I have used the --tags flag successfully in the past; I have had less
success with the --limit flag.
In theory, with the right combination of --limit and --tags you should be
able to constrain the nodes on which facts are gathered to just those that
would be modified, e.g. 2-3 instead of hundreds.
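Something like the following is the idea (untested; the exact flags are
worth verifying against your kolla-ansible version):

  kolla-ansible deploy --tags nova --limit compute01,compute02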
> 
> [0]
> http://docs.openstack.org/developer/kolla-ansible/advanced-
> configuration.html#openstack-service-configuration-in-kolla
> 
> -Paul
> 
> On 23/01/17 18:03, Kris G. Lindgren wrote:
> > Hi Paul,
> >
> >
> >
> > Thanks for responding.
> >
> >
> >
> >> The fact gathering on every server is a compromise taken by Kolla to
> >
> >> work around limitations in Ansible. It works well for the majority of
> >
> >> situations; for more detail and potential improvements on this please
> >
> >> have a read of this post:
> >
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-November/1078
> >> 33.html
> >
> >
> >
> > So my problem with this is the logging in to the compute nodes.  While
> > this may be fine for a smaller deployment.  Logging into thousands,
> > even hundreds, of nodes via ansible to gather facts, just to do a
> > deployment against 2 or 3 of them is not tenable.  Additionally, in
> > our higher audited environments (pki/pci) will cause our auditors 
heartburn.
> >
> >
> >
> >> I'm not quite following you here, the config templates from
> >
> >> kolla-ansible are one of it's stronger pieces imo, they're reasonably
> >
> >> well tested and maintained. What leads you to believe they shouldn't
> >> be
> >
> >> used?
> >
> >>
> >
> >> > * Certain parts of it are 'reference only' (the config tasks),
> >
> >>  > are not recommended
> >
> >>
> >
> >> This is untrue - kolla-ansible is designed to stand up a stable and
> >
> >> usable OpenStack 'out of the box'. There are definitely gaps in the
> >
> >> operator type tasks as you've highlighted, but I would not call it
> >
> >> 'reference only'.
> >
> >
> >
> > 

Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Monty Taylor
On 01/24/2017 12:39 PM, Chris Dent wrote:
> On Mon, 23 Jan 2017, Sean Dague wrote:
> 
>> We all inherited a bunch of odd and poorly defined behaviors in the
>> system we're using. They were made because at the time they seemed like
>> reasonable tradeoffs, and a couple of years later we learned more, or
>> needed to address a different use case that people didn't consider
>> before.
> 
> Thanks, as usual, for providing some well considered input Sean. I
> think it captures well what we could describe as the "nova
> aspirational model for managing change" which essentially means:
> 
> * don't change stuff unless you have to
> * when you do change stuff, anything, use microversions to signal
> 
> This is a common position and I suspect if we were to use the
> voices that have spoken up so far to form the new document[1] then
> it would codify that, including specifying microversions as the
> technology for managing boundaries.

I have a quibble with the current microversions construct. It's mostly
semantic in nature, and I _think_ it's not valid/useful - but I'm going
to describe it here just so that I've said it and we can all acknowledge
it and move on.

My concern is with the prefix "micro". What gets presented to the user
now is a "major" api version that is essentially useless, and a
monotonically increasing single version number that does not indicate
whether a given version introduced a breaking change or not.

I LIKE the mechanism. It works well - I do not think using it is
burdensome or bad for the user so far. But it's not "micro". It's
_essentially_ "every 'microversion' bump must be treated as a major
version bump, we just moved it to a construct that doesn't involve
deploying 40 different rest endpoints".

There are ways in which we could use the mechanism while still using
structured content to convey some amount of meaning to a user so that
client consumers don't have to write matrixes of "if this cloud has max
microversion of 27, then do this, otherwise do this other thing" for all
of the microversions.
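
(For what it's worth, the negotiation itself is simple enough. A rough,
untested sketch of what a client has to do today against nova's version
document:

  import requests

  def pick_version(compute_endpoint, want=(2, 27)):
      doc = requests.get(compute_endpoint).json()['version']
      smin = tuple(int(x) for x in doc['min_version'].split('.'))
      smax = tuple(int(x) for x in doc['version'].split('.'))
      chosen = want if smin <= want <= smax else smax
      return {'X-OpenStack-Nova-API-Version': '%d.%d' % chosen}

The painful part is not this step - it's knowing what behaviour to expect
once you know which number you ended up with.)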

That said - it's WAY better than the other thing - at least so far in
the way I'm seeing nova use it.

So I imagine it's just me quibbling over the word 'micro' and wanting
something more like libtool's version:revision:age construct which
calculates for a given library and consumer whether or not a library can
be expected to be usable in a dynamic linking context. (this is a
different construct from semver, but turns out is handy when you have a
single client that may need to consume multiple different api providers)

> That could very well be fine, but we have evidence that:
> 
> * some projects don't yet use microversions in their APIs
> * some projects have no intention of using microversions or at least
>   have internal conflict about doing so
> * some projects would like to change things (irrespective of
>   microversions)
> 
> What do we do about that? That's what I think we could be working
> out here, and why I'm persisting in dragging this out. There's no
> point making rules that a significant portion of the populace have
> no interest in following.
> 
> So the options seem to be:
> 
> * codify the two rules above as the backbone for the
>   api-compatibility assertion tag and allow several projects to not
>   assert that, despite an overall OpenStack goal

I like the two rules above. They serve end users in the way Sean is
talking about better than any of the alternatives I've heard.

> * keep hashing things out for a bit longer until either we have
>   different rules so we have more projects liking the rules or we
>   justify the rules until we have more projects accepting them
> 
> More in response to Sean below, not to contradict what he's saying
> but in the ever-optimistic hope of continuing and expanding the
> conversation to get real rather than enforced consensus.
> 
>> If you don't guaruntee that existing applications will work in the
>> future (for some reasonable window of time), it's a massive turn off to
>> anyone deciding to use this interface at all. You suppress your user
>> base.
> 
> I think "reasonable window of time" is a key phrase here that
> perhaps we can build into the guidelines somewhat. The problems of
> course are that some clouds will move forward in time at different
> rates and as Sean has frequently pointed out, time's arrow is not
> unidirectional in the universe of many OpenStack clouds.
> 
> To what extent is the HEAD of OpenStack responsible to OpenStack two
> or three years back?

I personally believe the answer to this is "forever". I know that's not
popular - but if we don't, someone _else_ has to deal with making sure
code that wants to consume new apis and also has to talk to older
OpenStack installations can do that.

But it turns out OpenStack works way better than our detractors in the
"success is defined by the size of your VC intake" tech press like to
admit - and we have clouds _today_ that are happily running in
production with Juno 

Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Monty Taylor
On 01/24/2017 06:42 PM, Sukhdev Kapur wrote:
> 
> Ihar and Kevin, 
> 
> As our potential future PTLs, I would like to draw your attention to one
> of the critical issue regarding Neutron as "the" networking service in
> OpenStack. 
> 
> I keep hearing off and on that Neutron is not flexible to address many
> networking use cases and hence a new (or additional) networking project
> is needed. For example, to address the use cases around NFV (rigid
> resource inter-dependencies).  Another one keeps popping up is that it
> is very hard/difficult to add/enhance Neutron API - hence, I picked this
> one goal called out in Ihar's candidacy. 

Adding an additional networking project to try to solve this will only
make things worse. We need one API. If it needs to grow features, it
needs to grow features - but they should be features that all OpenStack
users get.

> I would really like us to discuss this issue head-on and see what is
> missing in Neutron APIs and what would take to make them extensible so
> that vendors do not run around trying to figure out alternative
> solutions

+100

> cheers..
> -Sukhdev
> 
> 
>  
> 
> * Explore alternative ways to evolve Neutron API.  Piling up
> extensions and allowing third parties to completely change core
> resource behaviour is not ideal, and now that api-ref and API
> consolidation effort in neutron-lib are closer to completion, we may
> have better answers to outstanding questions than during previous
> attempts to crack the nut. I would like to restart the discussion some
> time during Pike.
> 
> 
> 
>  
> 
> 
> Thanks for attention,
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-25 Thread gordon chung


On 25/01/17 08:39 AM, Afek, Ifat (Nokia - IL) wrote:
> As we see it, alarms can be generated by different sources – Aodh, Vitrage, 
> Nagios, Zabbix, etc. Each source has its own expertise and internal 
> implementation. Nagios and Zabbix can raise alarms about the physical layer, 
> Aodh can raise threshold alarms and event alarms, and Vitrage can raise 
> deduced alarms (e.g. if there is an alarm on a host, Vitrage will raise 
> alarms on the relevant instances and applications). I would prefer that you 
> view Vitrage the way you view Zabbix, as a project that has a way of 
> evaluating some kinds of problems in the system, and notify about them.

so the purpose of the 'generic alarms' proposal was just to 'log' the alarm 
from vitrage in a central place? tbh, i don't know if that's what we 
want to store in aodh. i think it should ideally be handling active 
alarms, not past alarms.

if we store a vitrage alarm in aodh, what would the use case be for 
querying it? the alarm occurred and vitrage has already sent a 
notification warning. if i were to query aodh, what additional 
information would i be retrieving?

it would seem much more useful to send that information to panko so you 
can see that alarm event with other past events relating to the resource.


cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Monty Taylor
On 01/24/2017 08:04 PM, Kevin Benton wrote:
>>I would really like us to discuss this issue head-on and see what is
> missing in Neutron APIs and what would take to make them extensible so
> that vendors do not run around trying to figure out alternative
> solutions
> 
> The Neutron API is already very extensible and that's problematic. Right
> now a vendor can write an out-of-tree service plugin or driver that adds
> arbitrary fields and endpoints to the API that results in whatever
> behavior they want. This is great for vendors because they can directly
> expose features without having to make them vendor agnostic. However,
> it's terrible for operators because it leads to lock-in and terrible for
> the users because it leads to cross-cloud compatibility issues. 
> 
> For a concrete example of what I mean, take a look at this extension
> here: [1]. This is directly exposing vendor-specific things onto Neutron
> network API payloads. Nobody can build any tooling that depends on those
> fields without being locked into a specific vendor.
> 
> So what I would like to encourage is bringing API extension work into
> Neutron-lib where we can ensure that the relevant abstractions are in
> place and it's not just a pass-through to a bunch of vendor-specific
> features. I would rather relax our constraint around requiring a
> reference implementation for new extensions in neutron-lib than continue
> to encourage developers to do expose whatever they want with the the
> existing extension framework.
> 
> So I'm all for developing new APIs *as a community* to enable NFV use
> cases not supported by the current APIs. However, I don't want to
> encourage or make it easier for vendors to just build arbitrary
> extensions on top of Neutron that expose backend details.
> 
> In my view, Neutron should provide a unified API for networking across
> OpenStack clouds, not a platform for writing deployment-specific
> networking APIs.

A billion times yes.



> 1. 
> https://github.com/Juniper/contrail-neutron-plugin/blob/19ad4bcee4c1ff3bf2d2093e14727866412a694a/neutron_plugin_contrail/extensions/contrail.py#L9-L22
> 
> Cheers,
> Kevin Benton
> 
> On Tue, Jan 24, 2017 at 3:42 PM, Sukhdev Kapur  > wrote:
> 
> 
> Ihar and Kevin, 
> 
> As our potential future PTLs, I would like to draw your attention to
> one of the critical issue regarding Neutron as "the" networking
> service in OpenStack. 
> 
> I keep hearing off and on that Neutron is not flexible to address
> many networking use cases and hence a new (or additional) networking
> project is needed. For example, to address the use cases around NFV
> (rigid resource inter-dependencies).  Another one keeps popping up
> is that it is very hard/difficult to add/enhance Neutron API -
> hence, I picked this one goal called out in Ihar's candidacy. 
> 
> I would really like us to discuss this issue head-on and see what is
> missing in Neutron APIs and what would take to make them extensible
> so that vendors do not run around trying to figure out alternative
> solutions
> 
> cheers..
> -Sukhdev
> 
> 
>  
> 
> * Explore alternative ways to evolve Neutron API.  Piling up
> extensions and allowing third parties to completely change core
> resource behaviour is not ideal, and now that api-ref and API
> consolidation effort in neutron-lib are closer to completion, we may
> have better answers to outstanding questions than during previous
> attempts to crack the nut. I would like to restart the
> discussion some
> time during Pike.
> 
> 
> 
>  
> 
> 
> Thanks for attention,
> Ihar
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-25 Thread Afek, Ifat (Nokia - IL)
Hi,

Alarm history and a database are definitely important, but they are not the 
main issue here.

As we see it, alarms can be generated by different sources – Aodh, Vitrage, 
Nagios, Zabbix, etc. Each source has its own expertise and internal 
implementation. Nagios and Zabbix can raise alarms about the physical layer, 
Aodh can raise threshold alarms and event alarms, and Vitrage can raise deduced 
alarms (e.g. if there is an alarm on a host, Vitrage will raise alarms on the 
relevant instances and applications). I would prefer that you view Vitrage the 
way you view Zabbix, as a project that has a way of evaluating some kinds of 
problems in the system, and notify about them.

The question is whether there should be a central place that provides
information about *all* alarms gathered in the system - and this includes an
API, a database, a notification mechanism and history. We can implement these
in Vitrage (as we already integrate with different datasources and monitors),
but we always had in mind that this is part of the Aodh project definition.

What do you say?

Best Regards,
Ifat.


On 25/01/2017, 13:19, "Julien Danjou"  wrote:

On Tue, Jan 24 2017, gordon chung wrote:

> you mean, keep alarm history in aodh and also in panko if needed? i'm ok 
> with that.

Yeah, IIRC there's an expirer in Aodh for alarm history based on TTL –
that's enough. That should probably be replaced with just a hard limit on
the number of history items you have (e.g. 100) and having them the
older being dropped when the limit is hit.

And if somebody wants a full audit control of what's done, Panko is the
way to go (you know, bread crumbs ;-).

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] CI status for Ubuntu

2017-01-25 Thread Alex Schultz
On Tue, Jan 24, 2017 at 6:43 AM, Corey Bryant
 wrote:
>
> On Fri, Jan 20, 2017 at 6:34 PM, Alex Schultz  wrote:
>>
>> Just FYI,
>>
>> We switched the Ubuntu scenario jobs to non-voting this week due to
>> the large amount of breakage caused by the ocata-proposed update to m2
>> based packages.  The Ubuntu beaker jobs are still voting on the
>> modules themselves.
>>
>> Here's where we're keeping track of issues:
>>
>> https://etherpad.openstack.org/p/puppet-ubuntu-ocata-m2
>>
>> So if anyone wants to jump in and help that would be great.  We
>> addressed the libvirt, ceilometer and aodh issues. Still outstanding
>> as of now are items related to the webob 1.7, designate mdns failure
>> and python-gabbi missing for tempest.  There might be more issues, but
>> this is what we're aware of at the moment.
>>
>> Thanks,
>> -Alex
>>
>>
>
> Alex, I just wanted to make sure webob is no longer an issue for you now.
> It should be ok now that we're at version 1:1.6.2-2.
>

Just to communicate the current status, we've managed to clean up most
of the issues from the etherpad[0] and the jobs will be green shortly.
We had to disable designate for now[1], but the rest is green.  Thanks
to the Ubuntu team for addressing some of the packaging & dependency
issues we found.

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-ubuntu-ocata-m2
[1] https://review.openstack.org/#/c/424936/

> --
> Regards,
> Corey
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Kevin Benton
>So I'm not sure that Kevin and Thierry's answers address Sukhdev's point.

I stated that I am happy to develop new APIs in Neutron. "So I'm all for
developing new APIs *as a community*"...

The important distinction I am making is that we can make new APIs (and we
do with routed networks as you mentioned, VLAN aware VMs, etc), but I don't
want to see the project just become a framework to make it even easier than
it is to define an arbitrary networking API.

>But I think that the point that Sukhdev raised - about other networking
projects being suggested because of Neutron being perceived as not flexible
enough

I'm explicitly stating that if someone wants Neutron to become more
flexible to develop arbitrary APIs that diverge across deployments even
more, that's not something I'm going to support. However, making it
flexible for operators/users by adding new vendor-agnostic APIs is
something I will encourage.

The reason I am stressing that distinction is because we have vendors that
want something like Gluon that allows them to define new arbitrary APIs
without upstreaming anything or working with the community to standardize
anything. I understand that may be useful for some artisanal NFV workloads,
but that's not the direction I want to take Neutron.

Flexibility for operators/users = GOOD
Flexibility for vendor API injection = BAD
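
To illustrate what "vendor API injection" looks like under the current
framework, here is a minimal, hypothetical sketch in the style of the
contrail extension I linked earlier in the thread - 'acme' and the field
name are made up, and the other descriptor methods (get_name, get_alias,
etc.) are omitted:

  from neutron.api import extensions

  EXTENDED_ATTRIBUTES_2_0 = {
      'networks': {
          'acme:fabric_profile': {'allow_post': True, 'allow_put': True,
                                  'default': '', 'is_visible': True},
      },
  }

  class Acme(extensions.ExtensionDescriptor):
      def get_extended_resources(self, version):
          return EXTENDED_ATTRIBUTES_2_0 if version == '2.0' else {}

A few lines like that and every network API response in that deployment
grows a field no other cloud has.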

On Wed, Jan 25, 2017 at 4:55 AM, Neil Jerram  wrote:

> On Wed, Jan 25, 2017 at 10:20 AM Thierry Carrez 
> wrote:
>
>> Kevin Benton wrote:
>> > [...]
>> > The Neutron API is already very extensible and that's problematic. Right
>> > now a vendor can write an out-of-tree service plugin or driver that adds
>> > arbitrary fields and endpoints to the API that results in whatever
>> > behavior they want. This is great for vendors because they can directly
>> > expose features without having to make them vendor agnostic. However,
>> > it's terrible for operators because it leads to lock-in and terrible for
>> > the users because it leads to cross-cloud compatibility issues.
>>
>> +1000 on this being a major problem in Neutron. Happy to see that you
>> want to work on trying to reduce it.
>
>
> The Neutron API is a model of what forms of connectivity can be expressed,
> between instances and the outside world.  Once that model is chosen, it is
> inevitably (and simultaneously):
>
> (a) overconstraining - in other words, there will be forms of connectivity
> that someone could reasonably want, but that are not allowed by the model
>
> (b) underconstraining - in other words, there will be nuances of
> behaviour, delivered by a particular implementation, that are arguably
> within what the model allows, but (as we're talking about semantics) it
> would really be better to revise the API so that it can explicitly express
> those nuances.
>
> Sometimes - since the semantics of the Neutron API are not precisely
> documented - it's not clear which of these situations one is in.  But I
> think that the point that Sukhdev raised - about other networking projects
> being suggested because of Neutron being perceived as not flexible enough -
> is to do with (a); whereas the points that Kevin and Thierry responded with
> - ways that the API is already _too_ flexible - are to do with (b).  So I'm
> not sure that Kevin and Thierry's answers address Sukhdev's point.
>
> It's possible for an API to have (a) and (b) problems simultaneously, and
> to make progress on addressing them both.  In Neutron's case, a major
> example of (a) has been the routed networks work, which (among other
> things) generalized Neutron's network concept from being something that
> always provides L2 adjacency between its ports, to something that may or
> may not.  So it seems to me that Neutron is able to address (a) problems.
>  (I'm personally less familiar with (b), but would guess that progress is
> being made there too.)
>
> Regards,
>  Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Marios Andreou
On 23/01/17 21:03, Emilien Macchi wrote:
> Greeting folks,
> 
> I would like to propose some changes in our core members:
> 
> - Remove Jay Dobies who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Baker on os-collect-config and also docker bits in
> tripleo-common and tripleo-heat-templates.
> 
> Indeed, both Flavio and Steve have been involved in deploying TripleO
> in containers, their contributions are very valuable. I would like to
> encourage them to keep doing more reviews in and out of the container bits.
> 
> As usual, core members are welcome to vote on the changes.

+1

> 
> Thanks,
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] tripleo-heat-templates, vendor plugins and the new hiera hook

2017-01-25 Thread Marios Andreou
Hi, as part of the composable upgrades workflow shaping up for Newton to
Ocata, we need to install the new hiera hook that was first added with
[1] and disable the old hook and data as part of the upgrade
initialization [2]. Most of the existing hieradata was ported to use the
new hook in [3]. The deletion of the old hiera data is necessary for the
Ocata upgrade, but it also means it will break any plugins still using
the 'old' os-apply-config hiera hook.
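
For anyone unfamiliar with what the change looks like inside a template,
the move is essentially the following (a from-memory sketch - please check
[1]/[3] for the exact schema), from the old os-apply-config style:

  group: os-apply-config
  config:
    hiera:
      datafiles:
        my_extraconfig:
          mapped_data:
            some::param: value

to the new hiera hook:

  group: hiera
  config:
    datafiles:
      my_extraconfig:
        some::param: value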

In order to be able to upgrade to Ocata any templates that define hiera
data need to be using the new hiera hook and then the overcloud nodes
need to have the new hook installed (installing is done in [2] as a
matter of necessity, and that is what prompted this email in the first
place). I've had a go at updating all the plugin templates that are
still using the old hiera data with a review at [4] which I have -1 for now.

I'll try and reach out to some individuals more directly as well but
wanted to get the review at [4] and this email out as a first step,

thanks, marios

[1] https://review.openstack.org/#/c/379733/
[2]
https://review.openstack.org/#/c/424715/2/extraconfig/tasks/newton_ocata_upgrade_init_common.sh
[3] https://review.openstack.org/#/c/384757/
[4] https://review.openstack.org/#/c/425154/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL candidacy

2017-01-25 Thread Neil Jerram
On Wed, Jan 25, 2017 at 10:20 AM Thierry Carrez 
wrote:

> Kevin Benton wrote:
> > [...]
> > The Neutron API is already very extensible and that's problematic. Right
> > now a vendor can write an out-of-tree service plugin or driver that adds
> > arbitrary fields and endpoints to the API that results in whatever
> > behavior they want. This is great for vendors because they can directly
> > expose features without having to make them vendor agnostic. However,
> > it's terrible for operators because it leads to lock-in and terrible for
> > the users because it leads to cross-cloud compatibility issues.
>
> +1000 on this being a major problem in Neutron. Happy to see that you
> want to work on trying to reduce it.


The Neutron API is a model of what forms of connectivity can be expressed
between instances and the outside world.  Once that model is chosen, it is
inevitably (and simultaneously):

(a) overconstraining - in other words, there will be forms of connectivity
that someone could reasonably want, but that are not allowed by the model

(b) underconstraining - in other words, there will be nuances of behaviour,
delivered by a particular implementation, that are arguably within what the
model allows, but (as we're talking about semantics) it would really be
better to revise the API so that it can explicitly express those nuances.

Sometimes - since the semantics of the Neutron API are not precisely
documented - it's not clear which of these situations one is in.  But I
think that the point that Sukhdev raised - about other networking projects
being suggested because of Neutron being perceived as not flexible enough -
is to do with (a); whereas the points that Kevin and Thierry responded with
- ways that the API is already _too_ flexible - are to do with (b).  So I'm
not sure that Kevin and Thierry's answers address Sukhdev's point.

It's possible for an API to have (a) and (b) problems simultaneously, and
to make progress on addressing them both.  In Neutron's case, a major
example of (a) has been the routed networks work, which (among other
things) generalized Neutron's network concept from being something that
always provides L2 adjacency between its ports, to something that may or
may not.  So it seems to me that Neutron is able to address (a) problems.
 (I'm personally less familiar with (b), but would guess that progress is
being made there too.)

Regards,
 Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-25 Thread Kevin Lefevre
Hi,

I did write a blueprint a while ago but did not start to implement it.

https://blueprints.launchpad.net/magnum/+spec/coreos-best-pratice


> Le 24 janv. 2017 à 23:16, Spyros Trigazis  a écrit :
> 
> Or start writing down (in the BP) what you want to put in the driver.
> Network, lbaas, scripts, the order of the scripts and then we can see
> if it's possible to adapt to the current coreos driver.
> 
> Spyros
> 
> On Jan 24, 2017 22:54, "Hongbin Lu"  wrote:
> As Spyros mentioned, an option is to start by cloning the existing templates. 
> However, I have a concern with this approach because it will incur a lot of 
> duplication. An alternative approach is modifying the existing CoreOS 
> templates in place. It might be a little more difficult to implement, but it 
> saves you the overhead of deprecating the old version and rolling out the new one.
> 
> 
> 
> Best regards,
> 
> Hongbin
> 
> 
> 
> From: Spyros Trigazis [mailto:strig...@gmail.com]
> Sent: January-24-17 3:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] CoreOS template v2
> 
> 
> 
> Hi.
> 
> 
> 
> IMO, you should add a BP and start by adding a v2 driver in /contrib.
> 
> 
> 
> Cheers,
> 
> Spyros
> 
> 
> 
> On Jan 24, 2017 20:44, "Kevin Lefevre"  wrote:
> 
> Hi,
> 
> The CoreOS template is not really up to date and in sync with upstream CoreOS 
> « Best Practice » (https://github.com/coreos/coreos-kubernetes); it is more a 
> port of the Fedora Atomic template, but CoreOS has its own Kubernetes 
> deployment method.
> 
> I’d like to implement the changes to sync the Kubernetes deployment on CoreOS 
> to the latest Kubernetes version (1.5.2) along with standard components 
> according to the CoreOS Kubernetes guide:
>   - « Default » add-ons like kube-dns, heapster and kube-dashboard (kube-ui 
> has been deprecated for a long time and is obsolete)
>   - Canal for network policy (Calico and Flannel)
>   - Add support for rkt as a container engine
>   - Support sane default options recommended by Kubernetes upstream 
> (admission control: https://kubernetes.io/docs/admin/admission-controllers/, 
> using service accounts…)
>   - Of course, add every new parameter to HOT.
> 
> These changes are difficult to implement as is (due to the fragment concept; 
> everything is a bit messy between the common and the specific template 
> fragments, especially for CoreOS).
> 
> I’m wondering if it is better to clone the CoreOS v1 template to a new v2 
> template and build from there?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Freezer] PTL Candidacy

2017-01-25 Thread Fausto Marzi
And you have at least all my support : )

On Wed, Jan 25, 2017 at 12:56 PM, Saad Zaher  wrote:

> Hello everyone,
>
> I'm happy to announce my candidacy to be the Freezer PTL for the Pike
> release
> cycle.
>
> The Freezer developers did a very good job over the past releases and I am
> sure
> they will continue doing an amazing job over the next release as well.
>
> For the Pike release cycle, I think we need to focus more on backing up
> OpenStack resources and on adding more backup engines to give more variety
> to OpenStack users; we should also continue working on the core Freezer
> features to improve stability.
>
> In this cycle we will also give more attention to freezer-dr, the disaster
> recovery part of Freezer. It provides cloud administrators the capability
> to evacuate VMs from failed compute nodes; we aim to take it beyond that
> and support more severe disasters.
>
>
> In order to achieve that, I think we should focus on:
>
> * Enhancing core freezer-agent features and continuing to refactor the
>   remaining parts to allow a pluggable architecture that supports more
>   engines and applications in Freezer.
>
> * Move from Elasticsearch to Oslo.db
>
> * Fully implement Oslo.policy
>
> * Implement version 2 of freezer-api
>
> * Integration tests. We need to increase the work done on testing. This
> will
>   help to stabilize Freezer.
>
> * Documentation. We should aim for a split, refactoring and global
>   improvement of our docs, which is a required step to grow the size of
>   our community.
>
> * Focus on implementing OpenStack engine(s) to backup OpenStack resources.
>
>
> I would be honoured to have your support.
>
> Thanks,
> Saad Zaher (szaher)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Live migration issue

2017-01-25 Thread Lenny Verkhovsky
Hi,

What domain name are you using?
Check for 'Traceback' and ' ERROR ' in the logs, maybe you will get a hint


2017-01-25 11:00:21.215 28309 INFO nova.compute.manager 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] You may see the error "libvirt: QEMU 
error: Domain not found: no domain with matching name." This error can be 
safely ignored.


-Original Message-
From: fabrice grelaud [mailto:fabrice.grel...@u-bordeaux.fr] 
Sent: Wednesday, January 25, 2017 1:06 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [openstack-ansible] Live migration issue

Hi osa team,

I’ve got a live migration issue in one direction but not in the other.
I deployed OpenStack with OSA, Ubuntu Trusty, stable/newton branch, 14.0.5 tag.

My 2 compute nodes are the same host type and run the nova-compute and 
cinder-volume (our Ceph cluster as backend) services.

There is no problem live-migrating an instance from Compute 2 to Compute 1, 
whereas the reverse is not true.
See log below:

Live migration instance Compute 2 to 1: OK

Compute 2 log
2017-01-25 11:00:15.621 28309 INFO nova.virt.libvirt.migration 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Increasing downtime to 46 ms after 0 sec 
elapsed time
2017-01-25 11:00:15.787 28309 INFO nova.virt.libvirt.driver 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration running for 0 secs, memory 100% 
remaining; (bytes processed=0, remaining=0, total=0)
2017-01-25 11:00:17.737 28309 INFO nova.compute.manager [-] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] VM Paused (Lifecycle Event)
2017-01-25 11:00:17.794 28309 INFO nova.virt.libvirt.driver 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration operation has completed
2017-01-25 11:00:17.795 28309 INFO nova.compute.manager 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] _post_live_migration() is started..
2017-01-25 11:00:17.815 28309 INFO oslo.privsep.daemon 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Running privsep helper: ['sudo', 
'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', 
'/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', 
'--privsep_sock_path', '/tmp/tmpfL96lI/privsep.sock']
2017-01-25 11:00:18.387 28309 INFO oslo.privsep.daemon 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Spawned new privsep daemon via rootwrap
2017-01-25 11:00:18.395 28309 INFO oslo.privsep.daemon [-] privsep daemon 
starting
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
running with uid/gid: 0/0
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep daemon 
running as pid 28815
2017-01-25 11:00:18.397 28309 INFO nova.compute.manager 
[req-aa0997d7-bf5f-480f-abc5-beadd2d03409 - - - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] During sync_power_state the instance has 
a pending task (migrating). Skip.
2017-01-25 11:00:18.538 28309 INFO nova.compute.manager 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Running instance usage 
audit for host p-oscompute02 from 2017-01-25 09:00:00 to 2017-01-25 10:00:00. 2 
instances.
2017-01-25 11:00:18.691 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Auditing locally available 
compute resources for node p-oscompute02.openstack.local
2017-01-25 11:00:19.634 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Total usable vcpus: 40, 
total allocated vcpus: 4
2017-01-25 11:00:19.635 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Final resource view: 
name=p-oscompute02.openstack.local phys_ram=128700MB used_ram=6144MB 
phys_disk=33493GB used_disk=22GB total_vcpus=40 used_vcpus=4 pci_stats=[]
2017-01-25 11:00:19.709 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Compute_service record 
updated for p-oscompute02:p-oscompute02.openstack.local
2017-01-25 11:00:20.163 28309 INFO os_vif 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 

[openstack-dev] [Freezer] PTL Candidacy

2017-01-25 Thread Saad Zaher
Hello everyone,

I'm happy to announce my candidacy to be the Freezer PTL for the Pike
release
cycle.

The Freezer developers did a very good job over the past releases and I am
sure
they will continue doing an amazing job over the next release as well.

For the Pike release cycle, I think we need to focus more on backing up
OpenStack resources and on adding more backup engines to give more variety
to OpenStack users; we should also continue working on the core Freezer
features to improve stability.

In this cycle we will also give more attention to freezer-dr, the disaster
recovery part of Freezer. It provides cloud administrators the capability
to evacuate VMs from failed compute nodes; we aim to take it beyond that
and support more severe disasters.


In order to achieve that, I think we should focus on:

* Enhancing core freezer-agent features and continuing to refactor the
  remaining parts to allow a pluggable architecture that supports more
  engines and applications in Freezer.

* Move from Elasticsearch to Oslo.db

* Fully implement Oslo.policy

* Implement version 2 of freezer-api

* Integration tests. We need to increase the work done on testing. This will
  help to stabilize Freezer.

* Documentation. We should aim for a split, refactoring and global
  improvement of our docs, which is a required step to grow the size of
  our community.

* Focus on implementing OpenStack engine(s) to backup OpenStack resources.


I would be honoured to have your support.

Thanks,
Saad Zaher (szaher)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui

2017-01-25 Thread Florian Fuchs


- Original Message -
> From: "Emilien Macchi" 
> To: "OpenStack Development Mailing List" 
> Sent: Tuesday, January 24, 2017 2:52:51 PM
> Subject: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui
> 
> I have been discussing with TripleO UI core reviewers and it's pretty
> clear Honza's work has been valuable, so we can propose him as part of
> the TripleO UI core team.
> His quality of code and reviews makes him a good candidate, and it would
> also help the other 2 core reviewers to accelerate the review process
> in the UI component.
> 
> Like usual, this is open for discussion, Tripleo UI core and TripleO
> core, please vote.

+1

Florian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Chris Dent

On Wed, 25 Jan 2017, Thierry Carrez wrote:


We were discussing this in the context of an "assert" tag, not a goal.


Yes, but it is often the case that changes are being evaluated as if
it were a goal. A couple of glance-related changes drew
reactions of "this doesn't meet compatibility guidelines":

https://review.openstack.org/#/c/420038/
https://review.openstack.org/#/c/414261/

This is perhaps a proper reaction as a sanity check, but if a
project does not subscribe to the mooted assert tag then whether it
is a blocker or not should be up to the project?


I think that's a good commitment to document, and knowing which projects
actually commit to that is very useful to our users (the appdev
variety). I don't think that means every project needs to commit to that
right now, or that microversions are the only way to make sure you won't
ever break API compatibility. I just think it's a good information bit
to communicate.


It is definitely a good commitment to document, but we need to make
sure that we express that it is an optional commitment, if in fact it
is. I get the impression that a lot of people think it is not.

And if the commitment is being made, then we need to make sure we
document what demarcates change boundaries (when they inevitably
happen) and how to manage them.
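
To make one such mechanism concrete, here is a sketch (no particular
project's implementation) of the microversion pattern mentioned earlier:
additive changes are gated on the version a client requests, so existing
clients keep seeing the old response shape.

def parse_version(header_value, default=(2, 0)):
    # e.g. "compute 2.3" or "2.3" -> (2, 3)
    if not header_value:
        return default
    major, minor = header_value.rsplit(' ', 1)[-1].split('.')
    return (int(major), int(minor))


def show_server(server, requested_version):
    body = {'id': server['id'], 'status': server['status']}
    # Field added in a hypothetical 2.3: older clients never see it,
    # so their view of the API stays stable.
    if requested_version >= (2, 3):
        body['host_status'] = server.get('host_status')
    return body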

I think we would be doing a huge disservice to our efforts at making
the APIs consistent (amongst the different services) if we have
multiple ways to manage them.

BTW: I think we should start using the term "stability" not
"compatibility".

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-01-25 Thread Jiří Stránský

On 24.1.2017 18:03, Juan Antonio Osorio wrote:

Sagi (sshnaidm on IRC) has done significant work in TripleO CI (both
on the current CI solution and in getting tripleo-quickstart jobs for
it); So I would like to propose him as part of the TripleO CI core team.



+1


I think he'll make a great addition to the team and will help move CI
issues forward quicker.

Best Regards,





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Jiří Stránský

On 23.1.2017 20:03, Emilien Macchi wrote:

Greeting folks,

I would like to propose some changes in our core members:

- Remove Jay Dobies who has not been active in TripleO for a while
(thanks Jay for your hard work!).
- Add Flavio Percoco as core on tripleo-common and tripleo-heat-templates
docker bits.
- Add Steve Backer as core on os-collect-config and also on the docker bits
in tripleo-common and tripleo-heat-templates.

Indeed, both Flavio and Steve have been involved in deploying TripleO
in containers, and their contributions are very valuable. I would like
to encourage them to keep doing more reviews, both in and outside of the
container bits.

As usual, core members are welcome to vote on the changes.


+1



Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui

2017-01-25 Thread Jiří Stránský

On 24.1.2017 14:52, Emilien Macchi wrote:

I have been discussing with TripleO UI core reviewers and it's pretty
clear Honza's work has been valuable, so we can propose him as part of
the TripleO UI core team.
His quality of code and reviews makes him a good candidate, and it would
also help the other 2 core reviewers to accelerate the review process
in the UI component.

Like usual, this is open for discussion, Tripleo UI core and TripleO
core, please vote.


+1



Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-25 Thread Julien Danjou
On Tue, Jan 24 2017, gordon chung wrote:

> you mean, keep alarm history in aodh and also in panko if needed? i'm ok 
> with that.

Yeah, IIRC there's an expirer in Aodh for alarm history based on TTL –
that's enough. That should probably be replaced with just a hard limit
on the number of history items you keep (e.g. 100), with the oldest
being dropped when the limit is hit.
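
Roughly (a sketch only, not aodh's actual storage code -- a deque with a
maxlen gives exactly that drop-the-oldest behaviour):

import collections

ALARM_HISTORY_MAX = 100  # illustrative hard limit per alarm


class BoundedAlarmHistory(object):
    def __init__(self, limit=ALARM_HISTORY_MAX):
        self._changes = collections.defaultdict(
            lambda: collections.deque(maxlen=limit))

    def record(self, alarm_id, change):
        # deque(maxlen=...) silently drops the oldest entry once full
        self._changes[alarm_id].append(change)

    def get_history(self, alarm_id):
        # newest first, the way alarm history is usually listed
        return list(reversed(self._changes[alarm_id]))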

And if somebody wants a full audit trail of what's done, Panko is the
way to go (you know, bread crumbs ;-).

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Live migration issue

2017-01-25 Thread fabrice grelaud
Hi osa team,

I’ve got a live migration issue in one direction but not in the other.
I deployed OpenStack with OSA, Ubuntu Trusty, stable/newton branch, 14.0.5 tag.

My 2 compute nodes are the same host type and run the nova-compute and 
cinder-volume (our Ceph cluster as backend) services.

There is no problem live-migrating an instance from Compute 2 to Compute 1, 
whereas the reverse is not true.
See log below:

Live migration instance Compute 2 to 1: OK

Compute 2 log
2017-01-25 11:00:15.621 28309 INFO nova.virt.libvirt.migration 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Increasing downtime to 46 ms after 0 sec 
elapsed time
2017-01-25 11:00:15.787 28309 INFO nova.virt.libvirt.driver 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration running for 0 secs, memory 100% 
remaining; (bytes processed=0, remaining=0, total=0)
2017-01-25 11:00:17.737 28309 INFO nova.compute.manager [-] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] VM Paused (Lifecycle Event)
2017-01-25 11:00:17.794 28309 INFO nova.virt.libvirt.driver 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration operation has completed
2017-01-25 11:00:17.795 28309 INFO nova.compute.manager 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] _post_live_migration() is started..
2017-01-25 11:00:17.815 28309 INFO oslo.privsep.daemon 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Running privsep helper: ['sudo', 
'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', 
'/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', 
'--privsep_sock_path', '/tmp/tmpfL96lI/privsep.sock']
2017-01-25 11:00:18.387 28309 INFO oslo.privsep.daemon 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Spawned new privsep daemon via rootwrap
2017-01-25 11:00:18.395 28309 INFO oslo.privsep.daemon [-] privsep daemon 
starting
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
running with uid/gid: 0/0
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep daemon 
running as pid 28815
2017-01-25 11:00:18.397 28309 INFO nova.compute.manager 
[req-aa0997d7-bf5f-480f-abc5-beadd2d03409 - - - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] During sync_power_state the instance has 
a pending task (migrating). Skip.
2017-01-25 11:00:18.538 28309 INFO nova.compute.manager 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Running instance usage 
audit for host p-oscompute02 from 2017-01-25 09:00:00 to 2017-01-25 10:00:00. 2 
instances.
2017-01-25 11:00:18.691 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Auditing locally available 
compute resources for node p-oscompute02.openstack.local
2017-01-25 11:00:19.634 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Total usable vcpus: 40, 
total allocated vcpus: 4
2017-01-25 11:00:19.635 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Final resource view: 
name=p-oscompute02.openstack.local phys_ram=128700MB used_ram=6144MB 
phys_disk=33493GB used_disk=22GB total_vcpus=40 used_vcpus=4 pci_stats=[]
2017-01-25 11:00:19.709 28309 INFO nova.compute.resource_tracker 
[req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] Compute_service record 
updated for p-oscompute02:p-oscompute02.openstack.local
2017-01-25 11:00:20.163 28309 INFO os_vif 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Successfully unplugged vif 
VIFBridge(active=True,address=fa:16:3e:d2:7c:83,bridge_name='brqc434ace8-45',has_traffic_filtering=True,id=dff20b91-a654-437d-8a74-dc55aeac8ab7,network=Network(c434ace8-45f6-4bb1-95bc-d52dadb557c7),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tapdff20b91-a6')
2017-01-25 11:00:20.201 28309 INFO nova.virt.libvirt.driver 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Deleting instance files 
/var/lib/nova/instances/c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3_del
2017-01-25 11:00:20.202 28309 INFO nova.virt.libvirt.driver 
[req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 
