Re: [openstack-dev] [heat][nova] Does Heat support Nova micro versions

2018-10-29 Thread Rabi Mishra
On Sat, Oct 27, 2018 at 10:08 PM Gary Kotton  wrote:

> Hi,
>
> Does heat support os-compute-api-version? Say for example I am using
> queens but have a use case via heat that requires an API parameter that was
> capped in the 2.56
>

There isn't a way to specify a compute microversion in the template.

Till queens, the heat client plugin for nova used the base api version[1],
unless a specific feature/property required a higher microversion[2]. So
features capped (i.e. removed) in newer api versions should be usable
without any change.

However, since rocky we use the max supported api microversion as the
default[3], to support new features without much change and to avoid having
too many versioned clients. As a side-effect, features/properties capped by
newer nova api microversions can't be used any more.

We probably have to look for a better way to handle this in the future.


[1]
https://github.com/openstack/heat/blob/stable/queens/heat/engine/clients/os/nova.py#L61
[2]
https://github.com/openstack/heat/blob/stable/queens/heat/engine/resources/openstack/nova/server.py#L838
[3]
https://github.com/openstack/heat/blob/stable/rocky/heat/engine/clients/os/nova.py#L100
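For context, this is roughly how a compute client gets pinned to a
microversion with python-novaclient; a minimal sketch with placeholder
credentials, not heat's actual client plugin code:

    # Minimal sketch, not heat code; auth values below are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo', user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # Till queens heat effectively used the base version unless a feature
    # needed more; since rocky it defaults to the max supported microversion.
    nova_base = client.Client('2.1', session=sess)
    nova_pinned = client.Client('2.56', session=sess)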


> microversion.
>
> Thanks
>
> Gary


-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Rabi Mishra
On Thu, Sep 27, 2018 at 11:45 PM Zane Bitter  wrote:

> On 26/09/18 10:27 PM, Qiming Teng wrote:
>
 

> Heat still has a *lot* of users running very important stuff on Heat
> scaling group code which, as you know, is burdened by a lot of technical
> debt.
>
Though I agree that a common library that can be used by both projects
would be really good, I still don't understand what user issues we're
trying to address here (though the resource implementations are not the
best, they actually work).

As far as duplicated effort is concerned (that's the only justification I
could get from the etherpad), possibly senlin duplicated some stuff
expecting to replace the heat implementation in time. Also, we've not made
any feature additions to heat group resources for a long time (expecting
senlin to do it instead), and I've not seen any major bugs reported by
users. Maybe we're talking about duplicated effort in the "future", now that
we have changed plans for heat ASG? ;)

> >> What will be great is if we can build a common library across projects, and use
> >> that common library in both projects, make sure we have all improvements
> >> implemented in that library, and finally use Senlin from that library
> >> call in Heat autoscaling group. And in the long term, we gonna let all
>
> 

>
> +1 - to expand on Rico's example, we have at least 3 completely separate
> implementations of batching, each supporting different actions:
>
> Heat AutoscalingGroup: updates only
> Heat ResourceGroup: create or update
> Senlin Batch Policy: updates only
>
> and users are asking for batch delete as well.
>

I've seen this request a few times. But what I wonder is: why would a user
want to do a delete in a controlled, batched manner? The only justification
provided is that "they want to throttle requests to other services, as those
services are not able to handle large concurrent requests sent by heat
properly". Are we not looking in the wrong place to fix those issues?

IMHO, a good list of user issues on the mentioned etherpad would really
help justify the effort needed.

> This is clearly an area
> where technical debt from duplicate implementations is making it hard to
> deliver value to users.
>
> cheers,
> Zane.
>


-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-27 Thread Rabi Mishra
On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk 
wrote:

> Hi,
>
> On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra  wrote:
> > On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker  wrote:
> Steve mentioned kubectl (kubernetes CLI which communicates with
>

Not sure what he meant. Maybe I'm missing something, but I've not heard of
'kubectl standalone', though he might have meant a standalone k8s cluster on
every node, as you think.


> kube-api) not kubelet which is only one component of kubernetes. All
> kubernetes components may be compiled as one binary (hyperkube) which
> can be used to minimize footprint. Generated ansible for kubelet is
> not enough as kubelet doesn't have any orchestration logic.
>

What orchestration logic do we have with TripleO atm? AFAIK we provide
roles data for service placement across nodes, right?
I see standalone kubelet as a first step towards scheduling openstack
services within a k8s cluster in the future (maybe).

>>
> >> This was a while ago now so this could be worth revisiting in the
> future.
> >> We'll be making gradual changes, the first of which is using podman to
> >> manage single containers. However podman has native support for the pod
> >> format, so I'm hoping we can switch to that once this transition is
> >> complete. Then evaluating kubectl becomes much easier.
> >>
> >>> Question. Rather then writing a middle layer to abstract both container
> >>> engines, couldn't you just use CRI? CRI is CRI-O's native language, and
> >>> there is support already for Docker as well.
> >>
> >>
> >> We're not writing a middle layer, we're leveraging one which is already
> >> there.
> >>
> >> CRI-O is a socket interface and podman is a CLI interface that both sit
> on
> >> top of the exact same Go libraries. At this point, switching to podman
> needs
> >> a much lower development effort because we're replacing docker CLI
> calls.
> >>
> > I see good value in evaluating kubelet standalone and leveraging its
> > inbuilt grpc interfaces with cri-o (rather than using podman) as a long
> term
> > strategy, unless we just want to provide an alternative to docker
> container
> > runtime with cri-o.
>
> I see no value using kubelet without kubernetes IMHO.
>
>
>
> >>>
> >>>
> >>> Thanks,
> >>> Kevin
> >>> 
> >>> From: Jay Pipes [jaypi...@gmail.com]
> >>> Sent: Thursday, August 23, 2018 8:36 AM
> >>> To: openstack-dev@lists.openstack.org
> >>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for
> nice
> >>> API calls
> >>>
> >>> Dan, thanks for the details and answers. Appreciated.
> >>>
> >>> Best,
> >>> -jay
> >>>
> >>> On 08/23/2018 10:50 AM, Dan Prince wrote:
> >>>>
> >>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
> >>>>>
> >>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
> >>>>>>
> >>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:
> >>>>>>
> >>>>>>   More seriously here: there is an ongoing effort to converge
> the
> >>>>>>   tools around containerization within Red Hat, and we, TripleO
> >>>>>> are
> >>>>>>   interested to continue the containerization of our services
> >>>>>> (which
> >>>>>>   was initially done with Docker & Docker-Distribution).
> >>>>>>   We're looking at how these containers could be managed by k8s
> >>>>>> one
> >>>>>>   day but way before that we plan to swap out Docker and join
> >>>>>> CRI-O
> >>>>>>   efforts, which seem to be using Podman + Buildah (among other
> >>>>>> things).
> >>>>>>
> >>>>>> I guess my wording wasn't the best but Alex explained way better
> here:
> >>>>>>
> >>>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%
> 23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
> >>>>>>
> >>>>>> If I may have a chance to rephrase, I guess our current intention is
> >>>>>> to
> >>>>>> continue our containerization and investigate how we can improve

Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-26 Thread Rabi Mishra
on some problems that we, TripleO have been facing
>>>>> since we containerized our services.
>>>>>
>>>>> We're doing all of this in the open, so feel free to ask any question.
>>>>>
>>>> I appreciate your response, Emilien, thank you. Alex' responses to
>>>> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>>>>
>>>> For now, it *seems* to me that all of the chosen tooling is very Red Hat
>>>> centric. Which makes sense to me, considering Triple-O is a Red Hat
>>>> product.
>>>>
>>> Perhaps a slight clarification here is needed. "Director" is a Red Hat
>>> product. TripleO is an upstream project that is now largely driven by
>>> Red Hat and is today marked as single vendor. We welcome others to
>>> contribute to the project upstream just like anybody else.
>>>
>>> And for those who don't know the history the TripleO project was once
>>> multi-vendor as well. So a lot of the abstractions we have in place
>>> could easily be extended to support distro specific implementation
>>> details. (Kind of what I view podman as in the scope of this thread).
>>>
>>> I don't know how much of the current reinvention of container runtimes
>>>> and various tooling around containers is the result of politics. I don't
>>>> know how much is the result of certain companies wanting to "own" the
>>>> container stack from top to bottom. Or how much is a result of technical
>>>> disagreements that simply cannot (or will not) be resolved among
>>>> contributors in the container development ecosystem.
>>>>
>>>> Or is it some combination of the above? I don't know.
>>>>
>>>> What I *do* know is that the current "NIH du jour" mentality currently
>>>> playing itself out in the container ecosystem -- reminding me very much
>>>> of the Javascript ecosystem -- makes it difficult for any potential
>>>> *consumers* of container libraries, runtimes or applications to be
>>>> confident that any choice they make towards one of the other will be the
>>>> *right* choice or even a *possible* choice next year -- or next week.
>>>> Perhaps this is why things like openstack/paunch exist -- to give you
>>>> options if something doesn't pan out.
>>>>
>>> This is exactly why paunch exists.
>>>
>>> Re, the podman thing I look at it as an implementation detail. The
>>> good news is that given it is almost a parity replacement for what we
>>> already use we'll still contribute to the OpenStack community in
>>> similar ways. Ultimately whether you run 'docker run' or 'podman run'
>>> you end up with the same thing as far as the existing TripleO
>>> architecture goes.
>>>
>>> Dan
>>>
>>> You have a tough job. I wish you all the luck in the world in making
>>>> these decisions and hope politics and internal corporate management
>>>> decisions play as little a role in them as possible.
>>>>
>>>> Best,
>>>> -jay
>>>>
>>>> 



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm

2018-08-13 Thread Rabi Mishra
On Tue, Aug 14, 2018 at 4:40 AM, Eric K  wrote:

> It appears that gabbi<1.42.1 is causing an error with the heat tempest
> plugin in the congress stable/queens dsvm job [1][2][3].

I wonder why you're enabling heat-tempest-plugin in the first place. I see
a number of tempest plugins enabled; however, you don't seem to gate on the
tests in those plugins[1].

[1]
https://github.com/openstack/congress/blob/master/playbooks/legacy/congress-devstack-api-base/run.yaml#L61


> The issue was
> addressed in heat tempest plugin [4], but the problem remains for
> stable/queens jobs because the queens upper-constraint is still at
> 1.40.0 [5].
>
>
Yeah, an upper-constraints (uc) bump to 1.42.1 is required if you're enabling it.
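i.e. something along these lines in the stable/queens upper-constraints.txt
(an illustrative diff, subject to the requirements team's approval):

    -gabbi===1.40.0
    +gabbi===1.42.1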


> Any suggestions on how to proceed? Thank you!
>
> [1] https://bugs.launchpad.net/heat-tempest-plugin/+bug/1749218
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1609361
> [3] http://logs.openstack.org/41/567941/2/check/congress-
> devstack-api-mysql/c232d8a/job-output.txt.gz#_2018-08-13_11_46_28_441837
> [4] https://review.openstack.org/#/c/544025/
> [5] https://github.com/openstack/requirements/blob/stable/
> queens/upper-constraints.txt#L245
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Rabi Mishra
On Tue, Jun 26, 2018 at 2:48 PM, Ghanshyam Mann 
wrote:

> Hello Everyone,
>
> In Queens cycle,  community goal to split the Tempest Plugin has been
> completed [1] and i think almost all the projects have separate repo for
> tempest plugin [2]. Which means each tempest plugins are being separated
> from their project release model.  Few projects have started the
> independent release model for their plugins like kuryr-tempest-plugin,
> ironic-tempest-plugin etc [3].  I think neutron-tempest-plugin also
> planning as chatted with amotoki.
>
> There might be some changes in Tempest which might not work with older
> version of Tempest Plugins.


I don't think that's a good premise. Isn't tempest branchless and, by
definition, backward compatible with service releases?

If there are changes to the plugin interface in tempest, I would expect
those to be backward compatible as well. Likewise, plugins should be
backward compatible with their respective projects, so any kind of release
model would work.

Else, I think the whole branchless concept is of very little use.

For example, If I am testing any production cloud which has Nova, Neutron,
> Cinder, Keystone , Aodh, Congress etc  i will be using Tempest and Aodh's ,
> Congress's Tempest plugins. With Independent release model of each Tempest
> Plugins, there might be chance that the Aodh's or Congress's Tempest plugin
> versions are not compatible with latest/known Tempest versions. It will
> become hard to find the compatible tag/release of Tempest and Tempest
> Plugins or in some cases i might need to patch up the things.
>
> During QA feedback sessions at Vancouver Summit, there was feedback to
> coordinating the release of all Tempest plugins and Tempest [4] (also
> amotoki talked to me on this as neutron-tempest-plugin is planning their
> first release). Idea is to release/tag all the Tempest plugins and Tempest
> together so that specific release/tag can be identified as compatible
> version of all the Plugins and Tempest for testing the complete stack. That
> way user can get to know what version of Tempest Plugins is compatible with
> what version of Tempest.
>
> For above use case, we need some coordinated release model among Tempest
> and all the Tempest Plugins. There should be single release of all Tempest
> Plugins with well defined tag whenever any Tempest release is happening.
> For Example, Tempest version 19.0.0 is to mark the "support of the Rocky
> release". When releasing the Tempest 19.0, we will release all the Tempest
> plugins also to tag the compatibility of plugins with Tempest for "support
> of the Rocky release".
>
> One way to make this coordinated release (just a initial thought):
> 1. Release Each Tempest Plugins whenever there is any major version
> release of Tempest (like marking the support of OpenStack release in
> Tempest, EOL of OpenStack release in Tempest)
> 1.1 Each plugin or Tempest can do their intermediate release of minor
> version change which are in backward compatible way.
> 1.2 This coordinated Release can be started from latest Tempest
> Version for simple reading.  Like if we start this coordinated release from
> Tempest version 19.0.0 then,
> each plugins will be released as 19.0.0 and so on.
>
> Giving the above background and use case of this coordinated release,
> A) I would like to ask each plugins owner if you are agree on this
> coordinated release?  If no, please give more feedback or issue we can face
> due to this coordinated release.
>
> If we get the agreement from all Plugins then,
> B) Release team or TC help to find the better model for this use case or
> improvement in  above model.
>
> C) Once we define the release model, find out the team owning that release
> model (there are more than 40 Tempest plugins currently) .
>
> NOTE: Till we decide the best solution for this use case, each plugins can
> do/keep doing their plugin release as per independent release model.
>
> [1] https://governance.openstack.org/tc/goals/queens/split-tempe
> st-plugins.html
> [2] https://docs.openstack.org/tempest/latest/plugin-registry.html
> [3] https://github.com/openstack/kuryr-tempest-plugin/releases
>https://github.com/openstack/ironic-tempest-plugin/releases
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June
> /131011.html
>
>
> -gmann
>
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat][heat-templates] Creating a role with no domain

2018-06-21 Thread Rabi Mishra
Looks like that's a bug: we create a domain-specific role in the 'default'
domain[1] when no domain is specified.

[1]
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/keystone/role.py#L54

You're welcome to raise a bug and propose a fix; we should probably just be
removing the default.
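A minimal sketch of the kind of change meant here, assuming the property
schema in role.py looks roughly like this (not the actual patch):

    # Hedged sketch, not heat's exact source: dropping the hard-coded
    # default would let a template omit 'domain' and get a domain-less
    # role, matching the 'openstack role create' CLI behaviour.
    from heat.common.i18n import _
    from heat.engine import constraints, properties

    domain_schema = properties.Schema(
        properties.Schema.STRING,
        _('Name or id of keystone domain.'),
        # default='default',  <-- the default suspected to cause the bug
        constraints=[constraints.CustomConstraint('keystone.domain')],
    )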

On Thu, Jun 21, 2018 at 4:14 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
viktor.tikka...@nokia.com> wrote:

> Hi!
>
> There was a new 'domain' property added to OS::Keystone::Role
> (https://storyboard.openstack.org/#!/story/1684558,
> https://review.openstack.org/#/c/459033/).
>
> With “openstack role create” CLI command it is still possible to create
> roles with no associated domains; but it seems that the same cannot be done
> with heat templates.
>
> An example: if I create two roles, CliRole (with “openstack role create
> CliRole” command)  and SimpleRole with the following heat template:
>
> heat_template_version: 2015-04-30
> description: Creates a role
> resources:
>   role_resource:
> type: OS::Keystone::Role
> properties:
>   name: SimpleRole
>
> the result in the keystone database will be:
>
> MariaDB [keystone]> select * from role;
> +----------------------------------+------------------+-------+-----------+
> | id                               | name             | extra | domain_id |
> +----------------------------------+------------------+-------+-----------+
> | 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole       | {}    | default   |
> | 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole          | {}    | <>        |
> | 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {}    | <>        |
> | 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user  | {}    | <>        |
> | 9fe2ff9ee4384b1894a90878d3e92bab | _member_         | {}    | <>        |
> | e174c27e79b84ea392d28224eb0af7c9 | admin            | {}    | <>        |
> +----------------------------------+------------------+-------+-----------+
>
> Should it be possible to create a role without associated domain with a
> heat template?
>
> -V.
>
>
>


-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat][ci][infra][aodh][telemetry] telemetry test broken on oslo.messaging stable/queens

2018-06-11 Thread Rabi Mishra
On Tue, Jun 12, 2018 at 10:33 AM, Mehdi Abaakouk  wrote:

>
> Hi,
>
> The tempest plugin error remember me something we got in telemetry gate a
> while back.
>
> We fix the telemetry tempest plugin with https://github.com/openstack/t
> elemetry-tempest-plugin/commit/11277a8bee2b0ee0688ed32cc0e836872c24ee4b
>
> So I propose the same for heat tempest plugin:
> https://review.openstack.org/574550
>
After
https://github.com/cdent/gabbi/pull/243/commits/01993966c179791186977e27c64b9e525a566408
(gabbi === 1.42.0), it just checks that host is not None, and we pass an
empty string here, so it should not fail.

However, I think the issue is that queens upper-constraints have
gabbi===1.40.0. Unless we can bump that, we've to go with this workaround.
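In other words, the behaviour change amounts to something like this
(a paraphrase of the effect, not gabbi's actual source):

    # gabbi < 1.42.0 effectively rejected any falsy host (so '' failed);
    # >= 1.42.0 only rejects None, which is why an empty string now works.
    def check_host(host):
        if host is None:
            raise ValueError('host is required')
        return host

    check_host('')  # accepted with gabbi >= 1.42.0 semantics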

> Hope that helps,
> sileht
>
>
> Le 2018-06-11 21:53, Ken Giusti a écrit :
>
>> Updated subject to include [aodh] and [telemetry]
>>
>> On Tue, Jun 5, 2018 at 11:41 AM, Doug Hellmann 
>> wrote:
>>
>>> Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400:
>>>
>>>> Hi,
>>>>
>>>> The telemetry integration test for oslo.messaging has started failing
>>>> on the stable/queens branch [0].
>>>>
>>>> A quick review of the logs points to a change in heat-tempest-plugin
>>>> that is incompatible with the version of gabbi from queens upper
>>>> constraints (1.40.0) [1][2].
>>>>
>>>> The job definition [3] includes required-projects that do not have
>>>> stable/queens branches - including heat-tempest-plugin.
>>>>
>>>> My question - how do I prevent this job from breaking when these
>>>> unbranched projects introduce changes that are incompatible with
>>>> upper-constrants for a particular branch?
>>>>
>>>
>>> Aren't those projects co-gating on the oslo.messaging test job?
>>>
>>> How are the tests working for heat's stable/queens branch? Or telemetry?
>>> (whichever project is pulling in that tempest repo)
>>>
>>>
>> I've run the stable/queens branches of both Aodh[1] and Heat[2] - both
>> failed.
>>
>> Though the heat failure is different from what we're seeing on
>> oslo.messaging [3],
>> the same warning about gabbi versions is there [4].
>>
>> However the Aodh failure is exactly the same as the oslo.messaging one
>> [5] - this makes sense since the oslo.messaging test is basically
>> running the same telemetry-tempest-plugin test.
>>
>> So this isn't something unique to oslo.messaging - the telemetry
>> integration test is busted in stable/queens.
>>
>> I'm going to mark these tests as non-voting on oslo.messaging's queens
>> branch for now so we can land some pending patches.
>>
>>
>> [1] https://review.openstack.org/#/c/574306/
>> [2] https://review.openstack.org/#/c/574311/
>> [3]
>> http://logs.openstack.org/11/574311/1/check/heat-functional-
>> orig-mysql-lbaasv2/21cce1d/job-output.txt.gz#_2018-06-11_17_30_51_106223
>> [4]
>> http://logs.openstack.org/11/574311/1/check/heat-functional-
>> orig-mysql-lbaasv2/21cce1d/logs/devstacklog.txt.gz#_2018-
>> 06-11_17_09_39_691
>> [5]
>> http://logs.openstack.org/06/574306/1/check/telemetry-dsvm-i
>> ntegration/0a9620a/job-output.txt.gz#_2018-06-11_16_53_33_982143
>>
>>
>>
>>
>>>> I've tried to use override-checkout in the job definition, but that
>>>> seems a bit hacky in this case since the tagged versions don't appear
>>>> to work and I've resorted to a hardcoded ref [4].
>>>>
>>>> Advice appreciated, thanks!
>>>>
>>>> [0] https://review.openstack.org/#/c/567124/
>>>> [1] http://logs.openstack.org/24/567124/1/check/oslo.messaging-t
>>>> elemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-
>>>> post_test_hook.txt.gz#_2018-05-16_05_20_05_624
>>>> [2] http://logs.openstack.org/24/567124/1/check/oslo.messaging-t
>>>> elemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.
>>>> txt.gz#_2018-05-16_05_19_06_332
>>>> [3] https://git.openstack.org/cgit/openstack/oslo.messaging/tree
>>>> /.zuul.yaml?h=stable/queens#n250
>>>> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml
>>>>
>>>
>>> 
>>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
>
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat][neutron] Extraroute support

2018-06-01 Thread Rabi Mishra
On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona 
wrote:

> Hi,
>
> Could somebody help me out with Neutron's Extraroute support in Hot
> templates.
> The support status of the Extraroute is support.UNSUPPORTED in heat, and
> only create and delete are the supported operations.
> see: https://github.com/openstack/heat/blob/master/heat/engine/re
> sources/openstack/neutron/extraroute.py#LC35
>
>
As I see the unsupported tag was added when the feature was moved from the
> contrib folder to in-tree (https://review.openstack.org/186608)
> Perhaps you can help me out why only create and delete are supported and
> update not.
>
>
I think most resources, when moved from contrib to in-tree, were marked as
unsupported. Adding routes to an existing router from multiple stacks can
be racy, which is probably why the use of this resource is not encouraged,
and hence it's not supported. You can see the discussion in the original
patch that proposed this resource: https://review.openstack.org/#/c/41044/

Not sure if things have changed on the neutron side for us to revisit those
concerns.

Also, it does not have any update_allowed properties, hence no
handle_update(). The resource would be replaced if you change any property.
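For illustration, this is how update-in-place is expressed in a heat
resource's property schema; a generic sketch, not extraroute.py itself:

    # Only properties declared with update_allowed=True are handled by
    # handle_update(); changing any other property triggers replacement.
    from heat.engine import properties

    example_schema = properties.Schema(
        properties.Schema.STRING,
        'An illustrative property that could be updated without replacement.',
        update_allowed=True,
    )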

Hope it helps.



> Thanks in advance for  the help.
>
> Regards
> Lajos
>
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project

2018-03-07 Thread Rabi Mishra
On Wed, Mar 7, 2018 at 6:10 PM, Ghanshyam Mann <gm...@ghanshyammann.com>
wrote:

>  Hi All,
>
> QA had discussion in Dublin PTG about interop adds-on tests location.
> First of all thanks all (specially markvoelker, dhellmann, mugsie) for
> joining the sessions. and I am glad we conclude the things and agreed on
> solution.
>
> Discussion was carry forward from the ML discussion [1] and to get the
> agreement about interop adds-on program tests location.
>
> Till now only 2 projects (heat and designate) are in list of adds-on
> program from interop side. After discussion and points from all stack
> holders, QA team agreed to host these 2 projects interop tests.  Tests from
> both projects are not much as of now and QA team can accommodate to host
> their interop tests.
>
> Along with that agreement we had few more technical points to consider
> while moving designate and heat interop tests in Tempest repo. All the
> interop tests going to be added in Tempest must to be Tempest like tests.
> Tempest like tests here means tests written using Tempest interfaces and
> guidelines. For example, heat has their tests in heat-tempest-plugin based
> on gabbi and to move heat interop tests to Tempest those have to be written
> as Tempest like test. This is because if we accept non-tempest like tests
> in Tempest then, it will be too difficult to maintain by Tempest team.
>
> Projects (designate and heat) and QA team will work closely to move
> interop tests to Tempest repo which might needs some extra work of
> standardizing their tests and interface used by them like service clients
> etc.
>

Though I've not been part of any of these discussions, this seems to be
exactly the opposite of what I've been given to understand by the team,
i.e. Heat is not rewriting the gabbi api tests used by the Trademark
program, but would create a new tempest plugin (a new repo,
'orchestration-trademark-tempest-plugin') to host the heat related tests
that are currently candidates for the Trademark program?

>
> In future, if there are more new interop adds-on program proposal then, we
> need to analyse the situation again regarding QA team bandwidth. TC or QA
> or interop team needs to raise the resource requirement to Board of
> Directors before any more new adds-on program is being proposed. If QA team
> has less resource and less review bandwitdh then we cannot accept the more
> interop programs till QA get more resource to maintain new interop tests.
>
> Overall Summary:
> - QA team agreed to host the interop tests for heat and designate in
> Tempest repo.
> - Existing TC resolution needs to be adjust about the QA team resource
> bandwidth requirement. If there is going to be more adds-on program
> proposal then, QA team will not accept the new interop tests if QA team
> bandwidth issue still exist that time also.
> - Tempest will document the clear process about interop tests addition and
> other more care items etc.
> - Projects team to make their tests and interface as Tempest like tests
> and stable interfaces standards. Tempest team will closely work and help
> Designate and Heat on this.
>
> Action Items:
> - mugsie to abandon https://review.openstack.org/#/c/521602 with quick
> summary of discussion here at PTG
> - markvoelker to write up clarification to InteropWG process stating that
> tests should be moved into Tempest before being proposed to the BoD
> - markvoelker to work with gmann before next InteropWG+BoD discussion to
> frame up a note about resourcing testing for add-on/vertical programs
> - dhellmann to adjust the TC resolution for resource requirement in QA
> when new adds-on program is being proposed
> - project teams to convert  interop test and  framework as per tempest
> like tests and propose to add to tempest repo.
> - gmann to define process in QA about interop tests addition and
> maintainance
>
> We have added this as one of the monitoring/helping item for QA to make
> sure it is done without delay.  Let's work together to finish this
> activity.
>
> Discussion Details: https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-
> test-for-adds-on-project
>
> ..1 http://lists.openstack.org/pipermail/openstack-dev/2018-
> January/126146.html
>
> -gmann
>
>


-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Rabi Mishra
On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor <mord...@inaugust.com> wrote:

> On 02/26/2018 09:57 AM, Akihiro Motoki wrote:
>
>> Hi neutron and openstacksdk team,
>>
>> This mail proposes to change the first priority of neutron-related
>> python binding to OpenStack SDK rather than neutronclient python
>> bindings.
>> I think it is time to start this as OpenStack SDK became a official
>> project in Queens.
>>
>
> ++
>
>
> [Current situations and problems]
>>
>> Network OSC commands are categorized into two parts: OSC and
>> neutronclient OSC plugin.
>> Commands implemented in OSC consumes OpenStack SDK
>> and commands implemented as neutronclient OSC plugin consumes
>> neutronclient python bindings.
>> This brings tricky situation that some features are supported only in
>> OpenStack SDK and some features are supported only in neutronclient
>> python bindings.
>>
>> [Proposal]
>>
>> The proposal is to implement all neutron features in OpenStack SDK as
>> the first citizen,
>> and the neutronclient OSC plugin consumes corresponding OpenStack SDK
>> APIs.
>>
>> Once this is achieved, users of OpenStack SDK users can see all
>> network related features.
>>
>> [Migration plan]
>>
>> The migration starts from Rocky (if we agree).
>>
>> New features should be supported in OpenStack SDK and
>> OSC/neutronclient OSC plugin as the first priority. If new feature
>> depends on neutronclient python bindings, it can be implemented in
>> neutornclient python bindings first and they are ported as part of
>> existing feature transition.
>>
>> Existing features only supported in neutronclient python bindings are
>> ported into OpenStack SDK,
>> and neutronclient OSC plugin will consume them once they are
>> implemented in OpenStack SDK.
>>
>
> I think this is a great idea. We've got a bunch of good
> functional/integrations tests in the sdk gate as well that we can start
> running on neutron patches so that we don't lose cross-gating.
>
> [FAQ]
>>
>> 1. Will neutornclient python bindings be removed in future?
>>
>> Different from "neutron" CLI, as of now, there is no plan to drop the
>> neutronclient python bindings.
>> Not a small number of projects consumes it, so it will be maintained
>> as-is.
>> The only change is that new features are implemented in OpenStack SDK
>> first and
>> enhancements of neutronclient python bindings will be minimum.
>>
>> 2. Should projects that consume neutronclient python bindings switch
>> to OpenStack SDK?
>>
>> Necessarily not. It depends on individual projects.
>> Projects like nova that consumes small set of neutron features can
>> continue to use neutronclient python bindings.
>> Projects like horizon or heat that would like to support a wide range
>> of features might be better to switch to OpenStack SDK.
>>
>
> We've got a PTG session with Heat to discuss potential wider-use of SDK
> (and have been meaning to reach our to horizon as well) Perhaps a good
> first step would be to migrate the 
> heat.engine.clients.os.neutron:NeutronClientPlugin
> code in Heat from neutronclient to SDK.


Yeah, this would only be possible after openstacksdk supports all neutron
features, as mentioned in the proposal.

Note: We had initially added the OpenStackSDKPlugin in heat to support
neutron segments and were thinking of doing all new neutron stuff with
openstacksdk. However, we soon realised that was not possible when
implementing neutron trunk support and had to drop the idea.
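To make the two styles concrete, here is a rough sketch of the same call
through both bindings (it assumes an authenticated keystoneauth1 session
'sess' and a clouds.yaml entry named 'devstack'; not heat's plugin code):

    import openstack
    from neutronclient.v2_0 import client as neutron_client

    # neutronclient python bindings (what NeutronClientPlugin wraps today);
    # sess is a keystoneauth1.session.Session built elsewhere.
    neutron = neutron_client.Client(session=sess)
    networks = neutron.list_networks()['networks']

    # openstacksdk (what the proposal would make the first-class binding)
    conn = openstack.connect(cloud='devstack')
    networks_sdk = list(conn.network.networks())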


> There's already an heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin
> plugin in Heat. I started a patch to migrate senlin from senlinclient
> (which is just a thin wrapper around sdk): https://review.openstack.org/#
> /c/532680/
>
> For those of you who are at the PTG, I'll be giving an update on SDK after
> lunch on Wednesday. I'd also be more than happy to come chat about this
> more in the neutron room if that's useful to anybody.
>
> Monty
>
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat][qa] Split tempest plugin from heat

2017-11-29 Thread Rabi Mishra
On Wed, Nov 29, 2017 at 9:51 PM, Zane Bitter <zbit...@redhat.com> wrote:

> On 19/11/17 03:08, Rabi Mishra wrote:
>
>> Hi All,
>>
>> As part of community goal[1] for Queens, we've completed the repo split
>> and created the new project[2].
>>
>> The next objective is to use the new plugin in our jobs. As we've
>> merged some changes after the split, I've synced them and fixed some minor
>> issues before using it in the gate jobs.
>>
>> These are very small changes and I would suggest we review/land them on
>> priority (we don't have to keep syncing them again and again for broken
>> jobs). We should probably *not approve* any changes to integration tests in
>> heat before these go in.
>>
>> - Changes in heat-tempest-plugin (sync  missing patches and fixes)
>> https://review.openstack.org/#/q/project:openstack/heat-temp
>> est-plugin+topic:sync_from_heat
>>
>> - Use heat-tempest-plugin for integration jobs
>> https://review.openstack.org/#/c/508112/
>>
>> - Use heat-tempest-plugin for grenade job
>> https://review.openstack.org/#/c/521246/
>>
>> - Add heat integration jobs to heat-tempest-plugin check/gate queue
>> https://review.openstack.org/#/c/521340/
>>I guess this would result in some issues, where we can't add any
>> changes to heat that breaks the existing tests, as the changes for both
>> projects would have a circular dependency (not sure how it works atm with
>> other plugins!).
>>
>> - Remove plugin an integration tests from heat (This has -1 atm as
>> releasenote job is broken, waiting for infra to fix it)
>> https://review.openstack.org/#/c/521263/ <https://review.openstack.org/
>> #/c/521263/>
>>
>> I've also created an etherpad[3] to track these.
>>
>> Also, the plugin project is expected to be branchless (we may not
>> backport these job changes to stable branches soon though), we've to find
>> a  way to run additional tests for any new feature only on >
>> . AFAIK, other projects check api microversions supported
>> and without microversions in heat, may be we've find an alternate way.
>>
>
> So from discussion on IRC today I learned that the plan agreed at the PTG
> was *not* to move all of heat_integrationtests to a separate repo, but just
> those tests that test the API and are likely to be used in the trademark
> program for verifying clouds.
>
>
Well, the etherpad[1] for the session clearly mentions "Create the new repo
and import API and *scenario* tests" and *not* "tests that test the API and
are likely to be used in trademark programs". What you mentioned above is
probably the conclusion from all the discussion we had on IRC yesterday.

In my view that effectively means just the Gabbi tests.
>
>
> However, what is actually happening is that *all* of the
> integration/scenario tests are moving to the separate branchless repo. This
> looks to me like it will be a disaster for developer and reviewer
> productivity, and project quality[1]. (Other folks seem inclined to agree.)
>

Just to make it clear, there had been several discussions on 'what to
move'/'branchless or not' in meetings[2] and on IRC[3], and the change of
plan to move the functional tests was based on the following.

- All our tests are API driven. There is probably not much difference
between functional and scenario (i.e. if we only move scenario tests, we
would have the same kind of issues as we would with functional tests).

- All our tests run with tempest as the test runner and use the same tempest
configuration. Unless we run the remaining in-tree tests without tempest, we
would end up with the same set of problems the goal wanted to address
(issues with in-tree tempest plugins).

- Current coverage of the gabbi tests is very poor and we've never been
serious about landing the patches to complete the API coverage. One of the
patches has been languishing in the review queue for a year[4].

- As heat is the orchestration service, I would think testing heat from the
user/operator point of view means not only testing the API and
interoperability, but also how it works with the other deployed services,
and many of our functional tests[5] do exactly that.

As per my understanding, the issue of having tests in a separate repo and
its impact on developer productivity was discussed by all concerned before
accepting this as a community goal. Our problem is probably no different
from that of some other projects[6].

It's a little unfortunate that we've started discussing these issues after
most of the work has been done. Having said that, it's never too late to do
the right thing if that saves us from a disaster, if that's what it means :)

Given the option, I would prefer:

- We keep all tests in-tree and run them with tempest for the 

[openstack-dev] [heat] Split tempest plugin from heat

2017-11-19 Thread Rabi Mishra
Hi All,

As part of community goal[1] for Queens, we've completed the repo split and
created the new project[2].

The next objective is to use the new plugin in our jobs. As we've merged
some changes after the split, I've synced them and fixed some minor issues
before using it in the gate jobs.

These are very small changes and I would suggest we review/land them on
priority (we don't have to keep syncing them again and again for broken
jobs). We should probably *not approve* any changes to integration tests in
heat before these go in.

- Changes in heat-tempest-plugin (sync  missing patches and fixes)

https://review.openstack.org/#/q/project:openstack/heat-tempest-plugin+topic:sync_from_heat

- Use heat-tempest-plugin for integration jobs
  https://review.openstack.org/#/c/508112/

- Use heat-tempest-plugin for grenade job
  https://review.openstack.org/#/c/521246/

- Add heat integration jobs to heat-tempest-plugin check/gate queue
  https://review.openstack.org/#/c/521340/
  I guess this would result in some issues, where we can't add any changes
to heat that break the existing tests, as the changes for both projects
would have a circular dependency (not sure how it works atm with other
plugins!).

- Remove plugin an integration tests from heat (This has -1 atm as
releasenote job is broken, waiting for infra to fix it)
  https://review.openstack.org/#/c/521263/
<https://review.openstack.org/#/c/521263/>

I've also created an etherpad[3] to track these.

Also, the plugin project is expected to be branchless (though we may not
backport these job changes to stable branches soon), so we've to find a way
to run additional tests for any new feature only on releases that have it.
AFAIK, other projects check the api microversions supported; without
microversions in heat, maybe we've to find an alternate way.
-- 
Regards,
Rabi Mishra

[1]
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2] https://git.openstack.org/cgit/openstack/heat-tempest-plugin
[3] https://etherpad.openstack.org/p/heat-tempest-plugin


[openstack-dev] [heat] Removal of CloudWatch api

2017-10-04 Thread Rabi Mishra
Hi All,

As discussed in the last meeting, here is the ML thead to gather more
feedback on this.

Background:

Heat's support for an AWS CloudWatch compatible API (a very minimalistic
implementation, primarily used for metric data collection for autoscaling
before the telemetry services existed in OpenStack) has been deprecated
since the Havana cycle (maybe before that?). We now have a global alias[1]
for AWS::CloudWatch::Alarm to use OS::Aodh::Alarm instead. However, the
ability to push metrics to ceilometer via heat, using a pre-signed url for
the CloudWatch api endpoint, is still supported for backward compatibility.
The heat-cfntools/cfn-push-stats tool is mainly used from the instances/vms
for this.
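The global alias in [1] boils down to a resource_registry mapping along
these lines (paraphrased; see the link below for the authoritative file):

    resource_registry:
      "AWS::CloudWatch::Alarm": "OS::Aodh::Alarm"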

What we plan to do?

We think that the CloudWatch api and related code base have been in the heat
tree without any change for the sole reason above, and possibly it's time to
remove them completely. However, we may not have an alternate way to continue
providing backward compatibility to users.

What would be the impact?

- Users using AWS::CloudWatch::Alarm and pushing metric data from instances
using cfn-push-stats would not be able to do so. Templates with these would
not work any more.

- AWS::ElasticLoadBalancing::LoadBalancer[2] resource which uses
AWS::CloudWatch::Alarm and cfn-push-stats would not work anymore. We
probably have to remove this resource too?

Though it seems like a big change, the general opinion is that there would
not be many users still using them and hence very little risk in removing
CloudWatch support completely this cycle.

If you think otherwise please let us know:)


[1] https://git.openstack.org/cgit/openstack/heat/tree/etc/
heat/environment.d/default.yaml#n6
[2]
https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/aws/lb/loadbalancer.py#n640

Regards,
Rabi Mishra


Re: [openstack-dev] [octavia][heat] Octavia deployment with Heat

2017-09-14 Thread Rabi Mishra
On Thu, Sep 14, 2017 at 6:05 PM, Lingxian Kong <anlin.k...@gmail.com> wrote:

> BTW, may I ask if Heat(master) already supports Octavia V2 API? If no, is
> there anyone working on that or it's on the TODO list? Thanks!
>
>
I think the current support is limited to the neutron LBaaS v2.0 extension[1].
Looks like the Octavia v2.0 API[2] is a superset of that.

No, we don't have anyone working on it atm.

[1]
https://developer.openstack.org/api-ref/network/v2/index.html#load-balancer-as-a-service

[2] https://developer.openstack.org/api-ref/load-balancer/v2/index.html

>
> Cheers,
> Lingxian Kong (Larry)
>
> On Thu, Sep 14, 2017 at 6:11 PM, <mihaela.ba...@orange.com> wrote:
>
>> Hello,
>>
>>
>>
>> Are there any plans to fix this in Heat?
>>
>>
>>
>> Thank you,
>>
>> Mihaela Balas
>>
>>
>>
>> *From:* Rabi Mishra [mailto:ramis...@redhat.com]
>> *Sent:* Wednesday, July 26, 2017 3:43 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [octavia][heat] Octavia deployment with
>> Heat
>>
>>
>>
>> On Wed, Jul 26, 2017 at 5:34 PM, <mihaela.ba...@orange.com> wrote:
>>
>> Hello,
>>
>>
>>
>> Is Octavia (Ocata version) supposed to work with Heat (tested with Newton
>> version) deployment? I launch a Heat stack trying to deploy a load balancer
>> with a single listener/pool and two members. While the Heat shows status
>> COMPLETE and the Neutron shows all objects as created, Octavia creates the
>> listener, the pool but with a single member (instead of two).
>>
>> Another example: I launch a Heat stack trying to deploy a load balancer
>> with a multiple listeners/pools each having two members. The results is
>> that Heat shows status COMPLETE and the Neutron shows all objects as
>> created, Octavia creates the listeners, but only some of the pools and for
>> those pool creates only one member or none.
>>
>> In the Octavia log I could see only these type of errors:
>>
>>
>>
>> Sounds like https://bugs.launchpad.net/heat/+bug/1632054.
>>
>> We just check provisioning_status of the loadbalancer when adding members
>> and mark the resource as CREATE_COMPLETE.  I think octavia had added
>> provisioning_status for all top level objects like listener etc[1], but I
>> don't think those attributes are available with lbaasv2 api for us to check.
>>
>> [1] https://review.openstack.org/#/c/372791/
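A rough sketch of the kind of wait that's missing, using
python-neutronclient's LBaaS v2 bindings (illustrative only, not the actual
heat fix):

    import time

    def wait_for_lb_active(neutron, lb_id, timeout=600, interval=5):
        # neutron: an authenticated neutronclient.v2_0.client.Client
        deadline = time.time() + timeout
        while time.time() < deadline:
            lb = neutron.show_loadbalancer(lb_id)['loadbalancer']
            status = lb.get('provisioning_status')
            if status == 'ACTIVE':
                return
            if status == 'ERROR':
                raise RuntimeError('load balancer %s went to ERROR' % lb_id)
            time.sleep(interval)
        raise RuntimeError('timed out waiting for %s' % lb_id)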
>>
>>
>>
>> 2017-07-26 08:12:08.639 1 INFO octavia.api.v1.controllers.member
>> [req-749be397-dd63-4fb6-9d86-b717f6d59e3d -
>> 989bbadfe4134722b478ca799217833e - default default] Member cannot be
>> created or modified because the Load Balancer is in an immutable state
>>
>> 2017-07-26 08:12:08.698 1 DEBUG wsme.api 
>> [req-749be397-dd63-4fb6-9d86-b717f6d59e3d
>> - 989bbadfe4134722b478ca799217833e - default default] Client-side error:
>> Load Balancer b12a29db-81d0-451a-af9c-d563b636bf01 is immutable and
>> cannot be updated. format_exception /opt/octavia/lib/python2.7/sit
>> e-packages/wsme/api.py:222
>>
>>
>>
>> I think what happens is that it takes some time until the configuration
>> is updated on an amphora and during that time the Load Balancer is in
>> UPDATE state and new configuration cannot be added.
>>
>>
>>
>> Is this scenario validated or it is still work in progress?
>>
>>
>>
>> Thanks,
>>
>> Mihaela Balas
>>
>>
>>
>>
>>

Re: [openstack-dev] [heat][infra] Help needed! high gate failure rate

2017-08-10 Thread Rabi Mishra
On Thu, Aug 10, 2017 at 4:34 PM, Rabi Mishra <ramis...@redhat.com> wrote:

> On Thu, Aug 10, 2017 at 2:51 PM, Ian Wienand <iwien...@redhat.com> wrote:
>
>> On 08/10/2017 06:18 PM, Rico Lin wrote:
>> > We're facing a high failure rate in Heat's gates [1], four of our gate
>> > suffering with fail rate from 6 to near 20% in 14 days. which makes
>> most of
>> > our patch stuck with the gate.
>>
>> There have been a confluence of things causing some problems recently.
>> The loss of OSIC has distributed more load over everything else, and
>> we have seen an increase in job timeouts and intermittent networking
>> issues (especially if you're downloading large things from remote
>> sites).  There have also been some issues with the mirror in rax-ord
>> [1]
>>
>> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial(19.67%)
>> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-
>> ubuntu-xenia(9.09%)
>> > gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial(8.47%)
>> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial(6.00%)
>>
>> > We still try to find out what's the cause but (IMO,) seems it might be
>> some
>> > thing wrong with our infra. We need some help from infra team, to know
>> if
>> > any clue on this failure rate?
>>
>> The reality is you're just going to have to triage this and be a *lot*
>> more specific with issues.
>
>
> One of the issues we've seen recently is that many jobs are killed midway
> through the tests as the job times out (120 mins). It seems jobs are often
> scheduled to very slow nodes, where setting up devstack takes more than
> 80 mins[1].
>
> [1] http://logs.openstack.org/49/492149/2/check/gate-heat-dsvm-
> functional-orig-mysql-lbaasv2-ubuntu-xenial/03b05dd/console.
> html#_2017-08-10_05_55_49_035693
>
We download an image from a fedora mirror and it seems to take more than
1hr:

http://logs.openstack.org/41/484741/7/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial/a797010/logs/devstacklog.txt.gz#_2017-08-10_04_13_14_400

Probably an issue with that specific mirror or some infra network bandwidth
issue. I've submitted a patch to change the mirror to see if that helps.


>> I find opening an etherpad and going
>> through the failures one-by-one helpful (e.g. I keep [2] for centos
>> jobs I'm interested in).
>>
>> Looking at the top of the console.html log you'll have the host and
>> provider/region stamped in there.  If it's timeouts or network issues,
>> reporting to infra the time, provider and region of failing jobs will
>> help.  If it's network issues similar will help.  Finding patterns is
>> the first step to understanding what needs fixing.
>>
>> If it's due to issues with remote transfers, we can look at either
>> adding specific things to mirrors (containers, images, packages are
>> all things we've added recently) or adding a caching reverse-proxy for
>> them ([3],[4] some examples).
>>
>> Questions in #openstack-infra will usually get a helpful response too
>>
>> Good luck :)
>>
>> -i
>>
>> [1] https://bugs.launchpad.net/openstack-gate/+bug/1708707/
>> [2] https://etherpad.openstack.org/p/centos7-dsvm-triage
>> [3] https://review.openstack.org/491800
>> [4] https://review.openstack.org/491466
>>
>
>
>
> --
> Regards,
> Rabi Mishra
>
>


-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat][infra] Help needed! high gate failure rate

2017-08-10 Thread Rabi Mishra
On Thu, Aug 10, 2017 at 2:51 PM, Ian Wienand  wrote:

> On 08/10/2017 06:18 PM, Rico Lin wrote:
> > We're facing a high failure rate in Heat's gates [1], four of our gate
> > suffering with fail rate from 6 to near 20% in 14 days. which makes most
> of
> > our patch stuck with the gate.
>
> There have been a confluence of things causing some problems recently.
> The loss of OSIC has distributed more load over everything else, and
> we have seen an increase in job timeouts and intermittent networking
> issues (especially if you're downloading large things from remote
> sites).  There have also been some issues with the mirror in rax-ord
> [1]
>
> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial(19.67%)
> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-
> apache-ubuntu-xenia(9.09%)
> > gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial(8.47%)
> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial(6.00%)
>
> > We still try to find out what's the cause but (IMO,) seems it might be
> some
> > thing wrong with our infra. We need some help from infra team, to know if
> > any clue on this failure rate?
>
> The reality is you're just going to have to triage this and be a *lot*
> more specific with issues.


One of the issues we've seen recently is that many jobs are killed midway
through the tests as the job times out (120 mins). It seems jobs are often
scheduled to very slow nodes, where setting up devstack takes more than 80
mins[1].

[1]
http://logs.openstack.org/49/492149/2/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/03b05dd/console.html#_2017-08-10_05_55_49_035693

I find opening an etherpad and going
> through the failures one-by-one helpful (e.g. I keep [2] for centos
> jobs I'm interested in).
>
> Looking at the top of the console.html log you'll have the host and
> provider/region stamped in there.  If it's timeouts or network issues,
> reporting to infra the time, provider and region of failing jobs will
> help.  If it's network issues similar will help.  Finding patterns is
> the first step to understanding what needs fixing.
>
> If it's due to issues with remote transfers, we can look at either
> adding specific things to mirrors (containers, images, packages are
> all things we've added recently) or adding a caching reverse-proxy for
> them ([3],[4] some examples).
>
> Questions in #openstack-infra will usually get a helpful response too
>
> Good luck :)
>
> -i
>
> [1] https://bugs.launchpad.net/openstack-gate/+bug/1708707/
> [2] https://etherpad.openstack.org/p/centos7-dsvm-triage
> [3] https://review.openstack.org/491800
> [4] https://review.openstack.org/491466
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] using convergence_engine to deploy overcloud stack

2017-08-09 Thread Rabi Mishra
On Wed, Aug 9, 2017 at 1:41 PM, Smigielski, Radoslaw (Nokia - IE) <
radoslaw.smigiel...@nokia.com> wrote:

> Hi there!
>
>    I have a question about the heat "convergence_engine" option; it's been
> present in the heat config for quite a long time but still not enabled.
>
Well, convergence has been enabled by default in heat since newton. However,
TripleO does not use it yet, as the convergence engine's memory usage is
higher than that of the legacy engine.

There have been a number of optimizations in the last two cycles to improve
that situation. However, AFAIK, when using a single-node undercloud, memory
usage would always be higher with convergence.

I think there are plans for TripleO to move to convergence in Queens, as
discussed in this ML thread.

http://lists.openstack.org/pipermail/openstack-dev/2017-June/118237.html

> And I am wondering if anyone has tried enabling it and deploying overcloud?
> I did it myself and it seems to be working.
>
> The main reason why I am looking at this option is problems with scaling
> out, adding computes, and replacing failed controllers on setups with 50+
> computes and 1000+ nested stacks.
>

You'd probably have to scale out undercloud heat to get most of the benefits
of convergence, which come with its distributed architecture.
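
For reference, the engine choice is a single heat.conf switch (a minimal
excerpt, assuming everything else is left at defaults):

[DEFAULT]
# the default since newton; set to false to fall back to the legacy engine
convergence_engine = true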

> Is there any reason holding Heat back from switching to the convergence
> architecture?
>
>
> _
> *Radosław Śmigielski*
> Nokia CBIS R
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][heat] Octavia deployment with Heat

2017-07-26 Thread Rabi Mishra
On Wed, Jul 26, 2017 at 5:34 PM,  wrote:

> Hello,
>
>
>
> Is Octavia (Ocata version) supposed to work with Heat (tested with Newton
> version) deployment? I launch a Heat stack trying to deploy a load balancer
> with a single listener/pool and two members. While Heat shows status
> COMPLETE and Neutron shows all objects as created, Octavia creates the
> listener and the pool, but with a single member (instead of two).
>
> Another example: I launch a Heat stack trying to deploy a load balancer
> with a multiple listeners/pools each having two members. The results is
> that Heat shows status COMPLETE and the Neutron shows all objects as
> created, Octavia creates the listeners, but only some of the pools and for
> those pool creates only one member or none.
>
> In the Octavia log I could see only these type of errors:
>
>
Sounds like https://bugs.launchpad.net/heat/+bug/1632054.

We just check the provisioning_status of the loadbalancer when adding members
and then mark the resource as CREATE_COMPLETE. I think Octavia has added
provisioning_status for all top-level objects like listeners etc.[1], but I
don't think those attributes are available with the lbaasv2 api for us to check.

[1] https://review.openstack.org/#/c/372791/
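
For illustration, the kind of wait heat would need before each member add,
sketched with python-neutronclient (the retry/timeout numbers here are made
up, not what heat would actually use):

import time

def wait_for_lb_active(neutron, lb_id, timeout=300, interval=5):
    # Octavia rejects member creates while the LB is PENDING_UPDATE,
    # so poll provisioning_status until it is mutable again.
    deadline = time.time() + timeout
    while time.time() < deadline:
        lb = neutron.show_loadbalancer(lb_id)['loadbalancer']
        if lb['provisioning_status'] == 'ACTIVE':
            return
        if lb['provisioning_status'] == 'ERROR':
            raise RuntimeError('load balancer %s is in ERROR' % lb_id)
        time.sleep(interval)
    raise RuntimeError('timed out waiting on load balancer %s' % lb_id)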


>
> 2017-07-26 08:12:08.639 1 INFO octavia.api.v1.controllers.member
> [req-749be397-dd63-4fb6-9d86-b717f6d59e3d - 989bbadfe4134722b478ca799217833e
> - default default] Member cannot be created or modified because the Load
> Balancer is in an immutable state
>
> 2017-07-26 08:12:08.698 1 DEBUG wsme.api 
> [req-749be397-dd63-4fb6-9d86-b717f6d59e3d
> - 989bbadfe4134722b478ca799217833e - default default] Client-side error:
> Load Balancer b12a29db-81d0-451a-af9c-d563b636bf01 is immutable and
> cannot be updated. format_exception
> /opt/octavia/lib/python2.7/site-packages/wsme/api.py:222
>
>
>
> I think what happens is that it takes some time until the configuration is
> updated on an amphora and during that time the Load Balancer is in UPDATE
> state and new configuration cannot be added.
>
>
>
> Is this scenario validated or it is still work in progress?
>
>
>
> Thanks,
>
> Mihaela Balas
>
>
>
>
>
> _
>
> This message and its attachments may contain confidential or privileged 
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete 
> this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been 
> modified, changed or falsified.
> Thank you.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-21 Thread Rabi Mishra
On Fri, Jun 16, 2017 at 7:03 PM, Zane Bitter <zbit...@redhat.com> wrote:
[snip]

>
> I'm not sure whether this works with keystone v2 and anyone is using
>> it or not. Keeping in mind that heat-cli is deprecated and keystone
>> v3 is now the default, we've 2 options
>>
>> 1. Continue to support 'deferred_auth_method=passsword' option and
>> fix all the above issues.
>>
>> 2. Remove/deprecate the option in pike itlsef.
>>
>> I would prefer option 2, but probably I miss some history and use
>> cases for it.
>>
>
> Am I right in thinking that any user (i.e. not just the [heat] service
> user) can create a trust? I still see occasional requests about 'standalone
> mode' for clouds that don't have Heat available to users (which I suspect
> is broken, otherwise people wouldn't be asking), and I'm guessing that
> standalone mode has heretofore required deferred_auth_method=password.
>

Based on my testing, I think standalone heat is broken in more than one way.
It seems changes have not kept up with heat standalone: the 'authpassword'
middleware is broken[1], and we don't seem to pass the correct domain details
in the rpc context. I've tried to fix both in [2].

I'm also not sure why heat standalone historically restricts
deferred_auth_method to 'password'[3]. It seems to work well with 'trusts'
though.


[1]  https://bugs.launchpad.net/heat/+bug/1699418
[2]  https://review.openstack.org/#/c/476014/
[3]  https://github.com/openstack/heat/blob/master/devstack/lib/heat#L74


> So if we're going to remove the option then we should probably either
> officially disown standalone mode or rewrite the instructions such that it
> can be used with the trusts method.
>
I think disowning the standalone mode would be the easier option. We should
probably rewrite the instructions for it to be used with the 'trusts' method,
as that seems to work, unless I'm missing something. However, without any
testing at the gate we would surely break it from time to time.


> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-15 Thread Rabi Mishra
Hi All,

As we know, with 'deferred_auth_method=trusts' being the default, we use the
trust auth plugin whenever a resource requires deferred auth (any resource
derived from SignalResponder or StackResource). We also support
'deferred_auth_method=password', where 'X-Auth-User'/username and
'X-Auth-Key'/password are passed in the request headers; we then store them
in 'user_creds' (rather than a 'trust_id') to create a 'password' auth plugin
when loading the stack with stored context for signalling. I assume this is
the very reason we have the '--include-pass' option in the heat cli.

However, when using a keystone session (which is the default), we don't have
the above implemented with SessionClient (i.e. passing the headers). There
is a bug[1] and a patch[2] to add this to SessionClient in the review queue.
Also, we don't have anything like '--include-pass' for osc.

I've noticed that 'deferred_auth_method=password' is broken and does not
work with keystone v3 at all. As we don't store the 'user_domain_id/name'
in 'user_creds', we cannot even initialize the 'password' auth plugin when
creating the StoredContext, as it would not be able to authenticate the
user without the user domain[3].

I'm not sure whether this works with keystone v2, or whether anyone is using
it at all. Keeping in mind that heat-cli is deprecated and keystone v3 is now
the default, we have two options:

1. Continue to support the 'deferred_auth_method=password' option and fix all
the above issues.

2. Remove/deprecate the option in pike itself.

I would prefer option 2, but I'm probably missing some history and use cases
for it.

Thoughts?


[1] https://bugs.launchpad.net/python-heatclient/+bug/1665321

[2] https://review.openstack.org/435213

[3]
https://github.com/openstack/heat/blob/master/heat/common/context.py#L292
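
To make the v3 problem concrete, this is roughly the auth plugin we would
need to build from the stored creds (a sketch using keystoneauth1, with
placeholder values):

from keystoneauth1.identity import generic

auth = generic.Password(
    auth_url='http://keystone.example.com/v3',
    username='demo',
    password='secret',
    project_name='demo',
    project_domain_name='Default',
    # this is the piece we never stored in 'user_creds', so the plugin
    # cannot be constructed for a v3 keystone:
    user_domain_name='Default',
)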

-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Rabi Mishra
 I think it's a fair position and IMO should be the way forward.

>
> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-24 Thread Rabi Mishra
On Wed, May 24, 2017 at 9:24 AM, Rabi Mishra <ramis...@redhat.com> wrote:

> On Tue, May 23, 2017 at 11:57 PM, Zane Bitter <zbit...@redhat.com> wrote:
>
>> On 23/05/17 01:23, Rabi Mishra wrote:
>>
>>> Hi All,
>>>
>>> As per the updated community goal[1]  for api deployment with wsgi,
>>> we've to transition to use uwsgi rather than mod_wsgi at the gate. It
>>> also seems mod_wsgi support would be removed from devstack in Queens.
>>>
>>> I've been working on a patch[2] for the transition and encountered a few
>>> issues as below.
>>>
>>> 1. We encode the stack_identifier along with the path
>>> separator in heatclient. So, requests with encoded path separators are
>>> dropped by apache (with 404), if we don't have the 'AllowEncodedSlashes On'
>>> directive in the site/vhost config[3].
>>>
>>
>> We'd probably want 'AllowEncodedSlashes NoDecode'.
>>
>
> Yeah, that would be ideal  for supporting slashes in stack and resource
> names where we take care of the encoding and decoding.
>
>
>>> Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
>>> ubuntu. From my testing, it seems it has to be set in 000-default.conf
>>> for ubuntu.
>>>
>>> Rather than messing with the devstack plugin code, I went ahead and
>>> proposed a change to not encode the path separators in heatclient[5]
>>> (anyway, they would be decoded by apache with the 'AllowEncodedSlashes On'
>>> directive before being consumed by the service), which seems to have fixed
>>> those 404s.
>>>
>>
>> Pasting my comment from the patch:
>>
>> One potential problem with this is that you can probably craft a stack
>> name in such a way that heatclient ends up calling a real but unexpected
>> URL. (I don't think this is a new problem, but it's likely the problem that
>> the default value of AllowEncodedSlashes is designed to fix, and we're
>> circumventing it here.)
>>
>
>> It seems to me the ideal would be to force '/'s to be encoded when they
>> occur in the stack and resource names. Clearly they should never have been
>> encoded when they're actual path separators (e.g. between the stack name
>> and stack ID).
>>
>> It'd be even better if Apache were set to "AllowEncodedSlashes NoDecode"
>> and we could then decode stack/resource names that include slashes after
>> splitting at the path separators, so that those would actually work. I
>> don't think the routing framework can handle that though.
>>
>>
> I don't think we even support slashes (encoded or not) in stack name. The
> validation below would not allow it.
>
> https://git.openstack.org/cgit/openstack/heat/tree/heat/
> engine/stack.py#n143
>
> As far as resource names are concerned, we don't encode or decode them
> appropriately for it to work as expected. Creating a stack with resource
> name containing '/' fails with validation error as it's not encoded for
> being inside the template snippet and the validation below would fail.
>
> https://git.openstack.org/cgit/openstack/heat/tree/heat/
> engine/resource.py#n214
>
>> For that reason I believe we disallow slashes in stack/resource names. So
>> with "AllowEncodedSlashes Off" we'd get the right behaviour (which is to
>> always 404 when the stack/resource name contains a slash).
>>
>
>>
> Is there a generic way to set the above directive (when using
>>> apache+mod_proxy_uwsgi) in the devstack plugin?
>>>
>>> 2.  With the above, most of the tests seem to work fine other than the
>>> ones using waitcondition, where we signal back from the vm to the api
>>>
>>
>> Not related to the problem below, but I believe that when signalling
>> through the heat-cfn-api we use an arn to identify the stack, and I suspect
>> that slashes in the arn are escaped at or near the source. So we may have
>> no choice but to find a way to turn on AllowEncodedSlashes. Or is it in the
>> query string part anyway?
>>
> Yeah, it's not related to the problem below, as the request is not reaching
> apache at all. I've taken care of the above issue in the patch itself[1]
> and the signal url looks ok to me[2].
>
> [1] https://review.openstack.org/#/c/462216/11/heat/common/identifier.py
>
> [2] http://logs.openstack.org/16/462216/11/check/gate-heat-
> dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/e7d9e90/console.html#_2017-05-20_07_04_30_500696
>
>>> services. I could see "curl: (7) Failed to connect to 10.0.1.78 port
>>> 80: No route to host" in the vm console logs[6].

Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-23 Thread Rabi Mishra
On Tue, May 23, 2017 at 11:57 PM, Zane Bitter <zbit...@redhat.com> wrote:

> On 23/05/17 01:23, Rabi Mishra wrote:
>
>> Hi All,
>>
>> As per the updated community goal[1]  for api deployment with wsgi,
>> we've to transition to use uwsgi rather than mod_wsgi at the gate. It
>> also seems mod_wsgi support would be removed from devstack in Queens.
>>
>> I've been working on a patch[2] for the transition and encountered a few
>> issues as below.
>>
>> 1. We encode the stack_identifier along with the path
>> separator in heatclient. So, requests with encoded path separators are
>> dropped by apache (with 404), if we don't have the 'AllowEncodedSlashes On'
>> directive in the site/vhost config[3].
>>
>
> We'd probably want 'AllowEncodedSlashes NoDecode'.
>

Yeah, that would be ideal  for supporting slashes in stack and resource
names where we take care of the encoding and decoding.


>> Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
>> ubuntu. From my testing, it seems it has to be set in 000-default.conf
>> for ubuntu.
>>
>> Rather than messing with the devstack plugin code, I went ahead and
>> proposed a change to not encode the path separators in heatclient[5]
>> (anyway, they would be decoded by apache with the 'AllowEncodedSlashes On'
>> directive before being consumed by the service), which seems to have fixed
>> those 404s.
>>
>
> Pasting my comment from the patch:
>
> One potential problem with this is that you can probably craft a stack
> name in such a way that heatclient ends up calling a real but unexpected
> URL. (I don't think this is a new problem, but it's likely the problem that
> the default value of AllowEncodedSlashes is designed to fix, and we're
> circumventing it here.)
>

> It seems to me the ideal would be to force '/'s to be encoded when they
> occur in the stack and resource names. Clearly they should never have been
> encoded when they're actual path separators (e.g. between the stack name
> and stack ID).
>
> It'd be even better if Apache were set to "AllowEncodedSlashes NoDecode"
> and we could then decode stack/resource names that include slashes after
> splitting at the path separators, so that those would actually work. I
> don't think the routing framework can handle that though.
>
>
I don't think we even support slashes (encoded or not) in stack names. The
validation below would not allow it.

https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/stack.py#n143
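
Roughly, that check amounts to something like the sketch below (not the
exact heat code):

import re

def valid_stack_name(name):
    # must start with a letter; only alphanumerics, '_', '.' and '-'
    # are allowed, so a name containing '/' never validates
    return re.match(r'[a-zA-Z][a-zA-Z0-9_.-]*$', name) is not None

assert valid_stack_name('my_stack-1')
assert not valid_stack_name('my/stack')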

As far as resource names are concerned, we don't encode or decode them
appropriately for this to work as expected. Creating a stack with a resource
name containing '/' fails with a validation error, as the name is not encoded
when it appears inside the template snippet, so the validation below fails.

https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resource.py#n214

> For that reason I believe we disallow slashes in stack/resource names. So
> with "AllowEncodedSlashes Off" we'd get the right behaviour (which is to
> always 404 when the stack/resource name contains a slash).
>

>
Is there a generic way to set the above directive (when using
>> apache+mod_proxy_uwsgi) in the devstack plugin?
>>
>> 2.  With the above, most of the tests seem to work fine other than the
>> ones using waitcondition, where we signal back from the vm to the api
>>
>
> Not related to the problem below, but I believe that when signalling
> through the heat-cfn-api we use an arn to identify the stack, and I suspect
> that slashes in the arn are escaped at or near the source. So we may have
> no choice but to find a way to turn on AllowEncodedSlashes. Or is it in the
> query string part anyway?
>
Yeah, it's not related to the problem below, as the request is not reaching
apache at all. I've taken care of the above issue in the patch itself[1]
and the signal url looks ok to me[2].

[1] https://review.openstack.org/#/c/462216/11/heat/common/identifier.py

[2]
http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/e7d9e90/console.html#_2017-05-20_07_04_30_500696

services. I could see " curl: (7) Failed to connect to 10.0.1.78 port
>> 80: No route to host" in the vm console logs[6].
>>
>> It could connect to heat api services using ports 8004/8000 without this
>> patch, but not sure why not port 80? I tried testing this locally and
>> didn't see the issue though.
>>
>> Is this due to some infra settings or something else?
>>
>>
>> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>> -wsgi.html
>>
>> [2] https://review.openstack.org/#/c/462216/
>>
>> [3]
>

[openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-23 Thread Rabi Mishra
Apologies for the spam. Resending with the earlier-missed [openstack-dev]
tag in the subject for greater visibility.

On Tue, May 23, 2017 at 10:53 AM, Rabi Mishra <ramis...@redhat.com> wrote:

> Hi All,
>
> As per the updated community goal[1]  for api deployment with wsgi, we've
> to transition to use uwsgi rather than mod_wsgi at the gate. It also seems
> mod_wsgi support would be removed from devstack in Queens.
>
> I've been working on a patch[2] for the transition and encountered a few
> issues as below.
>
> 1. We encode the stack_identifier along with the path
> separator in heatclient. So, requests with encoded path separators are
> dropped by apache (with 404), if we don't have the 'AllowEncodedSlashes On'
> directive in the site/vhost config[3].
>
> Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
> ubuntu. From my testing, it seems it has to be set in 000-default.conf for
> ubuntu.
>
> Rather than messing with the devstack plugin code, I went ahead and proposed
> a change to not encode the path separators in heatclient[5] (anyway, they
> would be decoded by apache with the 'AllowEncodedSlashes On' directive
> before being consumed by the service), which seems to have fixed those 404s.
>
> Is there a generic way to set the above directive (when using
> apache+mod_proxy_uwsgi) in the devstack plugin?
>
> 2.  With the above, most of the tests seem to work fine other than the
> ones using waitcondition, where we signal back from the vm to the api
> services. I could see " curl: (7) Failed to connect to 10.0.1.78 port 80:
> No route to host" in the vm console logs[6].
>
> It could connect to heat api services using ports 8004/8000 without this
> patch, but not sure why not port 80? I tried testing this locally and
> didn't see the issue though.
>
> Is this due to some infra settings or something else?
>
>
> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
>
> [2] https://review.openstack.org/#/c/462216/
>
> [3]  https://github.com/openstack/heat/blob/master/devstack/
> files/apache-heat-api.template#L9
>
> [4] http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-
> functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz
>
> [5] https://review.openstack.org/#/c/463510/
>
> [6] http://logs.openstack.org/16/462216/11/check/gate-heat-
> dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/e7d9e90/console.html#_2017-05-20_07_04_30_718021
>
>
> --
> Regards,
> Rabi Mishra
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-23 Thread Rabi Mishra
On Tue, May 23, 2017 at 11:18 AM, Juan Antonio Osorio <jaosor...@gmail.com>
wrote:

>
>
> On Tue, May 23, 2017 at 8:23 AM, Rabi Mishra <ramis...@redhat.com> wrote:
>
>> Hi All,
>>
>> As per the updated community goal[1]  for api deployment with wsgi, we've
>> to transition to use uwsgi rather than mod_wsgi at the gate. It also seems
>> mod_wsgi support would be removed from devstack in Queens.
>>
> What do you mean support for mod_wsgi will be removed from devstack in
> Queens? other projects have been using mod_wsgi and we've been deploying
> several services (even Heat) in TripleO.
>

I think it's mentioned in the community goal I linked earlier: "with the
intent that the mod_wsgi support is deleted from devstack in Queens".
At least, that's the intent I assume ;)


>
>
>> I've been working on a patch[2] for the transition and encountered a few
>> issues as below.
>>
>> 1. We encode the stack_identifier along with the path
>> separator in heatclient. So, requests with encoded path separators are
>> dropped by apache (with 404), if we don't have the 'AllowEncodedSlashes On'
>> directive in the site/vhost config[3].
>>
> That's correct. You might want to refer to the configuration we use in
> puppet/TripleO. We got it working with that :).
> https://github.com/openstack/puppet-heat/blob/master/
> manifests/wsgi/apache.pp#L111-L137
>
>>
>> Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
>> ubuntu. From my testing, it seems it has to be set in 000-default.conf for
>> ubuntu.
>>
>> Rather than messing with the devstack plugin code, I went ahead and proposed
>> a change to not encode the path separators in heatclient[5] (anyway, they
>> would be decoded by apache with the 'AllowEncodedSlashes On' directive
>> before being consumed by the service), which seems to have fixed those 404s.
>>
>> Is there a generic way to set the above directive (when using
>> apache+mod_proxy_uwsgi) in the devstack plugin?
>>
>> 2.  With the above, most of the tests seem to work fine other than the
>> ones using waitcondition, where we signal back from the vm to the api
>> services. I could see " curl: (7) Failed to connect to 10.0.1.78 port
>> 80: No route to host" in the vm console logs[6].
>>
>> It could connect to heat api services using ports 8004/8000 without this
>> patch, but not sure why not port 80? I tried testing this locally and
>> didn't see the issue though.
>>
>> Is this due to some infra settings or something else?
>>
>>
>> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>> -wsgi.html
>>
>> [2] https://review.openstack.org/#/c/462216/
>>
>> [3]  https://github.com/openstack/heat/blob/master/devstack/files
>> /apache-heat-api.template#L9
>>
>> [4] http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-f
>> unctional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/
>> fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz
>>
>> [5] https://review.openstack.org/#/c/463510/
>>
>> [6] http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-
>> functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/
>> e7d9e90/console.html#_2017-05-20_07_04_30_718021
>>
>>
>> --
>> Regards,
>> Rabi Mishra
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-22 Thread Rabi Mishra
Hi All,

As per the updated community goal[1]  for api deployment with wsgi, we've
to transition to use uwsgi rather than mod_wsgi at the gate. It also seems
mod_wsgi support would be removed from devstack in Queens.

I've been working on a patch[2] for the transition and encountered a few
issues as below.

1. We encode the stack_identifier along with the path
separator in heatclient. So, requests with encoded path separators are
dropped by apache (with 404), if we don't have the 'AllowEncodedSlashes On'
directive in the site/vhost config[3].

Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
ubuntu. From my testing, it seems it has to be set in 000-default.conf for
ubuntu.

Rather than messing with the devstack plugin code, I went ahead and proposed
a change to not encode the path separators in heatclient[5] (anyway, they
would be decoded by apache with the 'AllowEncodedSlashes On' directive
before being consumed by the service), which seems to have fixed those 404s.
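
Roughly, the heatclient change boils down to the choice of 'safe' characters
when quoting the stack path; a quick sketch with the stdlib (the stack id
below is made up):

from urllib.parse import quote

path = 'teststack/7756a7a9-0001-4b35-8934-08a8b8bbbb5e'
print(quote(path, safe=''))   # teststack%2F7756... -> apache returns 404
print(quote(path, safe='/'))  # teststack/7756...   -> passes through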

Is there a generic way to set the above directive (when using
apache+mod_proxy_uwsgi) in the devstack plugin?

2. With the above, most of the tests seem to work fine, other than the ones
using waitcondition, where we signal back from the vm to the api services.
I could see "curl: (7) Failed to connect to 10.0.1.78 port 80: No route to
host" in the vm console logs[6].

It could connect to the heat api services on ports 8004/8000 without this
patch, so I'm not sure why port 80 doesn't work. I tried testing this locally
and didn't see the issue, though.

Is this due to some infra settings or something else?


[1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html

[2] https://review.openstack.org/#/c/462216/

[3]
https://github.com/openstack/heat/blob/master/devstack/files/apache-heat-api.template#L9

[4]
http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz

[5] https://review.openstack.org/#/c/463510/

[6]
http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/e7d9e90/console.html#_2017-05-20_07_04_30_718021


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-01 Thread Rabi Mishra
On Fri, Apr 28, 2017 at 2:17 PM, Andrea Frittoli <andrea.fritt...@gmail.com>
wrote:

>
>
> On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra <ramis...@redhat.com> wrote:
>
>> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
>> andrea.fritt...@gmail.com> wrote:
>>
>>> Dear stackers,
>>>
>>> starting in the Liberty cycle Tempest has defined a set of projects
>>> which are in scope for direct
>>> testing in Tempest [0]. The current list includes keystone, nova,
>>> glance, swift, cinder and neutron.
>>> All other projects can use the same Tempest testing infrastructure (or
>>> parts of it) by taking advantage
>>> the Tempest plugin and stable interfaces.
>>>
>>> Tempest currently hosts a set of API tests as well as a service client
>>> for the Heat project.
>>> The Heat service client is used by the tests in Tempest, which run in
>>> Heat gate as part of the grenade
>>> job, as well as in the Tempest gate (check pipeline) as part of the
>>> layer4 job.
>>> According to code search [3] the Heat service client is also used by
>>> Murano and Daisycore.
>>>
>>
>> For the heat grenade job, I've proposed two patches.
>>
>> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade'
>> phase
>>
>> https://review.openstack.org/#/c/460542/
>>
>> 2. To remove tempest tests from the grenade job
>>
>> https://review.openstack.org/#/c/460810/
>>
>>
>>
>>> I proposed a patch to Tempest to start the deprecation counter for Heat
>>> / orchestration related
>>> configuration items in Tempest [4], and I would like to make sure that
>>> all tests and the service client
>>> either find a new home outside of Tempest, or are removed, by the end
>>> the Pike cycle at the latest.
>>>
>>> Heat has in-tree integration tests and Gabbi based API tests, but I
>>> don't know if those provide
>>> enough coverage to replace the tests on Tempest side.
>>>
>>>
>> Yes, the heat gabbi api tests do not yet have the same coverage as the
>> tempest tree api tests (lacks tests using nova, neutron and swift
>> resources),  but I think that should not stop us from *not* running the
>> tempest tests in the grenade job.
>>
>> I also don't know if the tempest tree heat tests are used by any other
>> upstream/downstream jobs. We could surely add more tests to bridge the gap.
>>
>> Also, It's possible to run the heat integration tests (we've enough
>> coverage there) with tempest plugin after doing some initial setup, as we
>> do in all our dsvm gate jobs.
>>
>> It would propose to move tests and client to a Tempest plugin owned /
>>> maintained by
>>> the Heat team, so that the Heat team can have full flexibility in
>>> consolidating their integration
>>> tests. For Murano and Daisycloud - and any other team that may want to
>>> use the Heat service
>>> client in their tests, even if the client is removed from Tempest, it
>>> would still be available via
>>> the Heat Tempest plugin. As long as the plugin implements the service
>>> client interface,
>>> the Heat service client will register automatically in the service
>>> client manager and be available
>>> for use as today.
>>>
>>>
>> if I understand correctly, you're proposing moving the existing tempest
>> tests and service clients to a separate repo managed by heat team. Though
>> that would be collective decision, I'm not sure that's something I would
>> like to do. To start with we may look at adding some of the missing pieces
>> in heat tree itself.
>>
>
> I'm proposing to move tests and the service client outside of tempest to a
> new home.
>
> I also suggested that the new home could be a dedicate repo, since that
> would allow you to maintain the
> current branchless nature of those tests. A more detailed discussion about
> the topic can be found
> in the corresponding proposed queens goal [5],
>
> Using a dedicated repo *is not* a precondition for moving tests and
> service client out of Tempest.
>
>
We are probably mixing two different things here.

1. Moving the in-tree heat tempest plugin and tests to a dedicated repo

Though we don't have any plans for it now, we may have to do it when/if
it's accepted as a community goal.

2. Moving the tempest tree heat tests and the heat service client to a new
home and owner.

I don't think that's something the heat team would like to do, given that we
don't use these tests.

Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-04-28 Thread Rabi Mishra
On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <andrea.fritt...@gmail.com>
wrote:

> Dear stackers,
>
> starting in the Liberty cycle Tempest has defined a set of projects which
> are in scope for direct
> testing in Tempest [0]. The current list includes keystone, nova, glance,
> swift, cinder and neutron.
> All other projects can use the same Tempest testing infrastructure (or
> parts of it) by taking advantage
> the Tempest plugin and stable interfaces.
>
> Tempest currently hosts a set of API tests as well as a service client for
> the Heat project.
> The Heat service client is used by the tests in Tempest, which run in Heat
> gate as part of the grenade
> job, as well as in the Tempest gate (check pipeline) as part of the layer4
> job.
> According to code search [3] the Heat service client is also used by
> Murano and Daisycore.
>

For the heat grenade job, I've proposed two patches.

1. To run heat tree gabbi api tests as part of grenade 'post-upgrade' phase

https://review.openstack.org/#/c/460542/

2. To remove tempest tests from the grenade job

https://review.openstack.org/#/c/460810/



> I proposed a patch to Tempest to start the deprecation counter for Heat /
> orchestration related
> configuration items in Tempest [4], and I would like to make sure that all
> tests and the service client
> either find a new home outside of Tempest, or are removed, by the end the
> Pike cycle at the latest.
>
> Heat has in-tree integration tests and Gabbi based API tests, but I don't
> know if those provide
> enough coverage to replace the tests on Tempest side.
>
>
Yes, the heat gabbi api tests do not yet have the same coverage as the
tempest tree api tests (they lack tests using nova, neutron and swift
resources), but I think that should not stop us from *not* running the
tempest tests in the grenade job.

I also don't know if the tempest tree heat tests are used by any other
upstream/downstream jobs. We could surely add more tests to bridge the gap.

Also, It's possible to run the heat integration tests (we've enough
coverage there) with tempest plugin after doing some initial setup, as we
do in all our dsvm gate jobs.

It would propose to move tests and client to a Tempest plugin owned /
> maintained by
> the Heat team, so that the Heat team can have full flexibility in
> consolidating their integration
> tests. For Murano and Daisycloud - and any other team that may want to use
> the Heat service
> client in their tests, even if the client is removed from Tempest, it
> would still be available via
> the Heat Tempest plugin. As long as the plugin implements the service
> client interface,
> the Heat service client will register automatically in the service client
> manager and be available
> for use as today.
>
>
If I understand correctly, you're proposing moving the existing tempest
tests and service clients to a separate repo managed by the heat team. Though
that would be a collective decision, I'm not sure that's something I would
like to do. To start with, we may look at adding some of the missing pieces
in the heat tree itself.

> Andrea Frittoli (andreaf)
>
> [0] https://docs.openstack.org/developer/tempest/test_removal.html#tempest-scope
> [1] https://docs.openstack.org/developer/tempest/plugin.html
> [2] https://docs.openstack.org/developer/tempest/library.html
> [3] http://codesearch.openstack.org/?q=self.orchestration_client=nope==
> [4] https://review.openstack.org/#/c/456843/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] change in behavior from Mitaka to Newton, is this a bug ?

2017-04-13 Thread Rabi Mishra
On Thu, Apr 13, 2017 at 1:04 PM, Saverio Proto wrote:

> Hello,
>
> I am looking at a strange change in default behavior in heat, after the
> upgrade from Mitaka to Newton.
>
> when you do
>
> openstack stack show uuid
>
> you get a table, and there is a property called 'links'
>
> In Mitaka the value was something like
>
> - href: https://URL
>   rel: self
>
> After I upgraded to Newton the 'links' field changed to
>
> http://URL (self)
>
> Also in the process the 's' from https went missing.
>
>
Though I'm not sure, this may be related to the http_proxy_to_wsgi middleware
change introduced in newton, as mentioned here[1].

[1] https://bugs.launchpad.net/heat/+bug/1630778

Maybe add the below to heat.conf and check:

[oslo_middleware]
enable_proxy_headers_parsing = true



> This broke first of all our rally tests, that now fail with "Prohibited
> endpoint redirect"
>
> It also broke Horizon, because if I click on the stack name, I am not
> able to get to the page with the stack property, but I get a red box
> "Error: Unable to retrieve stack.".
>
> Before opening a strange bug that will be marked as "Invalid" I would
> like to understand better what is this 'links' property of the stack,
> and why it changed from Mitaka to Newton ? Is possible this is a
> regression bug ?
>
> Thank you
>
> Saverio
>
>
>
>
>
> --
> SWITCH
> Saverio Proto, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 1573
> saverio.pr...@switch.ch, http://www.switch.ch
>
> http://www.switch.ch/stories
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-04-13 Thread Rabi Mishra
On Thu, Apr 13, 2017 at 2:14 AM, Dan Sneddon  wrote:

> On 04/12/2017 01:22 PM, Thomas Herve wrote:
> > On Wed, Apr 12, 2017 at 9:00 PM, Dan Sneddon 
> wrote:
> >> I'm implementing predictable control plane IPs for spine/leaf, and I'm
> >> running into a problem implementing this in the TripleO Heat templates.
> >>
> >> I have a review in progress [1] that works, but fails on upgrade, so I'm
> >> looking for an alternative approach. I'm trying to influence the IP
> >> address that is selected for overcloud nodes' Control Plane IP. Here is
> >> the current construct:
> >>
> >>   Controller:
> >> type: OS::TripleO::Server
> >> metadata:
> >>   os-collect-config:
> >> command: {get_param: ConfigCommand}
> >> properties:
> >>   image: {get_param: controllerImage}
> >>   image_update_policy: {get_param: ImageUpdatePolicy}
> >>   flavor: {get_param: OvercloudControlFlavor}
> >>   key_name: {get_param: KeyName}
> >>   networks:
> >> - network: ctlplane  # <- Here's where the port is created
> >>
> >> If I add fixed_ip: to the networks element at the end of the above, I
> >> can select an IP address from the 'ctlplane' network, like this:
> >>
> >>   networks:
> >> - network: ctlplane
> >>   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> >>
> >> But the problem is that if I pass a blank string to fixed_ip, I get an
> >> error on deployment. This means that the old behavior of automatically
> >> selecting an IP doesn't work.
> >>
> >> I thought I has solved this by passing an external Neutron port, like
> this:
> >>
> >>   networks:
> >> - network: ctlplane
> >>   port: {get_attr: [ControlPlanePort, port_id]}
> >>
> >> Which works for deployments, but that fails on upgrades, since the
> >> original port was created as part of the Nova::Server resource, instead
> >> of being an external resource.
> >
> > Can you detail how it fails? I was under the impression we never
> > replaced servers no matter what (or we try to do that, at least). Is
> > the issue that your new port is not the correct one?
> >
> >> I'm now looking for a way to use Heat conditionals to apply the fixed_ip
> >> only if the value is not unset. Looking at the intrinsic functions [2],
> >> I don't see a way to do this. Is what I'm trying to do with Heat
> possible?
> >
> > You should be able to write something like that (not tested):
> >
> > networks:
> >   if:
> >     - fixed_ip_condition
> >     - network: ctlplane
> >       fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> >     - network: ctlplane
> >
> > The question is how to define your condition. Maybe:
> >
> > conditions:
> >   fixed_ip_condition:
> >     not:
> >       equals:
> >         - {get_attr: [ControlPlanePort, ip_address]}
> >         - ''
> >
> > To get back to the problem you stated first.
> >
> >
> >> Another option I'm exploring is conditionally applying resources. It
> >> appears that would require duplicating the entire TripleO::Server stanza
> >> in *-role.yaml so that there is one that uses fixed_ip and one that does
> >> not. Which one is applied would be based on a condition that tested
> >> whether fixed_ip was blank or not. The downside of that is that it would
> >> make the role definition confusing because there would be a large
> >> resource that was implemented twice, with only one line difference
> >> between them.
> >
> > You can define properties with conditions, so you shouldn't need to
> > rewrite everything.
> >
>
> Thomas,
>
> Thanks, I will try your suggestions and that should get me closer.
>
> The full error log is available here:
> http://logs.openstack.org/78/413278/11/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/8d91762/console.html
>
We do an interface_detach/attach when a port is replaced.
It seems to be failing[1], as this is not implemented for the ironic/baremetal
driver. I could see a patch[2] to add that functionality, though.

[1]
http://logs.openstack.org/78/413278/11/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/8d91762/logs/undercloud/var/log/nova/nova-compute.txt.gz#_2017-04-12_00_26_15_475

[2] https://review.openstack.org/#/c/419975/

We retry a few times to check whether the detach/attach is complete (it's an
async operation in nova and takes time), so the cryptic error below is
coming from the tenacity library, which fails after the configured number of
attempts.
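
For anyone curious, the retry is the usual tenacity decorator pattern; a
rough sketch (the attempt/wait numbers and names here are illustrative, not
the exact heat code):

import tenacity

@tenacity.retry(
    stop=tenacity.stop_after_attempt(10),
    wait=tenacity.wait_fixed(5),
    retry=tenacity.retry_if_result(lambda done: not done),
)
def check_detach_complete(nova, server_id, port_id):
    # True once the port is gone from the server; if every attempt
    # returns False, tenacity raises the RetryError seen in the log
    ports = nova.servers.interface_list(server_id)
    return all(p.port_id != port_id for p in ports)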

> Here are the errors I am getting:
>
> 2017-04-12 00:26:34.436655 | 2017-04-12 00:26:29Z
> [overcloud-CephStorage-bkucn6ign34i-0-2yq2jbtwuu7k.CephStorage]:
> UPDATE_FAILED  RetryError: resources.CephStorage:
> RetryError[<Future at 0xdd62550 state=finished returned bool>]
> 2017-04-12 00:26:34.436808 | 2017-04-12 00:26:29Z
> [overcloud-CephStorage-bkucn6ign34i-0-2yq2jbtwuu7k]: UPDATE_FAILED
> RetryError: resources.CephStorage: RetryError[ state=finished returned bool>]
> 2017-04-12 00:26:34.436903 | 2017-04-12 00:26:29Z
>

Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-17 Thread Rabi Mishra
On Fri, Feb 17, 2017 at 8:44 PM, Matt Riedemann <mriede...@gmail.com> wrote:

> On 2/15/2017 12:40 PM, Zane Bitter wrote:
>
>> Traditionally Heat has given current and former PTLs of the project +2
>> rights on stable branches for as long as they remain core reviewers.
>> Usually I've done that by adding them to the heat-release group.
>>
>> At some point the system changed so that the review rights for these
>> branches are no longer under the team's control (instead, the
>> stable-maint core team is in charge), and as a result at least the
>> current PTL (Rico Lin) and the previous PTL (Rabi Mishra), and possibly
>> others (Thomas Herve, Sergey Kraynev), haven't been added to the group.
>> That's slowing down getting backports merged, amongst other things.
>>
>> I'd like to request that we update the membership to be the same as
>> https://review.openstack.org/#/admin/groups/152,members
>>
>> Rabi Mishra
>> Rico Lin
>> Sergey Kraynev
>> Steve Baker
>> Steven Hardy
>> Thomas Herve
>> Zane Bitter
>>
>> I also wonder if the stable-maint team would consider allowing the Heat
>> team to manage the group membership again if we commit to the criteria
>> above (all current/former PTLs who are also core reviewers) by just
>> adding that group as a member of heat-stable-maint?
>>
>> thanks,
>> Zane.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Reviewing patches on stable branches have different guidelines, expressed
> here [1]. In the past when this comes up I've asked if the people being
> asked to be added to the stable team for a project have actually been doing
> reviews on the stable branches to show they are following the guidelines,
> and at times when this has come up the people proposed (usually PTLs)
> haven't, so I've declined at that time until they start actually doing
> reviews and can show they are following the guidelines.
>
> There are reviewstats tools for seeing the stable review numbers for Heat,
> I haven't run that though to check against those proposed above, but it's
> probably something I'd do first before just adding a bunch of people.
>

Would it not be appropriate to trust the stable cross-project liaison for
heat when he nominates stable cores? Having been the PTL for Ocata, and one
who struggled to get the backports in on time for a stable release as
planned, I don't recall seeing many reviews from the stable maintenance core
team that would let them judge the quality of our reviews. So I don't think
it's fair to decide eligibility based only on review numbers and stats.


> [1] https://docs.openstack.org/project-team-guide/stable-branches.html
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ______
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] glance v2 support?

2017-01-10 Thread Rabi Mishra
On Tue, Jan 10, 2017 at 1:03 PM, Flavio Percoco <fla...@redhat.com> wrote:

> On 10/01/17 12:35 +0530, Rabi Mishra wrote:
>
>> On Mon, Jan 9, 2017 at 4:45 PM, Flavio Percoco <fla...@redhat.com> wrote:
>>
>> On 06/01/17 09:34 +0530, Rabi Mishra wrote:
>>>
>>> On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi <emil...@redhat.com>
>>>> wrote:
>>>>
>>>> Greetings Heat folks!
>>>>
>>>>>
>>>>> My question is simple:
>>>>> When do you plan to support Glance v2?
>>>>> https://review.openstack.org/#/c/240450/
>>>>>
>>>>> The spec looks staled while Glance v1 was deprecated in Newton (and v2
>>>>> was started in Kilo!).
>>>>>
>>>>>
>>>>> Hi Emilien,
>>>>>
>>>>
>>>> I think we've not been able to move to v2 due to v1/v2
>>>> incompatibility[1]
>>>> with respect to the location[2] property. Moving to v2 would break all
>>>> existing templates using that property.
>>>>
>>>> I've seen several discussions around that without any conclusion.  I
>>>> think
>>>> we can support a separate v2 image resource and deprecate the current
>>>> one,
>>>> unless there is a better path available.
>>>>
>>>>
>>> Hi Rabi,
>>>
>>> Could you elaborate on why Heat depends on the location attribute? I'm
>>> not
>>> familiar with Heat and knowing this might help me to propose something
>>> (or
>>> at
>>> least understand the difficulties).
>>>
>>> I don't think putting this on hold will be of any help. V1 ain't coming
>>> back and
>>> the improvements for v2 are still under heavy coding. I'd probably
>>> recommend
>>> moving to v2 with a proper deprecation path rather than sticking to v1.
>>>
>>>
>>> Hi Flavio,
>>
>> As much as we would like to move to v2, I think we still don't have a
>> acceptable solution for the question below. There is an earlier ML
>> thread[1], where it was discussed in detail.
>>
>> - What's the migration path for images created with v1 that use the
>> location attribute pointing to an external location?
>>
>
> Moving to Glance v2 shouldn't break this. As in, Glance will still be able
> to
> pull the images from external locations.
>
> Also, to be more precise, you actually *can* use locations in V2.
> Glance's node needs to have 2 settings enabled. The first is
> `show_multiple_locations` and the second one is a policy config[0]. It's
> however not recommended to expose that to end users, but that's why it was
> shielded behind policies.
>
> I'd recommend Heat to not use locations as that will require deployers to
> either
> enable them for everyone or have a dedicate glance-api node for Heat.
>
> All that being said, switching to v2 won't prevent Glance from reading
> images
> from external locations if the image records exist already.
>
> [0] https://github.com/openstack/glance/blob/master/etc/policy.j
> son#L16-L18
>
> While answering the above we've to keep in mind the following constraint.
>>
>> - Any change in the image id(new image) would potentially result in nova
>> servers using them in the template being rebuilt/replaced, and we would
>> like to avoid it.
>>
>> There was a suggestion to allow the 'copy-from'  with v2, which would
>> possibly make it easier for us. Is that still an option?
>>
>
> May be, in the long future. The improvements for v2 are still under heavy
> development.
>
> I assume we can probably use glance upload api to upload the image
>> data(after getting it from the external location) for an existing image?
>> Last time i tried to do it, it seems to be not allowed for an 'active'
>> image. It's  possible I'm missing something here.  We don't have a way at
>> present,  for a user to upload an image to heat engine( not sure if we
>> would like do to it either) or heat engine downloading the image from an
>> 'external location' and then uploading it to glance while
>> creating/updating
>> an image resource.
>>
>
> Downloading the image locally and uploading it is a workaround, yes. Not
> ideal
> but it's simple. However, you won't need it for the migration to v2, I
> believe,
> since you can re-use existing images.


AFAIK, we can't do without it unless 'copy-from' is made available soon,
for two reasons.

- Image files are always 'external' to heat-engine

Re: [openstack-dev] [heat] glance v2 support?

2017-01-09 Thread Rabi Mishra
On Mon, Jan 9, 2017 at 4:45 PM, Flavio Percoco <fla...@redhat.com> wrote:

> On 06/01/17 09:34 +0530, Rabi Mishra wrote:
>
>> On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi <emil...@redhat.com>
>> wrote:
>>
>> Greetings Heat folks!
>>>
>>> My question is simple:
>>> When do you plan to support Glance v2?
>>> https://review.openstack.org/#/c/240450/
>>>
>>> The spec looks staled while Glance v1 was deprecated in Newton (and v2
>>> was started in Kilo!).
>>>
>>>
>>> Hi Emilien,
>>
>> I think we've not been able to move to v2 due to v1/v2 incompatibility[1]
>> with respect to the location[2] property. Moving to v2 would break all
>> existing templates using that property.
>>
>> I've seen several discussions around that without any conclusion.  I think
>> we can support a separate v2 image resource and deprecate the current one,
>> unless there is a better path available.
>>
>
> Hi Rabi,
>
> Could you elaborate on why Heat depends on the location attribute? I'm not
> familiar with Heat and knowing this might help me to propose something (or
> at
> least understand the difficulties).
>
> I don't think putting this on hold will be of any help. V1 ain't coming
> back and
> the improvements for v2 are still under heavy coding. I'd probably
> recommend
> moving to v2 with a proper deprecation path rather than sticking to v1.
>
>
Hi Flavio,

As much as we would like to move to v2, I think we still don't have an
acceptable solution for the question below. There is an earlier ML
thread[1] where it was discussed in detail.

- What's the migration path for images created with v1 that use the
location attribute pointing to an external location?

While answering the above, we have to keep in mind the following constraint.

- Any change in the image id (i.e. a new image) would potentially result in
the nova servers using it in a template being rebuilt/replaced, and we would
like to avoid that.

There was a suggestion to allow 'copy-from' with v2, which would
possibly make it easier for us. Is that still an option?

I assume we can probably use glance upload api to upload the image
data(after getting it from the external location) for an existing image?
Last time i tried to do it, it seems to be not allowed for an 'active'
image. It's  possible I'm missing something here.  We don't have a way at
present,  for a user to upload an image to heat engine( not sure if we
would like do to it either) or heat engine downloading the image from an
'external location' and then uploading it to glance while creating/updating
an image resource.
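
For what it's worth, the 'download locally and upload' workaround would
look roughly like this with the glance v2 client (a sketch only; the
endpoint/token and image attributes below are placeholders):

    import requests
    from glanceclient import Client

    glance = Client('2', endpoint=GLANCE_ENDPOINT, token=TOKEN)  # placeholders

    # Fetch the image data from the external location ourselves...
    data = requests.get('http://example.com/images/my-image.qcow2',
                        stream=True).raw

    # ...and upload it to a *new* image; glance v2 does not appear to
    # allow attaching data to an existing 'active' image, so the image
    # id inevitably changes.
    image = glance.images.create(name='migrated-image',
                                 disk_format='qcow2',
                                 container_format='bare')
    glance.images.upload(image['id'], data)

Which, of course, produces a new image id and runs straight into the
rebuild/replace constraint above.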

Also, glance location api could probably have been useful here. However, we
were advised in the earlier thread not to use it, as exposing the location
to the end user is perceived as a security risk.


[1]  http://lists.openstack.org/pipermail/openstack-dev/2016-May/094598.html


Cheers,
> Flavio
>
>
>> [1] https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
>> [2] https://github.com/openstack/heat/blob/master/heat/engine/
>> resources/openstack/glance/image.py#L107-L112
>>
>>
>> As an user, I need Glance v2 support so I can remove Glance Registry
>>> from my deployment. and run pure v2 everywhere in my cloud.
>>>
>>> Thanks for your help,
>>> --
>>> Emilien Macchi
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Regards,
>> Rabi Misra
>>
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Rolling upgrades vs. duplication of prop data

2017-01-05 Thread Rabi Mishra
On Thu, Jan 5, 2017 at 10:28 PM, Zane Bitter  wrote:

> On 05/01/17 11:41, Crag Wolfe wrote:
>
>> Hi,
>>
>> I have a patch[1] to support the de-duplication of resource properties
>> data between events and resources. In the ideal rolling-upgrade world,
>> we would be writing data to the old and new db locations, only reading
>> from the old in the first release (let's assume Ocata). The problem is
>> that in this particular case, we would be duplicating a lot of data, so
>> [1] for now does not take that approach. I.e., it is not rolling-upgrade
>> friendly.
>>
>> So, we need to decide what to do for Ocata:
>>
>> A. Support assert:supports-upgrade[2] and resign ourselves to writing
>> duplicated resource prop. data through Pike (following the standard
>> strategy of write to old/new and read from old, write to old/new and
>> read from new, write/read from new over O,P,Q).
>>
>> B. Push assert:supports-upgrade back until Pike, and avoid writing
>> resource prop. data in multiple locations in Ocata.
>>
>
> +1
>
> Rabi mentioned that we don't yet have tests in place to claim the tag in
> Ocata anyway, so I vote for making it easy on ourselves until we have to.
> Anything that involves shifting stuff between tables like this inevitably
> gets pretty gnarly.
>
>
Yeah, as per the governance requirements, to claim the tag we would need
gate tests to validate that mixed-version services work together
properly[1]. We would probably need a multi-node grenade job running
services of n-1/n releases.

I could not find one for any other project to refer to, though there are
a few projects that already have this tag.


[1]
https://governance.openstack.org/tc/reference/tags/assert_supports-rolling-upgrade.html#requirements

>> C. DB triggers.
>
> -2! -2!
>
>> I vote for B. I'm pretty sure there is not much support for C (count me
>> in that group :), but throwing it out there just in case.
>>
>> Thanks,
>>
>> --Crag
>>
>> [1] https://review.openstack.org/#/c/363415/
>>
>> [2] https://review.openstack.org/#/c/407989/
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] glance v2 support?

2017-01-05 Thread Rabi Mishra
On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi  wrote:

> Greetings Heat folks!
>
> My question is simple:
> When do you plan to support Glance v2?
> https://review.openstack.org/#/c/240450/
>
> The spec looks staled while Glance v1 was deprecated in Newton (and v2
> was started in Kilo!).
>
>
Hi Emilien,

I think we've not been able to move to v2 due to v1/v2 incompatibility[1]
with respect to the location[2] property. Moving to v2 would break all
existing templates using that property.

I've seen several discussions around that without any conclusion. I think
we can support a separate v2 image resource and deprecate the current one,
unless there is a better path available.


[1] https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
[2] https://github.com/openstack/heat/blob/master/heat/engine/
resources/openstack/glance/image.py#L107-L112


> As an user, I need Glance v2 support so I can remove Glance Registry
> from my deployment. and run pure v2 everywhere in my cloud.
>
> Thanks for your help,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Pike PTG sessions

2017-01-05 Thread Rabi Mishra
Hi All,

I've started an etherpad[1] to collect topic ideas for the PTG. We would
have a meeting room for 3 days (Wednesday-Friday). Feel free to add whatever
you think we should discuss/implement.

Basic information about the PTG (schedule, layout etc) is available at
https://www.openstack.org/ptg/ .

[1] https://etherpad.openstack.org/p/heat-pike-ptg-sessions

---
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] project specific question for the next user survey

2017-01-01 Thread Rabi Mishra
Hi All,

We have an opportunity to submit a heat adoption related question (for
those who are USING, TESTING, or INTERESTED in heat) to be included in the
User Survey.

Please provide your suggestions/questions. *The deadline for this is 9th
Jan.*

-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] No IRC meeting next week

2016-12-21 Thread Rabi Mishra
Hi All,



As discussed in the meeting today, I'm cancelling the next IRC meeting on 28th
Dec. We'll meet again on 4th Jan 2017.


Wish you all a merry Christmas and a happy new year.

-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] A question on creating Manila Share

2016-12-08 Thread Rabi Mishra
Hi zengchen,

Yeah, the constraint looks incorrect. Not sure if we got it wrong or Manila
has changed it afterwards. It would be good to raise a bug/propose a fix.
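
If we simply mirror Manila's allowed values, the fixed schema would
presumably look something like this (an untested sketch; whether 'domain'
needs to be kept for backward compatibility with existing templates is
worth checking as part of the fix):

    ACCESS_TYPE: properties.Schema(
        properties.Schema.STRING,
        _('Type of access that should be provided to guest.'),
        constraints=[constraints.AllowedValues(
            ['ip', 'user', 'cert', 'cephx'])],
        required=True
    ),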


On Fri, Dec 9, 2016 at 8:26 AM, zengchen  wrote:

> Hi, Heat stackers:
> May I ask a question about creating Manila Share. I see Heat define
> some constraints
>  for property schema 'ACCESS_TYPE' at
>  heat.engine.resources.openstack.manila.share.properties_schema[ACCESS_
> RULES].
>  I copy the codes as bellow. The allowed values for 'ACCESS_TYPE' are  'ip',
> 'domain'.
>
> ACCESS_TYPE: properties.Schema(
> properties.Schema.STRING,
> _('Type of access that should be provided to guest.'),
> constraints=[constraints.AllowedValues(
> ['ip', 'domain'])],
> required=True
> ),
>
> However, I see Manila has defined different allowed values for
> 'ACCESS_TYPE', which include 'ip', 'user', 'cert', 'cephx'. So, my
> question is: does heat need some updates? Or do I miss something? Hope
> for your reply. Thanks very much!
>
>
> cheers
>
> zengchen
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Heat: signaling using SW config/deployment API.

2016-12-01 Thread Rabi Mishra
Moving to openstack-dev for more visibility and discussion.

We currently have a signal API for heat resources (not for standalone
software config/deployment). However, you can probably use a workaround
with a swift temp_url, like tripleo[1], to achieve your use case.

We do have an rpc api[2] for signalling deployments. It would probably not
be that difficult to add REST API support for native/cfn signalling, though
I don't know if there are more reasons for it not being added yet.

Steve Baker (the original author) would probably know more about it and can
give you a better answer :)
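
For illustration, the tripleo-style workaround boils down to roughly this
(a loose sketch modelled on [1]; the session, ids and temp-url key are
assumptions):

    from heatclient.client import Client
    from swiftclient import utils as swift_utils

    heat = Client('1', session=session)  # keystoneauth session assumed

    # Generate a Swift temp URL for the instance to PUT its signal data to.
    signal_url = swift_utils.generate_temp_url(
        '/v1/AUTH_%s/%s/%s' % (project_id, container, object_name),
        3600, temp_url_key, 'PUT')

    # Create the standalone deployment, passing the temp URL explicitly
    # as the signal target.
    heat.software_deployments.create(
        config_id=config_id,
        server_id=server_id,
        action='CREATE',
        status='IN_PROGRESS',
        input_values={'deploy_signal_transport': 'TEMP_URL_SIGNAL',
                      'deploy_signal_id': signal_url})

    # The caller then polls the Swift object for the signal payload.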


[1]
https://github.com/openstack/tripleo-common/blob/master/tripleo_common/actions/deployment.py
[2]
https://github.com/openstack/heat/blob/master/heat/engine/service_software_config.py#L262

On Wed, Nov 30, 2016 at 5:54 PM, Pasquale Lepera 
wrote:

> Hi,
> we're trying to use the Heat Software configuration APIs, but we're facing
> a problem with the signaling.
> We quite well know how to use Software config/deployment inside stack
> Templates, and in that case what we got on the target VM is something like
> this:
>
> #os-collect-config --print
> inputs:[
> …
>  {
>   "type": "String",
>   "name": "deploy_signal_transport",
>   "value": "CFN_SIGNAL",
>   "description": "How the server should signal to heat with the
> deployment output values."
>  },
>  {
>   "type": "String",
>   "name": "deploy_signal_id",
>   "value": "http://ctrl-liberty.nuvolacsi.it:8000/v1/signal/arn%
> 3Aopenstack%3Aheat%3A%3Ab570fe9ea2c94cb8ba72fe07fa034b62%
> 3Astacks%2FStack_test_from_view_galera-53040%2F15d0e95a-
> e422-4994-9f17-bb2f543952f7%2Fresources%2Fdeployment_sw_
> mariadb2?Timestamp=2016-11-24T16%3A35%3A12Z=HmacSHA256&
> AWSAccessKeyId=72ef8cef2e174926b74252754617f347&
> SignatureVersion=2=H5QcAv7yIZgBQzhztb4%2B0NJi7Z3q
> O%2BmwToqINUiKbvw%3D",
>   "description": "ID of signal to use for signaling output values"
>  },
>  {
>   "type": "String",
>   "name": "deploy_signal_verb",
>   "value": "POST",
>   "description": "HTTP verb to use for signaling output values"
>  }
>
> This part, we suppose, is generated by heat during the Template processing
> and is pushed to the target so that, when the deployment is finished, the
> os-apply-config uses CFN to signal to the orchestrator the SUCCESS/FAILED
> job.
>
> The problem is that, when we try to use directly the software config
> creation API and the deployment one, what we got in the target VM is
> something like this:
>
> #os-collect-config --print
> ...
>{
> "inputs": [],
> "group": null,
> "name": "test_key_gen_9aznXZ7DE9",
> "outputs": [],
> "creation_time": "2016-11-24T15:50:50",
> "options": {},
> "config": "#!/bin/bash\ntouch /tmp/test \nhostname > /tmp/test \n",
> "id": "d9395163-4238-4e94-902f-1e8abdbfa2bb"
>}
>
> This appens because we pass to the create SW config API no explicit
> parameter in the “inputs” key.
> Of course, this config causes no signaling back to Heat.
>
> So the questions are:
>
> Is it possible to use the cfn signaling with the software
> configuration/deployment creation APIs?
>
> How?
>
> Is it possible to have a signaling back to the orchestration without
> passing manually a deploy_signal_id inside the API's configuration
> parameters?
>
> If not, another way to give a signal back to Orchestrator, could be a
> workaround creating a self-standing stack containing only
> “OS::Heat::WaitCondition” and “OS::Heat::WaitConditionHandlewaitsignals”
> resources, but before using this workaround we want to be sure that is not
> possible in other ways.
>
> Thanks
>
> Pasquale
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][stable] Fwd: [Openstack-stable-maint] Stable check of openstack/heat failed

2016-11-30 Thread Rabi Mishra
On Wed, Nov 30, 2016 at 5:53 PM, Ihar Hrachyshka 
wrote:

> Anyone working on heat stable periodic doc job fix? Any patches to review?
>
> I see it failing for quite some time for L and M.
>
https://review.openstack.org/#/c/404725/ should fix
periodic-heat-docs-mitaka. We decided not to fix it for liberty (to be EOL
tagged).

> Begin forwarded message:
>
> *From: *"A mailing list for the OpenStack Stable Branch test reports." <
> openstack-stable-ma...@lists.openstack.org>
> *Subject: **[Openstack-stable-maint] Stable check of openstack/heat
> failed*
> *Date: *30 November 2016 at 07:17:02 GMT+1
> *To: *openstack-stable-ma...@lists.openstack.org
> *Reply-To: *openstack-dev@lists.openstack.org
>
> Build failed.
>
> - periodic-heat-docs-liberty http://logs.openstack.org/
> periodic-stable/periodic-heat-docs-liberty/165492a/ : FAILURE in 2m 51s
> - periodic-heat-python27-db-liberty http://logs.openstack.org/
> periodic-stable/periodic-heat-python27-db-liberty/3a60bd4/ : SUCCESS in
> 6m 08s
> - periodic-heat-docs-mitaka http://logs.openstack.org/
> periodic-stable/periodic-heat-docs-mitaka/58868c7/ : FAILURE in 2m 49s
> - periodic-heat-python27-db-mitaka http://logs.openstack.org/
> periodic-stable/periodic-heat-python27-db-mitaka/990c8e6/ : SUCCESS in
> 13m 05s
> - periodic-heat-docs-newton http://logs.openstack.org/
> periodic-stable/periodic-heat-docs-newton/f1a64db/ : SUCCESS in 2m 32s
> - periodic-heat-python27-db-newton http://logs.openstack.org/
> periodic-stable/periodic-heat-python27-db-newton/c1fc398/ : SUCCESS in 9m
> 43s
>
> ___
> Openstack-stable-maint mailing list
> openstack-stable-ma...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Design Summit Summary

2016-11-01 Thread Rabi Mishra
Hi All,

We had a number of very productive sessions at the summit last week. I
thought it would be good to summarize them in a ML thread, for us to be
able to prioritize and work on them. As the Ocata cycle is shorter than
usual, we may not be able to finish a lot of them.

More details on the discussions and action items are available in the
summit etherpads[1].

1. Convergence Phase-1

We all agreed that one of our top priorities this cycle would be to make
the convergence engine perform at par with, if not better than, legacy.
Though convergence is expected to perform better in a scaled-out/distributed
heat deployment, we would try to make it optimal for non-distributed
deployments like the 'TripleO undercloud' as well. Some of the action items
to achieve this are as below:

   - Investigate memory issue w/ convergence and identify some quick gains.


   - Store outputs of stacks to avoid loading stacks for output retrieval
   and avoid making client calls for outputs.


   - Add heat job with stable tripleo templates (any large stack would
   probably do) and nova fake virt driver.


   - Test with python3 (profile with tracemalloc)


   - Investigate possibility of using proton for RPC


In addition to the above we would improve/add documentation for convergence
with architecture and migration (from legacy to convergence) docs.

2. Convergence Phase-2

   - Merge the remaining 'observe reality' patches


   - Add special flag for stack-update using get reality

We may not be able to do more on convergence phase-2 (i.e. continuous
observer) in this cycle.

3. Rolling Upgrade

As a community-wide goal, rolling upgrades would be one of our priorities
for the Ocata cycle. This would involve testing out the different
approaches we discussed on the ML and during the summit.


   - vhost change solution


   - DB triggers vs writing to multiple locations etc


   - Grenade job for rolling upgrade from stable to master

4. Validation Improvements

We agreed that this work would mostly spill over to the next cycle. At a
minimum, we would write/freeze the spec on validation improvements
(renaming the validations, placeholder stuff, output schema etc.)

5. API Versioning

We would create specs on resource versioning and the v2 API, before we
discuss supporting API microversions in later cycles.

6. Test Improvements and Defcore

We agreed to use gabbi to write a new set of REST API tests in the heat
tree and then propose a subset of them to tempest for DefCore.

7. Heat Maturity

Our discussion with the ops-tags-team was very productive. We seem to
support 9 SDKs (2 more than the 7 credited to us) and this has now been
corrected.

From the user survey point of view (use of heat in production deployments),
we are probably missing some score, as users of projects like magnum,
sahara, murano and tacker in production may not be mentioning heat. After
we send a mail confirming that, heat would be implicitly added by the
survey team when one of those projects is mentioned by the end user.


Note: I may have missed some important action items that should be
communicated here. Please feel free to add them as necessary.

[1] https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Heat

Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Team meeting this week cancelled

2016-10-24 Thread Rabi Mishra
Hi All,

I'm cancelling our IRC meeting this week, as many of us are attending the
Summit. We would have our meeting next week as scheduled.

-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Rolling Upgrades

2016-10-20 Thread Rabi Mishra
Thanks Crag for starting the thread. A few comments inline.

On Fri, Oct 21, 2016 at 5:32 AM, Crag Wolfe  wrote:

> At Summit, folks will be discussing the rolling upgrade issue across a
> couple of sessions. I personally won't be able to attend, but thought
> I would share my thoughts on the subject.
>
> To handle rolling upgrades, there are two general cases to consider:
> database model changes and RPC method signature changes.
>
> For DB Model changes (this has already been well discussed on the
> mailing list, see the footnotes), let's assume for the moment we don't
> want to use triggers. If we are moving data from one column/table to
> another, the pattern looks like:
>
> legacy release: write to old location
> release+1: write to old and new location, read from old
> release+2: write to old and new location, read from new,
>provide migration utility
> release+3: write to new location, read from new
>
Not sure I understand this. Is it always about changing the table name or
column name of a table? What about adding a new column to an existing
table? I assume the db api implementation has to ignore the additional
column values when writing to the old location.


> Works great! The main issue is if the duplicated old and new data
> happens to be large. For a heat-specific example (one that is close to
> my heart), consider moving resource/event properties data into a
> separate table.
>
> We could speed up the process by adding config variables that specify
> where to read from, but that is putting a burden on the operator,
> creating a risk that data is lost if the config variables are not
> updated in the correct order after each full rolling restart, etc.
>
> Which brings us back to triggers. AFAIK, only sqlalchemy+mariadb is
> being used in production, so we really only have one backend we would
> have to write triggers for. If the data duplication is too unpalatable
> for a given migration (using the +1, +2, +3 pattern above), we may
> have to wade into the less simple world of triggers.
>
I think we can enable the trigger only during the upgrade process and then
disable it.


> For RPC changes, we don't have a great solution right now (looking
> specifically at heat/engine/service.py). If we add a field, an older
> running heat-engine will break if it receives a request from a newer
> running heat-engine. For a relevant example, consider adding the
> "root_id" as an argument (
> https://review.openstack.org/#/c/354621/13/heat/engine/service.py ).
>
> Looking for the simplest solution -- if we introduce a mandatory
> "future_args" arg (a dict) now to all rpc methods (perhaps provide a
> decorator to do so), then we could follow this pattern post-Ocata:
>
> legacy release: accepts the future_args param (but does nothing with it).
> release+1: accept the new parameter with a default of None,
>pass the value of the new parameter in future_args.
> release+2: accept the new parameter, pass the value of the new parameter
>in its proper placeholder, no longer in future_args.
>
This is similar to the approach being used by neutron for the agents,
i.e. consistently capturing new/unknown arguments with keyword arguments
and ignoring them on the agent side, and not enforcing newer RPC entry
point versions on the server side. However, this makes the rpc api less
strict, which is not ideal.
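
To illustrate, a hypothetical sketch of such a 'future_args' decorator
(not actual heat code; the names are made up):

    import functools

    def with_future_args(func):
        """Accept (and drop) arguments this release doesn't know about."""
        @functools.wraps(func)
        def wrapper(self, ctxt, *args, **kwargs):
            # A newer engine packs parameters we don't understand yet into
            # 'future_args'; this release simply ignores them. In release+2
            # the parameter would be promoted to a real named argument.
            kwargs.pop('future_args', None)
            return func(self, ctxt, *args, **kwargs)
        return wrapper

    class EngineService(object):
        @with_future_args
        def stack_create(self, ctxt, stack_name, template):
            pass  # handle the request with the arguments we know about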

The best way would be to do some kind of rpc pinning on the new engines
when they send messages (new engines can still receive old messages). I was
also wondering if it's possible/a good idea to restrict engines from
communicating with other engines during the upgrade process.

> But, we don't have a way of deleting args. That's not super
> awful... old args never die, they just eventually get ignored. As for
> adding new api's, the pattern would be to add them in release+1, but
> not call them until release+2. [If we really have a case where we need
> to add and use a new api in release+1, the solution may be to have two
> rpc api messaging targets in release+1, one for the previous
> major.minor release and another for the major+1.0 release that has the
> new api. Then, we of course we could remove outdated args in
> major+1.0.]
>
I'm not sure we ever delete args, as we make the rpc servers backward
compatible.


> Finally, a note about Oslo versioned objects: they don't really help
> us. They work great for nova where there is just nova-conductor
> reading and writing to the DB, but we have multiple heat-engines doing
> that that need to be restarted in a rolling manner. See the references
> below for greater detail.
>
> --Crag
>
> References
> --
>
> [openstack-dev] [Heat] Versioned objects upgrade patterns
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/thread.html#95245
>
> [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades:
> database triggers and oslo.versionedobjects
> http://lists.openstack.org/pipermail/openstack-dev/2016-
> 

Re: [openstack-dev] [Heat] OpenStack Summit Barcelona: Heat Meetup (evening)

2016-10-20 Thread Rabi Mishra
Thanks Rico for taking the initiative. Appreciate it. I'm fine with any
location on any evening other than Monday.

On Thu, Oct 20, 2016 at 8:58 AM, Rico Lin  wrote:

> Hi everyone,
>
> We're planning to have an evening Heat contributors meetup in Barcelona
> Summit.
> We would like every contributor, ops, users join us and have fun.
> We need to decide which day of that week would be most suited for all of
> us. If you would like to attend, please put your name and possible days at:
> http://doodle.com/poll/dyy6tdnawchnddvy
>
> As for location, feel free to suggest any.
> I would suggest `Bambu Beach Bar`[1], drink and tapas which nearby venue,
> or `Cervecería Catalana`[2] and Tapas 24 [3] which a little far from the
> venue. All nice and relax places(Not like the evening place from the last
> summit I promise!!). Most importantly, all place served beers and drinks(
> This is very essential if we want to attract our Steve!!).
>
>
> [1] https://www.tripadvisor.com.tw/Restaurant_Review-
> g187497-d4355271-Reviews-Bambu_Beach_Bar-Barcelona_Catalonia.htm
> [2] https://www.tripadvisor.com.tw/Restaurant_Review-
> g187497-d782944-Reviews-Cerveceria_Catalana-Barcelona_Catalonia.html
> [3] https://www.tripadvisor.com.tw/Restaurant_Review-
> g187497-d1314895-Reviews-Tapas_24-Barcelona_Catalonia.html
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin*irc: ricolin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Design Summit Schedule

2016-10-14 Thread Rabi Mishra
Hi All,

As agreed in the last team meeting, I've pushed the final design summit
schedule[1][2] for heat. Please have a look and let me know if we need to
change anything (I assume we can still make some changes in the last week
before the summit).

Note: I've added some of us as moderators/chairs for few sessions (mostly
fishbowl ones).

[1]
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Heat%3A
[2] https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Heat

-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [magnum] Subjects to discuss during the summit

2016-10-11 Thread Rabi Mishra
Hi Spyros,

Thanks for starting this thread. My initial understanding was that the
planned session would be more around heat performance/scalability issues
w/ magnum.

As most of the additional stuff you mentioned is around heat best
practices, I think the specs/reviews would be a great place to start the
discussion, and we can also squeeze them in as part of the same session.

Some comments inline.

On Mon, Oct 10, 2016 at 9:24 PM, Spyros Trigazis  wrote:

> Hi Sergey,
>
> I have seen the session, I wanted to add more details to
> start the discussion earlier and to be better prepared.
>
> Thanks,
> Spyros
>
>
> On 10 October 2016 at 17:36, Sergey Kraynev  wrote:
>
>> Hi Spyros,
>>
>> AFAIK we already have special session slot related with your topic.
>> So thank you for the providing all items here.
>> Rabi, can we add link on this mail to etherpad ? (it will save our time
>> during session :) )
>>
>> On 10 October 2016 at 18:11, Spyros Trigazis  wrote:
>>
>>> Hi heat and magnum.
>>>
>>> Apart from the scalability issues that have been observed, I'd like to
>>> add few more subjects to discuss during the summit.
>>>
>>> 1. One nested stack per node and linear scale of cluster creation
>>> time.
>>>
>>> 1.1
>>> For large stacks, the creation of all nested stack scales linearly. We
>>> haven't run any tested using the convergence-engine.
>>>
>>
From what I understand, magnum uses ResourceGroups and Template Resources
(ex. Cluster->RGs->master/nodes) to build the cluster.

As the nested stack operations happen over rpc, they should be distributed
across all available engines. So, the finding that the build time increases
linearly is not good. It would probably be worth providing more details of
the heat configuration (ex. number of engine workers etc.) on your test
setup. It would also be useful to do some tests with convergence enabled,
as that is the default from Newton.

Magnum seems to use a collection of software configs (scripts) as a
multipart MIME with the server user_data. So the build time for every node
would depend on the time taken by these scripts at boot.

1.2
>>> For large stacks, 1000 nodes, the final call to heat to fetch the
>>> IPs for all nodes takes 3 to 4 minutes. In heat, the stack has status
>>> CREATE_COMPLETE but magnum's state is updated when this long final
>>> call is done. Can we do better? Maybe fetch only the master IPs or
>>> get he IPs in chunks.
>>>
>>

We seem to load the nested stacks in memory to retrieve their outputs. That
would probably explain the behaviour above, where you load all the nested
stacks for the nodes to fetch their IPs. There is some work[1][2] happening
atm to change that.

[1] https://review.openstack.org/#/c/383839/
[2] https://review.openstack.org/#/c/384718
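
As a client-side mitigation in the meantime, magnum could perhaps fetch a
single output instead of the fully resolved stack (a sketch; assumes a
python-heatclient new enough to have output_show, and 'master_addresses'
is a hypothetical top-level output name):

    from heatclient.client import Client

    heat = Client('1', session=session)  # keystoneauth session assumed

    # Ask for just the one output we need, rather than resolving them all.
    out = heat.stacks.output_show(cluster_stack_id, 'master_addresses')
    master_ips = out['output']['output_value']

Whether that actually avoids loading all the nested stacks on the server
side is exactly what [1][2] are about, though.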


> 1.3
>>> After the stack create API call to heat, magnum's conductor
>>> busy-waits heat with a thread/cluster. (In case of a magnum conductor
>>> restart, we lose that thread and we can't update the status in
>>> magnum). Investigate better ways to sync the status between magnum
>>> and heat.
>>>
Rather than waiting/polling, you could probably implement an observer that
consumes events from the heat event-sink and updates magnum accordingly?
Maybe there are better options too.


> 2. Next generation magnum clusters
>>>
>>> A need that comes up frequently in magnum is heterogeneous clusters.
>>> * We want to able to create cluster on different hardware, (e.g. spawn
>>>   vms on nodes with SSDs and nodes without SSDs or other special
>>>   hardware available only in some nodes of the cluster FPGA, GPU)
>>> * Spawn cluster across different AZs
>>>
>>> I'll describe briefly our plan here, for further information we have a
>>> detailed spec under review. [1]
>>>
>>> To address this issue we introduce the node-group concept in magnum.
>>> Each node-group will correspond to a different heat stack. The master
>>> nodes can be organized in one or more stacks, so as the worker nodes.
>>>
>>> We investigate how to implement this feature. We consider the
>>> following:
>>> At the moment, we have three template files, cluster, master and
>>> node, and all three template files create one stack. The new
>>> generation of clusters will have a cluster stack containing
>>> the resources in the cluster template, specifically, networks, lbaas
>>> floating-ips etc. Then, the output of this stack would be passed as
>>> input to create the master node stack(s) and the worker nodes
>>> stack(s).
>>>
>>


> 3. Use of heat-agent
>>>
>>> A missing feature in magnum is the lifecycle operations in magnum. For
>>> restart of services and COE upgrades (upgrade docker, kubernetes and
>>> mesos) we consider using the heat-agent. Another option is to create a
>>> magnum agent or daemon like trove.
>>>
>>> 3.1
>>> For restart, a few systemctl restart or service restart commands will
>>> be issued. [2]
>>>
>>> 3.2
>>> For upgrades there are three scenarios:
>>> 1. 

[openstack-dev] [heat] Presence at PTG, Atlanta

2016-10-06 Thread Rabi Mishra
Hi All,

As you would probably know, the first Project Teams Gathering (PTG) will
happen in Atlanta from Feb 20-24, 2017.

Organizers are working on the event space layout and have asked all project
teams about their plans to join the event and whether they would
require/use a separate room.

As this is expected to be a substitute for the 'design summit' plus
'mid-cycle meet up' (I'm not sure if we had one before), I assume most of
the contributors would be planning to attend it.


We can respond with one of the options below on 'Whether the project team
is planning to gather for the event?'

1. Yes, Absolutely
2. Maybe, Still Considering it
3. No, Certainly Not

I think for us it's '1'. However, please let us know if you have a
different idea/opinion on this. We can also discuss it in the team meeting
this week.


-- 
Regards,
Rabi Misra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting times

2016-09-27 Thread Rabi Mishra
Hi All,

I think the current meeting times, i.e. 08:00 UTC and 15:00 UTC on
alternate weeks, are working well for us. Though 15:00 UTC is a little late
for me, I propose we continue with the same for this cycle.

With the geographical spread of the team, it's difficult to arrive at a
time that suits all. However, if you have any other/better suggestion, do
let me know.

-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Rabi Mishra
On Wed, Sep 28, 2016 at 6:21 AM, Jiahao Liang <
jiahao.li...@oneconvergence.com> wrote:

>
>
> On Tue, Sep 27, 2016 at 5:35 PM, Rabi Mishra <ramis...@redhat.com> wrote:
>
>> On Wed, Sep 28, 2016 at 1:01 AM, Zane Bitter <zbit...@redhat.com> wrote:
>>
>>> On 27/09/16 15:11, Jiahao Liang wrote:
>>>
>>>> Hello all,
>>>>
>>>> I am trying to use heat to launch lb resources with Octavia as backend.
>>>> The template I used is
>>>> from https://github.com/openstack/heat-templates/blob/master/hot/
>>>> lbaasv2/lb_group.yaml.
>>>>
>>>> Following are a few observations:
>>>>
>>>> 1. Even though Listener was created with ERROR status, heat will still
>>>> go ahead and mark it Creation Complete. As in the heat code, it only
>>>> check whether root Loadbalancer status is change from PENDING_UPDATE to
>>>> ACTIVE. And Loadbalancer status will be changed to ACTIVE anyway no
>>>> matter Listener's status.
>>>>
>>>
>>> That sounds like a clear bug.
>>>
>>
>> It seems we're checking for any exceptions from the client[1], before
>> checking for the
>> loadbalancer status. I could not see any other way to check the listener
>> status afterwards.
>> Probably a lbaas bug with octavia driver?
>>
>> Could you please raise a bug with the heat/lbaas logs?
>>
>
>> [1]  https://git.openstack.org/cgit/openstack/heat/tree/heat/engi
>> ne/resources/openstack/neutron/lbaas/listener.py#n183
>>
>
> In Octavia, creating resources (listeners, pools, etc.) is an async
> operation which wouldn't raise any exception.
> A normal workflow is:
> 1. The heat/neutron client sends a create api call to Octavia.
> 2. Octavia returns a response to the client and sets the resource to
> PENDING_CREATE (no exception will be thrown to the client if the api goes
> through).
> 3. If creation succeeds, Octavia sets that resource to ACTIVE; otherwise,
> sets it to ERROR.
>

Unlike the loadbalancer, I don't see any provisioning_status attribute for
the listener in the lbaas api[1].

[1]
http://git.openstack.org/cgit/openstack/neutron-lbaas/tree/neutron_lbaas/extensions/loadbalancerv2.py#n192
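
For reference, the check heat effectively performs reduces to polling the
root loadbalancer, something like this simplified neutronclient sketch:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=session)  # keystoneauth session

    def lb_ready(lb_id):
        lb = neutron.show_loadbalancer(lb_id)['loadbalancer']
        if lb['provisioning_status'] == 'ERROR':
            raise RuntimeError('loadbalancer %s is in ERROR' % lb_id)
        return lb['provisioning_status'] == 'ACTIVE'

    # A listener that goes to ERROR is invisible here: the root
    # loadbalancer can still report ACTIVE, so heat marks the create
    # as complete.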
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][heat][octavia] Heat engine doesn't detect lbaas listener failures

2016-09-27 Thread Rabi Mishra
On Wed, Sep 28, 2016 at 1:01 AM, Zane Bitter  wrote:

> On 27/09/16 15:11, Jiahao Liang wrote:
>
>> Hello all,
>>
>> I am trying to use heat to launch lb resources with Octavia as backend.
>> The template I used is
>> from https://github.com/openstack/heat-templates/blob/master/hot/
>> lbaasv2/lb_group.yaml.
>>
>> Following are a few observations:
>>
>> 1. Even though Listener was created with ERROR status, heat will still
>> go ahead and mark it Creation Complete. As in the heat code, it only
>> check whether root Loadbalancer status is change from PENDING_UPDATE to
>> ACTIVE. And Loadbalancer status will be changed to ACTIVE anyway no
>> matter Listener's status.
>>
>
> That sounds like a clear bug.
>

It seems we're checking for any exceptions from the client[1] before
checking for the loadbalancer status. I could not see any other way to
check the listener status afterwards. Probably an lbaas bug with the
octavia driver?

Could you please raise a bug with the heat/lbaas logs?

[1]
https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/openstack/neutron/lbaas/listener.py#n183

>
> 2. As heat engine wouldn't know the Listener's creation failure, it will
>> continue to create Pool\Member\Heatthmonitor on top of an Listener which
>> actually doesn't exist. It causes a few undefined behaviors.  As a
>> result, those LBaaS resources in ERROR state are unable to be cleaned up
>> with either normal neutron or heat api.
>>
>>
>> Is this a bug regarding LBaaS V2 for heat, or is it designed that way on
>> purpose?  In my opinion, it would be more natural if heat reports
>> CREATION_FAILURE if any of the LBaaS resources fails.
>>
>> Thanks,
>> Jiahao Liang
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Design sessions for Barcelona

2016-09-27 Thread Rabi Mishra
Hi All,

We have 3 fishbowl and 6 workroom (WR) slots available this summit for the
design sessions. We're collecting session ideas on this[1] etherpad. Please
add any other topics/ideas that you would like to be included.

We'll discuss these in this/next week team meetings to prioritize them.

[1] https://etherpad.openstack.org/p/ocata-heat-sessions

-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Nomination for Heat PTL

2016-09-12 Thread Rabi Mishra
Hi All,

I would like to nominate myself to take the role of Heat PTL for the Ocata
cycle.

I have been involved with the project for the last few years, first as a
contributor and then as a core reviewer, and have had the opportunity to
work with a great team and community. I strongly believe that with the ever
increasing adoption of Heat, we are well placed to live up to the
expectations of the end users and the OpenStack ecosystem at large, by
continuously evolving, encouraging participation and consensus building.

We achieved some significant milestones in the Newton cycle by making
convergence the default architecture and improving on some of the key
non-functional areas like stability and performance. I believe we have more
work to do in these areas, when projects like TripleO start using Heat with
convergence enabled.

We have the fortune of having a number of experienced and previous PTLs in
the team, which makes the job of a new PTL easier, and I don't see the PTL
as anything more than a communication bridge and a team catalyst.

I believe our focus in the Ocata cycle would include:

- Continuously improve the stability and performance
- Upgrades with zero downtime
- Increase/improve the test coverage without increasing the CI time
- Convergence Phase-II
- Validation improvements

And not to mention the run-of-the-mill activities:

- Ensuring that CI jobs are healthy (and we don't break other projects)
- On-time releases with efficient coordination and collaboration

I'm sure we would be able to achieve the goals we set for ourselves and
would be happy to work as the team catalyst.

Thank you.

Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-03-31 Thread Rabi Mishra
> The conditions function has been requested for a long time, and there have
> been several previous discussions, which all ended up in debating the
> implementation, and no result.
> https://review.openstack.org/#/c/84468/3/doc/source/template_guide/hot_spec.rst
> https://review.openstack.org/#/c/153771/1/specs/kilo/resource-enabled-meta-property.rst
> 
> I think we should focus on the simplest possible way(same as AWS) to meet the
> user requirement, and follows the AWS, there is no doubt that we will get a
> very good compatibility.
> And the patches are good in-progress. I don't want everything back to zero:)
> https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/support-conditions-function
> 
> In the example you given of 'variables', seems there's no relation with
> resource/output/property conditions, it seems as another function which
> likes really 'variables' to used in template.

If I understand the suggestion correctly, the only relation it has with
conditions is that conditions are nothing but boolean variables.

conditions:
  for_prod: {equals: [{get_param: env_type}, 'prod']}

would be

variables:
  for_prod: {equals: [{get_param: env_type}, 'prod']}


then you can use it in your example as:

floating_ip:
  type: OS::Nova::FloatingIP
  condition: {get_variable: for_prod}

So the suggestion is to make it more generic, so that it can be used for
other things and reduce some of the verbosity in the templates.

However, I think the term 'variable' makes it sound more like a programming
thing. Maybe we can use something better. Personally, though, I kind of
like the idea.
 
> -----Original Message-----
> From: Thomas Herve [mailto:the...@redhat.com]
> Sent: 31 March 2016 19:55
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Heat] Re-evaluate conditions specification
> 
> On Thu, Mar 31, 2016 at 10:40 AM, Thomas Herve  wrote:
> > Hi all,
> >
> > As the patches for conditions support are incoming, I've found
> > something in the code (and the spec) I'm not really happy with. We're
> > creating a new top-level section in the template called "conditions"
> > which holds names that can be reused for conditionally creating
> > resource.
> >
> > While it's fine and maps to what AWS does, I think it's a bit
> > short-sighted and limited. What I have suggested in the past is to
> > have a "variables" (or whatever you want to call it) section, where
> > one can declare names and values. Then we can add an intrinsic
> > function to retrieve data from there, and use that for examples for
> > conditions.
> 
> I was asked to give examples, here's at least one that can illustrate what I
> meant:
> 
> parameters:
>host:
>   type: string
>port:
>   type: string
> 
> variables:
>endpoint:
>   str_replace:
> template:
>http://HOST:PORT/
> params:
>HOST: {get_param: host}
>PORT: {get_param: port}
> 
> resources:
>config1:
>   type: OS::Heat::StructuredConfig
>   properties:
> config:
>hosts: [{get_variable: endpoint}]
> 
> --
> Thomas
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-23 Thread Rabi Mishra
> On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
> >Hello,
> >It looks similar on issue, which was discussed here [1]
> >I suppose, that the root cause is incorrect using get_attr for your
> >case.
> >Probably you got "list"  instead of "string".
> >F.e. if I do something similar:
> >outputs:
> >  rg_1:
> >    value: {get_attr: [rg_a, rg_a_public_ip]}
> >  rg_2:
> >    value: {get_attr: [rg_a, rg_a_public_ip, 0]}
> >  rg_3:
> >    value: {get_attr: [rg_a]}
> >  rg_4:
> >    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
> >where rg_a is also resource group which uses custom template as
> >resource.
> >the custom template has output value rg_a_public_ip.
> >The output for it looks like [2]
> >So as you can see, that in first case (like it is used in your example),
> >get_attr returns list with one element.
> >rg_2 is also wrong, because it takes first symbol from sting with IP
> >address.
> 
> Shouldn't rg_2 and rg_4 be equivalent?

They are the same for template version 2013-05-23. However, they behave
differently from the next version (2014-10-16) onward and return a list of
characters. I think this is due to the fact that the `get_attr` function
mapping was changed in 2014-10-16.


2013-05-23 - https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L70
2014-10-16 - https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L291

This makes me wonder why a template author would do something like
{get_attr: [rg_a, rg_a_public_ip, 0]} when they can easily do
{get_attr: [rg_a, resource.0.rg_a_public_ip]} or
{get_attr: [rg_a, resource.0, rg_a_public_ip]} for specific resource
attributes.

I understand that {get_attr: [rg_a, rg_a_public_ip]} can be useful when we
just want to use the list of attributes.
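
To make the difference concrete, a small outputs sketch (with rg_a as in
Sergey's example; behaviour per the discussion above):

    outputs:
      all_ips:
        # one IP per member of the group
        value: {get_attr: [rg_a, rg_a_public_ip]}
      first_ip:
        # the first member's IP, as a string
        value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
      first_char:
        # from 2014-10-16 onward this indexes into the string instead
        # of selecting the first member
        value: {get_attr: [rg_a, rg_a_public_ip, 0]}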


> 
> {get_attr: [rg_a, rg_a_public_ip]} should return a list of all
> rg_a_public_ip attributes (one list item for each resource in the group),
> then the 0 should select the first item from that list?
> 
> If it's returning the first character of the first element, that sounds
> like a bug to me?
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Rabi Mishra
> Hi Heaters,
> 
> The Mitaka release is close to finish, so it's good time for reviewing
> results of work.
> One of this results is analyze contribution results for the last release
> cycle.
> According to the data [1] we have one good candidate for nomination to
> core-review team:
> Oleksii Chuprykov.
> During this release he showed significant value of review metric.
> His review were valuable and useful. Also He has enough level of
> expertise in Heat code.
> So I think he is worthy to join to core-reviewers team.
> 
> I ask you to vote and decide his destiny.
>  +1 - if you agree with his candidature
>  -1  - if you disagree with his candidature
> 
> [1] http://stackalytics.com/report/contribution/heat-group/120

+1
 
> --
> Regards,
> Sergey.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] Bug 1544227

2016-02-10 Thread Rabi Mishra
Hi,

We did some analysis of the issue you are facing.

One of the issues from heat side is, we convert None(singleton) resource 
references 
to 'None'(string) and the translation logic is not ignoring them. Though we 
don't
apply translation rules to resource references[1].We don't see this issue after
this patch[2].

The issue you mentioned below with respect to SD and SDG, does not look like
something to do with this patch. I also see the similar issues when you tested 
with
the reverted patch[3].

I also noticed that there are some 404 from neutron in the engine logs[4] for 
the test patch. 
I did not notice them when I tested locally with the templates you had provided.


Having said that, we can still revert the patch, if that resolves your issue. 

[1] 
https://github.com/openstack/heat/blob/master/heat/engine/translation.py#L234
[2] https://review.openstack.org/#/c/278576/
[3]http://logs.openstack.org/78/278778/1/check/gate-functional-dsvm-magnum-k8s/ea48ba2/console.html#_2016-02-11_03_07_49_039
[4] 
http://logs.openstack.org/78/278578/1/check/gate-functional-dsvm-magnum-swarm/51eeb3b/logs/screen-h-eng.txt


Regards,
Rabi

> Hi Heat team,
> 
> As mentioned in IRC, magnum gate broke with bug 1544227 . Rabi submitted on a
> fix (https://review.openstack.org/#/c/278576/), but it doesn't seem to be
> enough to unlock the broken gate. In particular, it seems templates with
> SoftwareDeploymentGroup resource failed to complete (I have commented on the
> review above for how to reproduce).
> 
> Right now, I prefer to merge the reverted patch
> (https://review.openstack.org/#/c/278575/) to unlock our gate immediately,
> unless someone can work on a quick fix. We appreciate the help.
> 
> Best regards,
> Hongbin
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] spec-lite for simple feature requests

2016-01-20 Thread Rabi Mishra
Hi All,

As discussed in the team meeting, below is the proposed spec-lite process
for simple feature requests. This is already being used in the Glance
project. Feedback/comments/concerns are welcome before we update the
contributor docs with this :).


tl;dr - a spec-lite is a simple feature request created as a bug with
enough details and with a `spec-lite` tag. Once triaged with status
'Triaged' and importance changed to 'Wishlist', it's approved. Status
'Won't fix' signifies the request is rejected and 'Invalid' means it would
require a full spec.


Heat Spec Lite
--

Lite specs are small feature requests tracked as Launchpad bugs, with
importance 'Wishlist' and tagged with the 'spec-lite' tag. These allow for
submission and review of these feature requests before code is submitted.

These can be used for simple features that don’t warrant a detailed spec to be 
proposed, evaluated, and worked on. The team evaluates these requests as it 
evaluates specs. Once a bug has been approved as a Request for Enhancement 
(RFE), it’ll be targeted for a release.


The workflow for the life of a spec-lite in Launchpad is as follows:

1. File a bug with a small summary of what the request change is and tag it as 
spec-lite.
2. The bug is triaged and importance changed to Wishlist.
3. The bug is evaluated and marked as Triaged to announce approval or to Won’t 
fix to announce rejection or Invalid to request a full spec.
4. The bug is moved to In Progress once the code is up and ready to review.
5. The bug is moved to Fix Committed once the patch lands.

In summary the states are:

New:        This is where a spec-lite starts, as filed by the community.
Triaged:    Drivers - Move to this state to mean, "you can start working on it".
Won't Fix:  Drivers - Move to this state to reject a lite-spec.
Invalid:    Drivers - Move to this state to request a full spec for this request.

Lite spec Submission Guidelines
---

When a bug is submitted, there are two fields that must be filled: ‘summary’ 
and ‘further information’. The ‘summary’ must be brief enough to fit in one 
line.

The ‘further information’ section must be a description of what you would like 
to see implemented in heat. The description should provide enough details for a 
knowledgeable developer to understand what is the existing problem and what’s 
the proposed solution.

Add the spec-lite tag to the bug.
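
For example, a (purely hypothetical) spec-lite bug might look like:

    Summary: Support tags for OS::Nova::Server
    Further information: The nova API allows setting tags on servers,
    but the OS::Nova::Server resource currently has no way to specify
    them. Add an optional 'tags' property that is passed through on
    create/update.
    Tags: spec-lite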


Thanks,
Rabi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2016-01-04 Thread Rabi Mishra
> On Mon, Dec 28, 2015 at 01:52:45PM +0100, Julien Danjou wrote:
> > On Mon, Dec 28 2015, Rabi Mishra wrote:
> > 
> > > Yes, this has started happening after keystone/trusts config changes by
> > > the
> > > devstack patch you mentioned. I've no idea how this can be fixed. As
> > > Steve
> > > Hardy is away, either someone with keystone knowledge should fix this or
> > > we
> > > merge the devstack patch revert[3] that I tested few days ago.
> > 
> > Why don't you just revert the devstack change?
> > 
> > This is way saner than disabling the test! Steve will be able to rework
> > his initial change when he come back.
> 
> Firstly, I'm very sorry for the breakage here, and I agree that in general
> a quick-revert is the best policy when something like this happens.
> 
> I'm a little unclear how this occurred tho, since I had a clear CI run on
> this patch:
> 
> https://review.openstack.org/#/c/256315/
> 
> Which had a Depends-On to the devstack change, anyone know why that didn't
> fail with the CeilometerAlarmTest.test_alarm before the devstack change
> merged?

It seems the test was skipped[1], as it was disabled for another bug[2].

[1] 
http://logs.openstack.org/15/256315/2/check/gate-heat-dsvm-functional-orig-mysql/bffccd5/console.html.gz#_2015-12-14_23_33_13_394
[2] https://bugs.launchpad.net/heat/+bug/1523337

> Regardless, we've got several fixes now which can be considered:
> 
> 1. Rabi's devstack revert:
> 
> https://review.openstack.org/#/c/261310/
> 
> 2. Fix the actual issue in heat:
> 
> https://review.openstack.org/#/q/topic:bug/1529058
> 
> Given that the review latency on Devstack is quite high, it seems possible
> we'll land (2) before (1) lands, but if not then I'll re-propose it and
> hopefully figure out where I went wrong with Depends-On to confirm all is
> fixed before it lands.
> 
> Also, there's this fix:
> 
> https://review.openstack.org/#/c/261398/
> 
> I've not yet confirmed if this also fixes the issue referencing the default
> domain which broke the alarm tests.
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2016-01-04 Thread Rabi Mishra
> 
> Hi,
> 
> >> Which had a Depends-On to the devstack change, anyone know why that
> >> didn't
> >> fail with the CeilometerAlarmTest.test_alarm before the devstack
> >> change
> >> merged?
> > 
> > It seems the test was skipped[1], as it was disabled for another
> > bug[2].
> > 
> > [1]
> > http://logs.openstack.org/15/256315/2/check/gate-heat-dsvm-functional-orig-mysql/bffccd5/console.html.gz#_2015-12-14_23_33_13_394
> > [2] https://bugs.launchpad.net/heat/+bug/1523337
> 
> This is unrelated; it is an old issue. This bug has already been
> fixed in Aodh[1], and Heat has re-enabled the ceilometer tests [2] just
> after the fix was merged.

Sure, the issues are unrelated. However, the patch that re-enabled the 
ceilometer test landed on 16th Dec[1], and the CI run for Steve's patch with 
'Depends-On' was on 15th Dec[2], while the test was still disabled. :)

Hope this clarifies why we missed the regression.

[1] https://review.openstack.org/#/c/254081/
[2] https://review.openstack.org/#/c/256315/
 
> I think we just forgot to set the status of #1523337 (heat side) when
> [2] was merged. (I have just set it)
> 
> [1] https://review.openstack.org/#/c/254078/
> [2]
> http://git.openstack.org/cgit/openstack/heat/commit/?id=53e16655ab899f56bd0fd5d4997bb27a76be53df
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2015-12-28 Thread Rabi Mishra
> Hi there,
> 
> The gate for telemetry projects is broken:
> 
>   https://bugs.launchpad.net/heat/+bug/1529583
> 
> The failure appears in Heat from what I understand:
> 
>  BadRequest: Expecting to find domain in project - the server could not
>  comply with the request since it is either malformed or otherwise
>  incorrect. The client is assumed to be in error. (HTTP 400)
>  (Request-ID: req-3f39cc92-c356-4b92-9ab8-401738c8d31d

Hi Julien,

We're already tracking this with a bug[1] for heat. As a temporary fix, we've 
disabled the ceilometer tests in the heat dsvm gate jobs[2].

Yes, this has started happening after keystone/trusts config changes by the 
devstack patch you mentioned. I've no idea how this can be fixed. As Steve 
Hardy is away, either someone with keystone knowledge should fix this or we 
merge the devstack patch revert[3] that I tested a few days ago.


[1] https://bugs.launchpad.net/heat/+bug/1529058
[2] https://review.openstack.org/#/c/261272/
[3] https://review.openstack.org/#/c/261308/

Regards,
Rabi
> 
> I've dug a bit, and I *think* that the problem lies in this recent
> devstack patch:
> 
>   https://review.openstack.org/#/c/254755/
> 
> Could someone from Heat tell me if I'm a good Sherlock or if I am
> completely out? :)
> 
> Cheers,
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Rico Lin for heat-core

2015-12-07 Thread Rabi Mishra
- Original Message -
> Hi all.
> 
> I'd like to nominate Rico Lin for heat-core. He did an awesome job
> providing useful and valuable reviews. Also, his contribution is really
> high [1].
> 
> [1] http://stackalytics.com/report/contribution/heat-group/60
> 
> Heat core-team, please vote with:
> +1 - if you agree
> -1 - if you disagree

+1

> 
> --
> Regards,
> Sergey.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team nomination

2015-10-21 Thread Rabi Mishra
- Original Message -
> Congrats Rabi and Peter :)
> 
> -Original Message-
> From: Sergey Kraynev [mailto:skray...@mirantis.com]
> Sent: Wednesday, October 21, 2015 12:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Heat] core team nomination
> 
> Rabi, Peter, my congratulations. You were elected by a unanimous vote :) I
> will add you to the heat-core group. Enjoy and stay on this course! :)

Thank you all. I'll try and maintain my current level of activity.

> 
> On 21 October 2015 at 09:45, Qiming Teng <teng...@linux.vnet.ibm.com> wrote:
> > +1 to both.
> >
> > Qiming
> >
> > On Tue, Oct 20, 2015 at 04:38:12PM +0300, Sergey Kraynev wrote:
> >> I'd like to propose new candidates for heat core-team:
> >> Rabi Mishra
> >> Peter Razumovsky
> >>
> >> According to the statistics, both candidates made a big effort in Heat
> >> as reviewers and as contributors [1][2].
> >> They were involved in Heat community work during the last several
> >> releases and showed a good understanding of the Heat code.
> >> I think that they are ready to become core-reviewers.
> >>
> >> Heat-cores, please vote with +/- 1.
> >>
> >> [1] http://stackalytics.com/report/contribution/heat-group/180
> >> [2] http://stackalytics.com/?module=heat-group=person-day
> >> --
> >> Regards,
> >> Sergey.
> >>
> >> _
> >> _ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Regards,
> Sergey.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Regards,
Rabi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Use block_device_mapping_v2 for swap?

2015-09-02 Thread Rabi Mishra


Rabi Mishra
+91-7757924167

- Original Message -
> On 31/08/15 11:19, TIANTIAN wrote:
> > 
> > 
> > At 2015-08-28 21:48:11, "marios" <mar...@redhat.com> wrote:
> >> I am working with the OS::Nova::Server resource and looking at the tests
> >> [1], it should be possible to just define 'swap_size' and get a swap
> >> space created on the instance:
> >>
> >>  NovaCompute:
> >>type: OS::Nova::Server
> >>properties:
> >>  image:
> >>{get_param: Image}
> >>  ...
> >>  block_device_mapping_v2:
> >>- swap_size: 1
> >>
> >> When trying this, the first thing I hit is a validation code nit that is
> >> already fixed @ [2] (I have slightly older heat) and I applied that fix.
> >> However, when I try and deploy with a Flavor that has a 2MB swap for
> >> example, and with the above template, I still end up with a 2MB swap.
> >>
> >> Am I right in my assumption that the above template is the equivalent of
> >> specifying --swap on the nova boot cli (i.e. should this work?)? I am
> >> working with the Ironic nova driver btw and when deploying using the
> >> nova cli using --swap works; has anyone used/tested this property
> >> recently? I'm not sure if this is worth filing a bug for yet.
> > 
> >>
> > --According to the code of heat and novaclient, the above template is
> > the equivalent of specifying --swap on the nova boot cli:
> > https://github.com/openstack/python-novaclient/blob/master/novaclient/v2/shell.py#L142-L146
> > https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L822-L831
> > 
> > 
> > But I don't know much about nova, and I am not sure how nova behaves if
> > different swap sizes are specified on the Flavor and the BDM.
> 
> Hey TianTian, thanks very much for the pointers and sanity check. Yeah I
> think it is intended to work that way (e.g. the tests on the heatclient
> also cover this as per my original), I was mostly looking for 'yeah did
> this recently worked ok for me'.

Hi Marios,

This seems to work fine with master, and I do see a swap partition created 
with the size of the 'swap_size' specified in the template.

[fedora@test-stack-novacompute-nwownbcokzra ~]$ swapon -s
Filename    Type        Size    Used    Priority
/dev/vdb    partition   524284  0       -1
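
For anyone wondering what the shorthand expands to: heat and novaclient turn 
'swap_size' into a regular BDM v2 entry, roughly like the following (a sketch 
based on the code linked above; the exact field handling may differ between 
versions):

  block_device_mapping_v2:
  - source_type: blank
    destination_type: local
    guest_format: swap
    boot_index: -1
    volume_size: <swap_size>
    delete_on_termination: true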


I did face a novaclient issue with python-novaclient==2.26.0, though.

That issue has been resolved by the commit below:
https://github.com/openstack/python-novaclient/commit/0a8fbaa48083ba2e79abf67096efa59fa18b

When specifying a swap_size larger than the flavor permits, we get 
'CREATE_FAILED' with the following error, so I assume it works as expected.


resources.NovaCompute: Swap drive requested is larger than instance type 
allows. (HTTP 400) (Request-ID: req-276150f5-082d-4c00-bb73-645c59e52727)


Thanks,
Rabi

> WRT the different swap size on flavor, in this case what is on the
> flavor becomes the effective maximum you can specify (and can override
> with --swap on the nova cli).
> 
> thanks! marios
> 
> > 
> > 
> >> thanks very much for reading! marios > >[1]
> >> >https://github.com/openstack/heat/blob/a1819ff0696635c516d0eb1c59fa4f70cae27d65/heat/tests/nova/test_server.py#L2446
> >> >[2]
> >> >https://review.openstack.org/#/q/I2c538161d88a51022b91b584f16c1439848e7ada,n,z
> >> >
> >> >__
> >> >OpenStack Development Mailing List (not for usage questions)
> >> >Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Kubernetes AutoScaling with Heat AutoScalingGroup and Ceilometer

2015-04-28 Thread Rabi Mishra

- Original Message -
 On Mon, Apr 27, 2015 at 12:28:01PM -0400, Rabi Mishra wrote:
  Hi All,
  
  Deploying Kubernetes(k8s) cluster on any OpenStack based cloud for
  container based workload is a standard deployment pattern. However,
  auto-scaling this cluster based on load would require some integration
  between k8s and OpenStack components. While looking at the option of
  leveraging Heat ASG to achieve autoscaling, I came across a few requirements
  that the list can discuss and arrive at the best possible solution.
  
  A typical k8s deployment scenario on OpenStack would be as below.
  
  - Master (single VM)
  - Minions/Nodes (AutoScalingGroup)
  
  AutoScaling of the cluster would involve both scaling of minions/nodes and
  scaling Pods (ReplicationControllers).
  
  1. Scaling Nodes/Minions:
  
  We already have utilization stats collected at the hypervisor level, as
  ceilometer compute agent polls the local libvirt daemon to acquire
  performance data for the local instances/nodes.
 
 I really doubt if those metrics are useful enough to trigger a scaling
 operation. My suspicion is based on two assumptions: 1) autoscaling
 requests should come from the user application or service, not from the
 controller plane, the application knows best whether scaling is needed;
 2) hypervisor level metrics may be misleading in some cases. For
 example, it cannot give an accurate CPU utilization number in the case
 of CPU overcommit, which is a common practice.

I agree that getting correct utilization statistics is complex with virtual 
infrastructure. However, I think physical+hypervisor metrics (collected by the 
compute agent) should be a good starting point.
 
  Also, Kubelet (running on the node) collects the cAdvisor stats. However,
  cAdvisor stats are not fed back to the scheduler at present and scheduler
  uses a simple round-robin method for scheduling.
 
 It looks like a multi-layer resource management problem which needs a
 holistic design. I'm not quite sure if scheduling at the container
 layer alone can help improve resource utilization or not.

The k8s scheduler is going to improve over time to use the cAdvisor/heapster 
metrics for better scheduling. IMO, we should leave that for k8s to handle.

My point is about getting those metrics to ceilometer, either from the nodes 
or from the scheduler/master.
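
To make this concrete, an agent on the master could push heapster-derived 
samples with python-ceilometerclient, roughly like the sketch below. The meter 
name and metadata keys are made up, and credential/trust handling is glossed 
over:

  from ceilometerclient import client

  cclient = client.get_client(
      2, os_username='admin', os_password='secret',
      os_tenant_name='admin', os_auth_url='http://keystone:5000/v2.0')

  # Push one CPU utilization sample for a minion node. The meter name
  # 'k8s.node.cpu.util' and the metadata keys are hypothetical; the
  # metadata is what would let alarms match samples to the right
  # stack/ASG.
  cclient.samples.create(
      counter_name='k8s.node.cpu.util',
      counter_type='gauge',
      counter_unit='%',
      counter_volume=73.5,
      resource_id='minion-0',
      resource_metadata={'stack_id': '<stack-uuid>',
                         'asg_id': '<asg-resource-id>'})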

  Req 1: We would need a way to push stats from the kubelet/cAdvisor to
  ceilometer, directly or via the master (using heapster). Alarms based on
  these stats can then be used to scale up/down the ASG.
 
 To send a sample to ceilometer for triggering autoscaling, we will need
 some user credentials to authenticate with keystone (even with trusts).
 We need to pass the project-id in and out so that ceilometer will know
 the correct scope for evaluation. We also need a standard way to tag
 samples with the stack ID and maybe also the ASG ID. I'd love to see
 this done transparently, i.e. no matching_metadata or query confusions.
 
  There is an existing blueprint[1] for an inspector implementation for
  the docker hypervisor (nova-docker). However, we would probably require an
  agent running on the nodes or the master to send the cAdvisor or heapster
  stats to ceilometer. I've seen some discussions on the possibility of
  leveraging keystone trusts with ceilometer client.
 
 An agent is needed, definitely.
 
  Req 2: The AutoScaling Group is expected to notify the master that a node
  has been added/removed. Before removing a node, the master/scheduler has to
  mark the node as unschedulable.
 
 A little bit confused here ... are we scaling the containers or the
 nodes or both?

We would only be focusing on the nodes. However, adding/removing nodes without 
the k8s master/scheduler knowing about it (so that it can schedule pods or mark 
them unschedulable) would be useless.

  Req 3: Notify containers/pods that the node will be removed, so that they
  can stop accepting any traffic and persist data. It would also require a
  cooldown period before the node removal.
 
 There have been some discussions on sending messages, but so far I don't
 think there is a conclusion on the generic solution.
 
 Just my $0.02.

Thanks Qiming.

 BTW, we have been looking into similar problems in the Senlin project.

Great. We can probably discuss these during the Summit. I assume there is 
already a session on Senlin planned, right?

 
 Regards,
   Qiming
 
  Both requirements 2 and 3 would probably require generating scaling event
  notifications/signals for the master and containers to consume, and
  probably some ASG lifecycle hooks.
  
  
  Req 4: In case there are too many 'pending' pods to be scheduled, the
  scheduler would signal the ASG to scale up. This is similar to Req 1.
  
  
  2. Scaling Pods
  
  Currently manual scaling of pods is possible by resizing
  ReplicationControllers. k8s community is working on an abstraction,
  AutoScaler[2] on top of ReplicationController(RC) that provides
  intention/rule based autoscaling. There would be a requirement

[openstack-dev] [heat] Kubernetes AutoScaling with Heat AutoScalingGroup and Ceilometer

2015-04-27 Thread Rabi Mishra
Hi All,

Deploying a Kubernetes (k8s) cluster on any OpenStack based cloud for container 
based workloads is a standard deployment pattern. However, auto-scaling this 
cluster based on load would require some integration between k8s and OpenStack 
components. While looking at the option of leveraging a Heat ASG to achieve 
autoscaling, I came across a few requirements that the list can discuss to 
arrive at the best possible solution.

A typical k8s deployment scenario on OpenStack would be as below.

- Master (single VM)
- Minions/Nodes (AutoScalingGroup)

AutoScaling of the cluster would involve both scaling of minions/nodes and 
scaling Pods (ReplicationControllers).

1. Scaling Nodes/Minions:

We already have utilization stats collected at the hypervisor level, as the 
ceilometer compute agent polls the local libvirt daemon to acquire performance 
data for the local instances/nodes. Also, the Kubelet (running on the node) 
collects the cAdvisor stats. However, the cAdvisor stats are not fed back to 
the scheduler at present, and the scheduler uses a simple round-robin method 
for scheduling.

Req 1: We would need a way to push stats from the kubelet/cAdvisor to 
ceilometer, directly or via the master (using heapster). Alarms based on these 
stats can then be used to scale up/down the ASG.
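
On the heat side, Req 1 would look roughly like the template snippet below 
(a sketch; 'minion.yaml' is a hypothetical nested template for one node, and 
the meter name is whatever the agent pushes):

  minion_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 10
      resource:
        type: minion.yaml

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: minion_group}
      scaling_adjustment: 1
      cooldown: 60

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: k8s.node.cpu.util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
      - {get_attr: [scale_up_policy, alarm_url]}

A scale-down policy/alarm would be symmetric, and in practice the alarm would 
also need matching_metadata (or similar) to scope the samples to this group's 
nodes.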

There is an existing blueprint[1] for an inspector implementation for the 
docker hypervisor (nova-docker). However, we would probably require an agent 
running on the nodes or the master to send the cAdvisor or heapster stats to 
ceilometer. I've seen some discussions on the possibility of leveraging 
keystone trusts with the ceilometer client.

Req 2: The AutoScaling Group is expected to notify the master that a node has 
been added/removed. Before removing a node, the master/scheduler has to mark 
the node as unschedulable.

Req 3: Notify containers/pods that the node will be removed, so that they can 
stop accepting any traffic and persist data. It would also require a cooldown 
period before the node removal.

Both requirements 2 and 3 would probably require generating scaling event 
notifications/signals for the master and containers to consume, and probably 
some ASG lifecycle hooks.


Req 4: In case there are too many 'pending' pods to be scheduled, the scheduler 
would signal the ASG to scale up. This is similar to Req 1.
 

2. Scaling Pods

Currently, manual scaling of pods is possible by resizing 
ReplicationControllers. The k8s community is working on an abstraction, 
AutoScaler[2], on top of ReplicationController (RC), that provides 
intention/rule-based autoscaling. There would be a requirement to collect 
cAdvisor/Heapster stats to signal the AutoScaler too. This is probably beyond 
the scope of OpenStack.

Any thoughts and ideas on how to realize this use-case would be appreciated.


[1] 
https://review.openstack.org/gitweb?p=openstack%2Fceilometer-specs.git;a=commitdiff;h=6ea7026b754563e18014a32e16ad954c86bd8d6b
[2] 
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md

Regards,
Rabi Mishra


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Custom Resource

2014-04-15 Thread Rabi Mishra
 IIRC implementing something like this had been discussed quite a while back.
 I think we discussed the possibility of using web hooks and a defined
 api/payload in place of the SNS/SQS type stuff. I don't think it ever made
 it to the backlog, but I'd be happy to discuss further design and maybe add
 a design session to the summit if you're unable to make it.

Thanks. As suggested, I've added a design session for this. 

http://summit.openstack.org/cfp/details/308

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Custom Resource

2014-04-14 Thread Rabi Mishra
Hi All,

Recently, I've come across some requirements for external 
integrations/resources that can be managed like stack resources 
(create, update, delete) from the stack.

1. Adding/Removing DNS records for instances created as part of a stack.
2. Integration with IPAM solutions for allocating/releasing IPs (IP allocation 
pools for provider networks)
3. Other custom integrations for dynamic parameters to stacks.

IMHO, it would probably make sense to create a custom resource like 'AWS CFN 
Custom Resource'[1] that can be used for these kinds of use cases. I have 
created a blueprint[2] for this.


Please let me know your thoughts on this.


Regards,
Rabi Mishra


[1]http://blogs.aws.amazon.com/application-management/post/Tx2FNAPE4YGYSRV/Customers-CloudFormation-and-Custom-Resources
[2]https://blueprints.launchpad.net/heat/+spec/implement-custom-resource

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Custom Resource

2014-04-14 Thread Rabi Mishra
Hi Steve,

Thanks a lot for your prompt response. I can't agree more that the CFN custom 
resource implementation is complex, with its dependency on SNS and SQS. 
However, it decouples the implementation of the resource life-cycle from the 
resource itself. IMO, this has some advantages from the template complexity and 
flexibility point of view.

On choices you mentioned:

1. Custom Python Plugins - I do think this is the best approach for my 
use-cases. However, asking a customer to develop and maintain custom plugins 
can be too much to ask (asking their 3rd party tool vendors to do it is even 
harder), compared to plugging in some of their existing infra script snippets.

2. Provider Resource - The use of environment files for mapping nested 
templates and exchanging parameters/attributes looks sleek. However, I have yet 
to understand how to wrap code snippets (many of them existing scripts) for the 
resource life-cycle in the nested template to achieve these use-cases.

With the CFN custom resource, all that's required is adding some bits of code 
to the existing scripts to parse the JSON snippets based on the stack 
life-cycle method.

However, my understanding of what's possible with the Provider Resource is 
limited at the moment. I'll spend more time and go through it before coming 
back with an answer to the use-case feasibility and constraints.
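
As far as I understand it so far, the mechanics would be something like the 
following (names are made up). An environment file maps a custom type to a 
template:

  # env.yaml
  resource_registry:
    Custom::DnsRecord: dns_record.yaml

Here 'dns_record.yaml' would be a normal heat template whose parameters become 
the properties of Custom::DnsRecord and whose outputs become its attributes. 
What's not yet obvious to me is where the existing infra scripts for the 
create/delete logic would hook in.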


Regards,
Rabi Mishra

- Original Message -
 Hi Rabi,
 
 On Mon, Apr 14, 2014 at 06:44:44AM -0400, Rabi Mishra wrote:
  Hi All,
  
  Recently, I've come across some requirements for external
  integrations/resources that can be managed like stack resources
  (create,update,delete) from the stack.
  
  1. Adding/Removing DNS records for instances created as part of a stack.
  2. Integration with IPAM solutions for allocating/releasing IPs (IP
  allocation pools for provider networks)
  3. Other custom integration for dynamic parameters to stacks.
  
  IMHO, it would probably make sense to create a custom resource like 'AWS
  CFN Custom Resource'[1] that can be used for these kind of use cases. I
  have created a blueprint[2] for this.
 
 Heat already has a couple of ways for custom resources to be defined.
 
 The one which probably matches your requirements best is the provider
 resource interface, which allows template defined resources to be mapped
 to user-definable resource types, via an environment file:
 
 http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html
 http://docs.openstack.org/developer/heat/template_guide/environment.html
 
 Provider resources can be defined by both users and deployers (who can use
 templates to, e.g., wrap an existing resource with something like DNS
 registration logic, and expose the type transparently to the end-user)
 
 For deployer requirements not satisfied by provider resources (for example
 integration with third-party services), Heat also provides a python plugin
 API, which enables deployers to create their own resource plugins as
 needed:
 
 http://docs.openstack.org/developer/heat/pluginguide.html
 
 Personally, I think these two models provide sufficient flexibility that we
 should be able to avoid the burden of maintaining a CFN compatible custom
 resource plugin API.  I've not looked at it in detail, but the CFN model
 you refer to has always seemed pretty complex to me, and seems like
 something we don't necessarily want to replicate.
 
 If there are gaps where things are not yet possible via the provider
 resource interface, I'd rather discuss incremental improvements to that
 instead of wholesale reimplementation of something compatible with AWS.
 
 Can you provide any more feedback on your use-cases, and whether the
 interfaces I linked can be used to satisfy them?
 
 Steve
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-25 Thread Rabi Mishra

- Original Message -
 From: Mark McClain mmccl...@yahoo-inc.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, February 26, 2014 3:43:59 AM
 Subject: Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS 
 Pool
 
 
 On Feb 25, 2014, at 1:06 AM, Rabi Mishra ramis...@redhat.com wrote:
 
  Hi All,
  
  'subnet_id' attribute of LBaaS Pool resource has been documented as "The
  network that pool members belong to".
  
  However, with the 'HAProxy' driver, it is possible to add members belonging
  to different subnets/networks to an LBaaS Pool.
  
 Rabi-
 
 The documentation is a bit misleading here.  The subnet_id in the pool is
 used to create the port that the load balancer instance uses to connect with
 the members.

I assume, then, that the validation in horizon that forces the VIP IP to be 
from this pool subnet is incorrect, i.e. the VIP address can be from a 
different subnet.
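
For instance, something like the sketch below (using python-neutronclient; 
all IDs are made up) would then be expected to work, with the VIP on a subnet 
different from the pool's:

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://keystone:5000/v2.0')

  # Create a VIP whose subnet_id differs from the pool's subnet_id.
  neutron.create_vip({'vip': {'name': 'http-vip',
                              'protocol': 'HTTP',
                              'protocol_port': 80,
                              'subnet_id': '<some-other-subnet-id>',
                              'pool_id': '<pool-id>'}})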

 
 mark
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-24 Thread Rabi Mishra
| session_persistence |                                      |
| status              | PENDING_CREATE                       |
| status_description  |                                      |
| subnet_id           | b1557101-c8f1-415a-846d-6d165a8e8fc2 |
| tenant_id           | c46ae2b06ee54d06828c346f77fb5628     |
+---------------------+--------------------------------------+

[stack@devstack-rabi devstack]$ neutron lb-vip-list
+--------------------------------------+----------+-----------+----------+----------------+--------+
| id                                   | name     | address   | protocol | admin_state_up | status |
+--------------------------------------+----------+-----------+----------+----------------+--------+
| 409e72e6-5a3c-4a7b-be0b-6a8784193dfc | http-vip | 10.10.0.4 | HTTP     | True           | ACTIVE |
+--------------------------------------+----------+-----------+----------+----------------+--------+


Regards,
Rabi Mishra 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev