Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2016-12-02 Thread Lingxian Kong
Hi, Takashi,

Thanks for working on this project. We (Catalyst Cloud) also provide VPNaaS
in our OpenStack-based public cloud, so maybe we can also provide help from
our side.


Cheers,
Lingxian Kong (Larry)

On Mon, Nov 28, 2016 at 5:50 PM, Takashi Yamamoto 
wrote:

> On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
> > Hi
> >
> > As of today, the project neutron-vpnaas is no longer part of the neutron
> > governance. This was a decision reached after the project saw a dramatic
> > drop in active development over a prolonged period of time.
> >
> > What does this mean in practice?
> >
> > From a visibility point of view, release notes and documentation will no
> > longer appear on openstack.org as of Ocata going forward.
> > No more releases will be published by the neutron release team.
> > The neutron team will stop proposing fixes for the upstream CI, except
> > on a voluntary basis (e.g. I still felt like proposing [2]).
> >
> > How does it affect you, the user or the deployer?
> >
> > You can continue to use vpnaas and its CLI via the python-neutronclient
> > and expect it to work with neutron up until the newton
> > release/python-neutronclient 6.0.0. After this point, if you want a
> > release that works for Ocata or newer, you need to proactively request a
> > release [5], and reach out to a member of the neutron release team [3]
> > for approval.
> > Assuming that the vpnaas CI is green, you can expect to have a working
> > vpnaas system upon release of its package in the foreseeable future.
> > Outstanding bugs and new bug reports will be rejected on the basis of
> > lack of engineering resources interested in helping out in the typical
> > OpenStack review workflow.
> > Since we are freezing the development of the neutron CLI in favor of the
> > openstack unified client (OSC), the lack of a plan to make the VPN
> > commands available in the OSC CLI means that at some point in the future
> > the neutron client CLI support for vpnaas may be dropped (though I don't
> > expect this to happen any time soon).
> >
> > Can this be reversed?
> >
> > If you are interested in reversing this decision, now is the time to
> > step up. That said, we won't be reversing the decision for Ocata. There
> > is quite a ramp-up curve to make neutron-vpnaas worthy of being
> > classified as a neutron stadium project, and that means addressing all
> > the gaps identified in [6]. If you are interested, please reach out, and
> > I will work with you to add your account to [4], so that you can drive
> > the neutron-vpnaas agenda going forward.
> >
> > Please do not hesitate to reach out to ask questions and/or request
> > clarifications.
>
> hi,
>
> i'm interested in working on the project.
> well, at least on the parts which are used by networking-midonet.
>
> >
> > Cheers,
> > Armando
> >
> > [1] https://review.openstack.org/#/c/392010/
> > [2] https://review.openstack.org/#/c/397924/
> > [3] https://review.openstack.org/#/admin/groups/150,members
> > [4] https://review.openstack.org/#/admin/groups/502,members
> > [5] https://github.com/openstack/releases
> > [6] http://specs.openstack.org/openstack/neutron-specs/specs/stadium/ocata/neutron-vpnaas.html
> >


Re: [openstack-dev] [all][tc] Allowing Teams Based on Vendor-specific Drivers

2016-12-02 Thread Armando M.
On 30 November 2016 at 02:23, Kevin Benton  wrote:

> >I'll let someone from the Neutron team fill in the details behind their
> >decision, because I don't want to misrepresent them.
>
> I can shed a bit of light on this since I'm a core and had been working
> for a driver vendor at the time of the split. There were a few areas of
> contention:
>
> * Releases and stable branches:
> Vendors develop features for their driver and want them available to all
> of their customers immediately after they do their own QA. Additionally,
> they want them available to the customers running security-only and even
> EOL branches. This obviously violates the release process for upstream
> openstack stuff, so terrible, terrible things were done to apply patches to
> these old branches at customer sites.
>

This is actually a good point worth emphasising because this might have
been unique to our situation at the time: there was an infra patch applied
to all neutron stadium projects that modified gerrit ACLs so that stable
backports would be under the control of the neutron-stable-main team.

Because of the example that Kevin described, members of the team were faced
with the paradox of having to either turn a blind eye, or try to fight the
battle of educating contributors and fixing the 'malpractice' at the root.
Now irrespective of whether the openstack stable policy is deemed too rigid
by some or not, we started to observe that within the same governance we
had individual initiatives behaving totally differently, so differently
that some of us started to wonder what the stadium was for, what was the
point of it, and whether it was misused as a marketing tool.

That's when I came up with the proposal of defining the neutron stadium as
a list of projects that behave consistently and adhere to a common set of
agreed principles, such as common backporting strategies, testing
procedures, including our ability to claim the entire technology stack to
be fully open and completely exercised with upstream infra resources: a
list of projects that any member of the neutron core team should be able to
stand behind and support without too many ideological clashes.

It's been a long journey and we're almost at the end of it. The neutron
core team has been very supportive of this journey. Now I am not sure
whether they did that just to make me happy and will undo all of it when I
step down :) but I genuinely think it has been a great effort that allowed
us to improve what we've been building by means of setting ourselves
achievable and measurable goals.


>
> * Pass-through drivers:
> In response to the issue above, many vendors ended up creating
> 'vendor-lib' or an HTTP/RPC API to which their Neutron in-tree driver would
> just pass every call with as little logic as possible. When drivers went
> this direction, we could never tell their current functioning state because
> we were always one vendor release (of either vendor-lib or vendor HTTP API)
> away from them breaking something.
>
> IIRC there was a design session in the summit about Cinder having this
> problem. They were trying to determine how thin a driver was allowed to be
> before the cores would refuse to accept it. I don't think they reached a
> consensus on what the limit is or if there should even be a limit.
>
> * Changes impossible to judge for cores:
> For the logic changes that do occur in tree, cores could only really tell
> if they looked like correct python and appeared to do something sane at a
> very high level. Judging if the change even worked was entirely dependent
> on a good 3rd-party CI response. Judging things like backwards
> compatibility with older vendor backends was completely out of the question
> because no vendor offered a full matrix CI test with every version of their
> product. So reviewing driver changes became somewhat of a rubber stamping
> process that many were not interested in and/or comfortable doing.
>
>
> >I hope I'm not the only one who thinks drivers are important?
>
> Of course we care about drivers (see neutron-lib effort). However, it
> wasn't clear what the point of having them in tree was when cores couldn't
> reason about the changes or even try them without special-purpose hardware.
> How do you foresee the drivers improving if we bring them back in tree?
>
> On Tue, Nov 29, 2016 at 11:08 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Zane Bitter's message of 2016-11-29 12:36:03 -0500:
>> > On 29/11/16 10:28, Doug Hellmann wrote:
>> > > Excerpts from Chris Friesen's message of 2016-11-29 09:09:17 -0600:
>> > >> On 11/29/2016 08:03 AM, Doug Hellmann wrote:
>> > >>> I'll rank my preferred solutions, because I don't actually like any
>> of
>> > >>> them.
>> > >>
>> > >> Just curious...what would you "actually like"?
>> > >>
>> > >> Chris
>> > >>
>> > >
>> > > My preference is to have teams just handle the drivers voluntarily,
>> > > without needing to make it a rule or provide a way to have teams
>> > > that only work on

[openstack-dev] Recall: [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread P Kumaralingam
P Kumaralingam would like to recall the message, "[openstack-dev] [nova] 
Nominating Stephen Finucane for nova-core".


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread P Kumaralingam
Hi Siva/Anusha,
What is the meaning of +1/-1 mentioned in the mail below…

Also please subscribe to this mailing list… (on Monday).

Thanks & Regards,
P. Kumaralingam

From: Alex Xu [mailto:sou...@gmail.com]
Sent: Saturday, December 03, 2016 6:06 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

+1

2016-12-02 23:22 GMT+08:00 Matt Riedemann <mrie...@linux.vnet.ibm.com>:
I'm proposing that we add Stephen Finucane to the nova-core team. Stephen has 
been involved with nova for at least around a year now, maybe longer, my 
ability to tell time in nova has gotten fuzzy over the years. Regardless, he's 
always been eager to contribute and over the last several months has done a lot 
of reviews, as can be seen here:

https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com

http://stackalytics.com/report/contribution/nova/180

Stephen has been a main contributor and mover for the config option cleanup 
series over the last few cycles, and he's a go-to person for a lot of the 
NFV/performance features in Nova like NUMA, CPU pinning, huge pages, etc.

I think Stephen does quality reviews, leaves thoughtful comments, knows when to 
hold a +1 for a patch that needs work, and when to hold a -1 from a patch that 
just has some nits, and helps others in the project move their changes forward, 
which are all qualities I look for in a nova-core member.

I'd like to see Stephen get a bit more vocal / visible, but we all handle that 
differently and I think it's something Stephen can grow into the more involved 
he is.

So with all that said, I need a vote from the core team on this nomination. I 
honestly don't care to look up the rules too much on number of votes or 
timeline, I think it's pretty obvious once the replies roll in which way this 
goes.

--

Thanks,

Matt Riedemann




[openstack-dev] [glance] priorities for the coming week

2016-12-02 Thread Brian Rosmaita
This is aimed particularly at Glance core reviewers, many of whom have
been very quiet lately.  Ideally, people will reply to this message
saying "I've got #2", for example, so that we don't duplicate efforts.

As discussed at the Glance weekly meeting yesterday, the priorities for
12/1 through 12/8 are:

Highest priority:

(1) rolling upgrades spec:
https://review.openstack.org/#/c/331489/
Stuart is actively reviewing; we need someone else to step up (Hemanth
and I are co-authors, so neither of us can +2).  It would be good to
have a non-Rackspace person so we don't get too inbred on this thing.

(2) database strategy for rolling upgrades:
https://review.openstack.org/#/c/331740/
Again, Hemanth and I are co-authors of this spec and so we can't +2 it
ourselves, and non-Rackspace people would be preferable to avoid
groupthink.   (Erno has reviewed, but only put a +1 on it because he's
not comfortable with database work.)  If you want to see a video
explaining the approach and giving a demo, look at:
https://www.youtube.com/watch?v=Z4iwJRlPqOw

(3) glance expand/contract migrations with alembic:
https://review.openstack.org/#/c/374278/
We need another +2 on this one, preferably from a non-Rackspace person.

The above three specs need to be reviewed as soon as possible.  We are
blocking Alex and Hemanth, and O-2 is fast approaching.


Really high priority (would be highest if the specs were already approved):

(4) Patch to fix a glance_store regression:
https://review.openstack.org/#/c/387719/
and patch to prevent a related backend misconfiguration:
https://review.openstack.org/#/c/388944/

(5) Patch to enable better request-id tracking:
https://review.openstack.org/#/c/352892/
This will be nice for operators, let's get it reviewed and merged!

(6) Request for some insights and opinions for bug
https://bugs.launchpad.net/glance/+bug/1585917


Please take a look:
(7) glanceclient problem: https://review.openstack.org/#/c/319960/

thanks,
brian



Re: [openstack-dev] [all][tc] Allowing Teams Based on Vendor-specific Drivers

2016-12-02 Thread Armando M.
On 29 November 2016 at 10:08, Doug Hellmann  wrote:

> Excerpts from Zane Bitter's message of 2016-11-29 12:36:03 -0500:
> > On 29/11/16 10:28, Doug Hellmann wrote:
> > > Excerpts from Chris Friesen's message of 2016-11-29 09:09:17 -0600:
> > >> On 11/29/2016 08:03 AM, Doug Hellmann wrote:
> > >>> I'll rank my preferred solutions, because I don't actually like any
> of
> > >>> them.
> > >>
> > >> Just curious...what would you "actually like"?
> > >>
> > >> Chris
> > >>
> > >
> > > My preference is to have teams just handle the drivers voluntarily,
> > > without needing to make it a rule or provide a way to have teams
> > > that only work on a driver. That's not one of the options we proposed,
> > > but the results are like what we would get with option 6 (minus the
> > > precedent of the TC telling teams what code they must manage).
> >
> > I don't have a lot of background on why the driver was removed from the
> > Neutron stadium, but reading between the lines it sounds like you think
> > that Neutron made the Wrong Call, and that you would like, in order of
> > preference:
> >
> > a) Neutron to start agreeing with you; or
> > b) The TC to tell Neutron to agree with you; or
>
> I hope I'm not the only one who thinks drivers are important?
>
> I would prefer not to impose obligations on anyone. I wrote up that
> option to explore what it would look like, not because I think it's
> the best outcome.  At the same time, the current approach is actively
> harmful to the overall health of the community by pushing away
> contributors and useful contributions, especially considering the
> different responses to vendor-related issues in other teams.  And
> this does fall within the scope of issues and policies the TC is
> meant to manage.
>
> > c) To do an end run around Neutron by adding it as a separate project
>
> I wouldn't categorize that last one as an end-run. We wouldn't be
> adding the driver team to Neutron, we would be adding it to OpenStack.
> The Neutron team would have no more responsibility for the output of
> a driver team than they do anyone else.
>
> > Individual projects (like Neutron) have pretty wide latitude to add
> > repositories if they want, and are presumably closer to the issues than
> > anyone. So it seems strange that we're starting with a discussion about
> > how to override their judgement, rather than one about why we think
> > that's necessary.
>
> I did, in the original post, try to explain why I think it's necessary.
>
>   The OpenStack community wants to encourage collaboration by
>   emphasizing contributions to projects that abstract differences
>   between vendor-specific products, while still empowering vendors
>   to integrate their products with OpenStack through drivers that
>   can be consumed by the abstraction layers
>
> In addition to wanting collaboration between experts in a given
> field, projects support drivers to give deployers choices. Encouraging
> vendors to write drivers furthers both goals. It also encourages
> those same vendors to be active in the community in other ways,
> such as sponsoring events and the Foundation. Whether we achieve
> *that* goal depends on a lot of factors, and we're more successful
> with some vendors than others. Turning away contributions does not
> encourage their participation in any way I can understand.
>
> > What are the obstacles to the Neutron team agreeing to host these
> > drivers? Perhaps the TC is in a position to remove some of those
> > obstacles? That seems preferable to imposing new obligations on projects.
>
> I'll let someone from the Neutron team fill in the details behind their
> decision, because I don't want to misrepresent them.
>

I replied to Zane's initial email. I hope that provides some insight as to
why we went down the path we did.

Thanks,
Armando


>
> Doug
>
> >
> > cheers,
> > Zane.
> >
>


Re: [openstack-dev] [all][tc] Allowing Teams Based on Vendor-specific Drivers

2016-12-02 Thread Armando M.
On 29 November 2016 at 09:36, Zane Bitter  wrote:

> On 29/11/16 10:28, Doug Hellmann wrote:
>
>> Excerpts from Chris Friesen's message of 2016-11-29 09:09:17 -0600:
>>
>>> On 11/29/2016 08:03 AM, Doug Hellmann wrote:
>>>
 I'll rank my preferred solutions, because I don't actually like any of
 them.

>>>
>>> Just curious...what would you "actually like"?
>>>
>>> Chris
>>>
>>>
>> My preference is to have teams just handle the drivers voluntarily,
>> without needing to make it a rule or provide a way to have teams
>> that only work on a driver. That's not one of the options we proposed,
>> but the results are like what we would get with option 6 (minus the
>> precedent of the TC telling teams what code they must manage).
>>
>
> I don't have a lot of background on why the driver was removed from the
> Neutron stadium, but reading between the lines it sounds like you think
> that Neutron made the Wrong Call, and that you would like, in order of
> preference:
>

In a nutshell: scalability. The list became huge, and the core team was put
in charge of dealing with release requests, backport requests, infra,
governance and doc changes, etc. Any of those changes required a neutron
liaison vouching for them. This became untenable and distracting, and it
defeated the whole point of breaking down the monolithic codebase we were
trying to move away from. I (the PTL since Mitaka) personally felt that we
needed to empower the individual efforts to be in charge of their own
destiny, while at the same time making sure that the neutron project as
described by the governance repo was cohesive and made sense to the eye of
someone looking at the project list.

If the eviction or exclusion of a driver caused a project and its
contributors to lose their ATC status, access to horizontal team services
(e.g. representation on docs.o.o, release.o.o), etc., I always thought
that was wrong; that should not have happened, and I hope this effort led
by Doug can fix that.

The neutron team cares about drivers, and I personally believe that they
are very important to the success of the OpenStack community. That's why
we enabled innovation by breaking them out and keeping/augmenting the
extension points provided by the core platform, so that they are not
stifled by the chokepoint that the core team may represent. At the same
time, I care about quality and consistency, and I want to proudly stand
behind the stuff I am involved in; as such I don't want to be erroneously
associated with initiatives that I (and the core team) cannot ever have
the bandwidth to deal with.


> a) Neutron to start agreeing with you; or
> b) The TC to tell Neutron to agree with you; or
> c) To do an end run around Neutron by adding it as a separate project
>
> Individual projects (like Neutron) have pretty wide latitude to add
> repositories if they want, and are presumably closer to the issues than
> anyone. So it seems strange that we're starting with a discussion about how
> to override their judgement, rather than one about why we think that's
> necessary.
>
> What are the obstacles to the Neutron team agreeing to host these drivers?
> Perhaps the TC is in a position to remove some of those obstacles? That
> seems preferable to imposing new obligations on projects.
>
> cheers,
> Zane.
>
>


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Alex Xu
+1

2016-12-02 23:22 GMT+08:00 Matt Riedemann :

> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer,
> my ability to tell time in nova has gotten fuzzy over the years.
> Regardless, he's always been eager to contribute and over the last several
> months has done a lot of reviews, as can be seen here:
>
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
>
> http://stackalytics.com/report/contribution/nova/180
>
> Stephen has been a main contributor and mover for the config option
> cleanup series over the last few cycles, and he's a go-to person for a lot of
> the NFV/performance features in Nova like NUMA, CPU pinning, huge pages,
> etc.
>
> I think Stephen does quality reviews, leaves thoughtful comments, knows
> when to hold a +1 for a patch that needs work, and when to hold a -1 from a
> patch that just has some nits, and helps others in the project move their
> changes forward, which are all qualities I look for in a nova-core member.
>
> I'd like to see Stephen get a bit more vocal / visible, but we all handle
> that differently and I think it's something Stephen can grow into the more
> involved he is.
>
> So with all that said, I need a vote from the core team on this
> nomination. I honestly don't care to look up the rules too much on number
> of votes or timeline, I think it's pretty obvious once the replies roll in
> which way this goes.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>


[openstack-dev] [kolla] Question about config cinder volume

2016-12-02 Thread Jason HU
Hi Kolla team,
I want to use Cinder with LVM as the backend driver. My questions are:
1. Will Glance automatically use a Cinder volume as its storage when I enable Cinder?
2. I heard that kolla has a third-party plugin method which can be used to do 
some setup on the target node. Can it be used to set up the cinder-volume VG? Is 
there any example of doing this?

3. The storage system is an HDS AMS2300, which does not seem to be supported by 
any Cinder driver, so I have to use the LVM backend. According to the cinder/glance 
bug mentioned in the Kolla doc, does that mean I cannot deploy a multi-controller 
scenario unless I use Ceph?


B.R.,
Zhijiang HU


Re: [openstack-dev] [release] Re: [neutron][networking-midonet] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed

2016-12-02 Thread Takashi Yamamoto
On Tue, Nov 29, 2016 at 11:53 AM, Takashi Yamamoto
 wrote:
> release team,
>
> can we (networking-midonet) branch stable/newton from a past commit
> with an RC tag, backport some changes [1], and then cut the first release
> on the branch?

to answer myself, an RC or beta-looking tag doesn't seem to be allowed for
release:independent projects. [1]
so i went ahead with 3.0.0. [2]

[1] 
https://github.com/openstack/releases/blob/745554fdec87b18fe0a39fa25cdc481b23f28d24/openstack_releases/versionutils.py#L48-L49
[2] https://review.openstack.org/#/c/404078/

>
> [1] some additional features without db migrations (qos, lbaasv2, ...) and
> removal of some unsupported code (lbaasv1, ...)
>
> On Fri, Nov 25, 2016 at 11:32 PM, Ihar Hrachyshka  wrote:
>>
>>> On 25 Nov 2016, at 15:23, Takashi Yamamoto  wrote:
>>>
>>> On Fri, Nov 25, 2016 at 8:02 PM, Ihar Hrachyshka  
>>> wrote:

> On 25 Nov 2016, at 11:02, Takashi Yamamoto  wrote:
>
> On Fri, Nov 25, 2016 at 6:54 PM, Ihar Hrachyshka  
> wrote:
>>
>>> On 25 Nov 2016, at 09:25, Takashi Yamamoto  
>>> wrote:
>>>
>>> On Fri, Nov 25, 2016 at 5:18 PM, Ihar Hrachyshka  
>>> wrote:

> On 25 Nov 2016, at 05:26, Takashi Yamamoto  
> wrote:
>
> hi,
>
> networking-midonet doesn't have stable/newton branch yet.
> newton jobs failures are false alarms.
>
> branching has been delayed because development of some features
> planned for newton has not been completed yet.
>
> the plan is to revert ocata-specific changes after branching newton.

 I don’t think it’s a good idea since you will need to tag a release on 
 branch creation, that is supposed to be compatible with next releases 
 in that same branch.
>>>
>>> can't we create the tag after the revert?
>>>
>>
>> No, that’s release team requirement that they branch on a release tag.
>
> ok, i didn't know the requirement. thank you.
>
>>
>>> anyway no one think this is a good idea.
>>> it's just an unfortunate compromise we ended up.
>>> we are trying to make the schedule better for next release.
>>
>> It would make more sense to tag on a compatible commit from the past and 
>> consider it a first stable release. (Of course it means that feature 
>> development would need to be aligned appropriately.)
>
> in that case, can we backport the features?
> (namely qos and lbaas drivers are in my mind)

 No, I don’t think so. Though maybe we can release an RC as the first tag 
 in the branch and backport features before releasing a final version? I 
 dunno, I guess you will need to talk to OpenStack release folks on how to 
 proceed.
>>>
>>> is it a release team matter?
>>> i thought these were a policy inside neutron.
>>> after all networking-midonet is release:independent.
>>
>> Neutron does not override global policies. I explicitly asked during the 
>> last summit if we can branch before a tag; the answer was no, it’s not an 
>> option.
>>
>> Adding [release] tag since it becomes a matter beyond neutron.
>>
>> Ihar



Re: [openstack-dev] [nova] placement/resource providers update 4

2016-12-02 Thread Jay Pipes
On Dec 2, 2016 5:21 PM, "Matt Riedemann"  wrote:

On 12/2/2016 12:04 PM, Chris Dent wrote:

>
>
> Latest news on what's going on with resource providers and the
> placement API. I've made some adjustments in the structure of this
> since last time[0]. The new structure tries to put the stuff we need to
> talk about, including medium and long term planning, at the top and
> move the stuff that is summaries of what's going on on gerrit towards
> the bottom. I think we need to do this to enhance the opportunities for
> asynchronous resolution of some of the topics on our plates. If we
> keep waiting until the next meeting where we are all there at the same
> time, stuff will sit for too long.
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html
>
>
> # Things to Think About
>
> (Note that I'm frequently going to be wrong or at least incomplete
> about the things I say here, because I'm writing off the top of my
> head. Half the point of writing this is to get it correct by
> collaborative action. If you see something that is wrong, please
> shout out in a response. This section is for discussion of stuff that
> isn't yet being tracked well or has vague conflicts.)
>
> The general goal with placement for Ocata is to have both the nova
> scheduler and resource tracker talking to the API to usefully limit
> the number of hosts that the scheduler evaluates when selecting
> destinations. There are several segments of work coming together to
> make this possible, some of which are further along than others.
>
> ## Update Client Side to Consider Aggregates
>
> When the scheduler requests a list of resource providers, that list
> ought to include compute nodes that are associated, via
> aggregates, with any shared resource providers (such as shared disk)
> that can satisfy the resource requirements in the request.
>
> Meanwhile, when a compute node places a VM that uses shared disk, the
> allocation of resources made by the resource tracker needs to go to
> the right resource providers.
>
> This is a thing we know we need to do but is not something for which
> (as far as I know) we've articulated a clear plan or really started
> on.
>

I'm glad I'm not the only one that was wondering what's going on with the
client side aggregates handling stuff.


I have it all done locally. Will push tomorrow...

Best,
- jay

I see the aggregates PUT/GET patches have merged but the resource tracker
stuff hasn't started, at least that I'm aware of. I was looking into this a
bit this week when writing up the Ocata priorities docs and needed to go
back into the generic-resource-pools spec from Newton to dig into the notes
on aggregates:

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html

There is a lot of detail in there, which is good - even though we
retrospected at the summit that we spent too much time on details in the
specs in Newton, I guess in this case it might pay off. :)

If I'm understanding correctly, a 'resource pool' in that spec when talking
about aggregates is really a set of resource providers tied to an
aggregate. So I could have 3 compute nodes A, B, C all using the same
shared storage cluster X. So A, B, and C are in an aggregate for X and we
have the resource providers for compute nodes A, B and C all related to
that aggregate X in the placement service. How that ties back into the
scheduler and resource tracker is a bit fuzzy to me at the moment, but if ^
is correct then I could probably figure the rest out by digging back into
the spec details.



> ## Update Scheduler to Request Limited Resource Providers
>
> The "Scheduler Filters in DB" spec[1] has merged along with its
> pair, "Filter Resource Providers by Request"[2], and the work has
> started[3].
>
> There are some things to consider as that work progresses:
>
> * The bit about aggregates in the previous section: the list of
>   returned resource providers needs to include associated providers.
>

nit: I think you mean associated _aggregates_ here.


  To quote Mr. Pipes:
>
>   we will only return resource providers to the scheduler that
>   are compute nodes in Ocata. the resource providers that the
>   placement service returns will either have the resources
>   requested or will be associated with aggregates that have
>   providers that match the requested resources.
>

An example might be useful here, but I'm sure there is probably already one
in the generic resource pools spec linked above. I think it means:

"have the resources requested"

- means this is a resource provider that satisfies a request for some type
of resource class, maybe DISK_GB.


"or will be associated with aggregates that have providers that match the
requested resources."

- means there is a shared storage resource provider that's associated to an
aggregate in the placement service and that aggregate is associated with
some compute node resource providers? So in my exa

Re: [openstack-dev] [tacker] Weekly meeting time slot - doodle poll

2016-12-02 Thread Sridhar Ramaswamy
Thanks to all those who responded. There was an overwhelming response
in favor of an early UTC time. I'm picking the following slot:

Wednesdays 0530 UTC

I've pushed a request to switch to this new time effective Dec 14th [1]. We
will continue to use the existing Tuesdays 1600 UTC slot for one more week
(Dec 6th).

[1] https://review.openstack.org/406390

On Tue, Nov 29, 2016 at 10:17 PM, Sridhar Ramaswamy 
wrote:

> Given the natural changes in the mix of our active members and the recent
> daylight savings time change, I'm opening a doodle poll to find the best
> slot for our weekly meeting with max coverage. Please respond to the doodle
> poll below,
>
> http://doodle.com/poll/ee9p34kfhskd2ucc
>
> Note, this is the same poll shared in today's weekly meeting, though I've
> added more timeslots to pick from. If you've already responded, please log
> back one more time and select all possible slots.
>
> thanks,
> Sridhar
>


Re: [openstack-dev] [nova] placement/resource providers update 4

2016-12-02 Thread Matt Riedemann

On 12/2/2016 12:04 PM, Chris Dent wrote:



Latest news on what's going on with resource providers and the
placement API. I've made some adjustments in the structure of this
since last time[0]. The new structure tries to put the stuff we need to
talk about, including medium and long term planning, at the top and
move the stuff that is summaries of what's going on on gerrit towards
the bottom. I think we need to do this to enhance the opportunities for
asynchronous resolution of some of the topics on our plates. If we
keep waiting until the next meeting where we are all there at the same
time, stuff will sit for too long.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html


# Things to Think About

(Note that I'm frequently going to be wrong or at least incomplete
about the things I say here, because I'm writing off the top of my
head. Half the point of writing this is to get it correct by
collaborative action. If you see something that is wrong, please
shout out in a response. This section is for discussion of stuff that
isn't yet being tracked well or has vague conflicts.)

The general goal with placement for Ocata is to have both the nova
scheduler and resource tracker talking to the API to usefully limit
the number of hosts that the scheduler evaluates when selecting
destinations. There are several segments of work coming together to
make this possible, some of which are further along than others.

## Update Client Side to Consider Aggregates

When the scheduler requests a list of resource providers, that list
ought to include compute nodes that are associated, via
aggregates, with any shared resource providers (such as shared disk)
that can satisfy the resource requirements in the request.

Meanwhile, when a compute node places a VM that uses shared disk, the
allocation of resources made by the resource tracker needs to go to
the right resource providers.

This is a thing we know we need to do but is not something for which
(as far as I know) we've articulated a clear plan or really started
on.


I'm glad I'm not the only one that was wondering what's going on with 
the client side aggregates handling stuff. I see the aggregates PUT/GET 
patches have merged but the resource tracker stuff hasn't started, at 
least that I'm aware of. I was looking into this a bit this week when 
writing up the Ocata priorities docs and needed to go back into the 
generic-resource-pools spec from Newton to dig into the notes on aggregates:


https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html

There is a lot of detail in there, which is good - even though we 
retrospected at the summit that we spent too much time on details in the 
specs in Newton, I guess in this case it might pay off. :)


If I'm understanding correctly, a 'resource pool' in that spec when 
talking about aggregates is really a set of resource providers tied to 
an aggregate. So I could have 3 compute nodes A, B, C all using the same 
shared storage cluster X. So A, B, and C are in an aggregate for X and 
we have the resource providers for compute nodes A, B and C all related 
to that aggregate X in the placement service. How that ties back into 
the scheduler and resource tracker is a bit fuzzy to me at the moment, 
but if ^ is correct then I could probably figure the rest out by digging 
back into the spec details.
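
For illustration only, here is a minimal Python sketch of the relationship
described above (compute nodes A, B and C sharing storage cluster X through an
aggregate). The names and the filtering helper are invented for this example
and are not the actual placement data model or API; the point is simply that a
node with no local DISK_GB can still satisfy a request via a shared provider
in the same aggregate.

    from collections import defaultdict

    # resource provider name -> available inventory per resource class
    inventories = {
        "compute-A": {"VCPU": 16, "MEMORY_MB": 32768},
        "compute-B": {"VCPU": 16, "MEMORY_MB": 32768},
        "compute-C": {"VCPU": 16, "MEMORY_MB": 32768},
        "shared-storage-X": {"DISK_GB": 10000},
    }

    # aggregate -> member resource providers
    aggregates = {
        "aggregate-X": {"compute-A", "compute-B", "compute-C",
                        "shared-storage-X"},
    }

    def candidate_compute_nodes(request):
        """Return compute nodes that can satisfy the request, counting
        resources offered by providers sharing an aggregate with them."""
        result = []
        for node in (p for p in inventories if p.startswith("compute-")):
            available = defaultdict(int, inventories[node])
            for members in aggregates.values():
                if node in members:
                    for other in members - {node}:
                        for rc, amount in inventories[other].items():
                            available[rc] += amount
            if all(available.get(rc, 0) >= amt for rc, amt in request.items()):
                result.append(node)
        return result

    print(candidate_compute_nodes({"VCPU": 2, "MEMORY_MB": 2048, "DISK_GB": 100}))
    # -> ['compute-A', 'compute-B', 'compute-C'] (DISK_GB comes from shared-storage-X)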




## Update Scheduler to Request Limited Resource Providers

The "Scheduler Filters in DB" spec[1] has merged along with its
pair, "Filter Resource Providers by Request"[2], and the work has
started[3].

There are some things to consider as that work progresses:

* The bit about aggregates in the previous section: the list of
  returned resource providers needs to include associated providers.


nit: I think you mean associated _aggregates_ here.


  To quote Mr. Pipes:

  we will only return resource providers to the scheduler that
  are compute nodes in Ocata. the resource providers that the
  placement service returns will either have the resources
  requested or will be associated with aggregates that have
  providers that match the requested resources.


An example might be useful here, but I'm sure there is probably already 
one in the generic resource pools spec linked above. I think it means:


"have the resources requested"

- means this is a resource provider that satisfies a request for some 
type of resource class, maybe DISK_GB.


"or will be associated with aggregates that have providers that match 
the requested resources."


- means there is a shared storage resource provider that's associated to 
an aggregate in the placement service and that aggregate is associated 
with some compute node resource providers? So in my example up above, 
does that mean we have a resource provider for the shared storage 
cluster, let's call it X, which is associated with aggregate (again, X), 
and compute nodes A, B and C are in that aggreg

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Matt Riedemann

On 12/2/2016 8:38 AM, Amrith Kumar wrote:

Thierry, when we were adding the #openstack-swg group, we had this
conversation and I observed that my own preference would be for a project's
meetings to be in that project's room. It makes it easier to then search for
logs for something (say SWG-related) in the SWG room; I do this
regularly for Trove, but I have to store text logs of the trove meetings (held
in #openstack-meeting-alt) alongside the logs of the trove room, #openstack-trove.

While I understand the simplicity of just hanging around in four or five
conference rooms and being available for pings, I submit to you that if
someone wants to ping you and you are not in that project's room, they know
where to go find you if you are a person who hangs around.

So I submit to you that rather than creating #openstack-meeting-5, let's
outlaw the meeting rooms altogether and allow projects to meet in their own
rooms. People who are interested in a project can hang out in its room
(which people do quite a bit anyway), and others can just hang out in
#openstack, #openstack-dev or #openstack-infra.

-amrith



I tend to agree with Amrith here. If there are smaller projects/teams, 
then I'm not sure why they can't just have a meeting in their channel, 
unless it's an issue for the meeting bot?


But as we recently discussed for the stable team meetings, we don't 
really need to be in a separate -alt room for those when we have the 
channel: anyone who cares about stable enough to be in the meeting 
is already in that channel, but sometimes the people in that channel 
forget about the meeting, or which of the 20 alt rooms it's being held 
in, so they miss it (or Tony is biking down a volcano and we just don't 
have a meeting).


I'm only lurking in #openstack-meeting because of a rare ping, or 
mention, and I *MUST* be present to defend my honor, else I wouldn't be 
in there.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][placement][ironic] Progress on custom resource classes

2016-12-02 Thread Matt Riedemann

On 12/2/2016 11:10 AM, Jay Pipes wrote:

Ironic colleagues, heads up, please read the below fully! I'd like your
feedback on a couple outstanding questions.

tl;dr
-

Work for custom resource classes has been proceeding well this cycle,
and we're at a point where reviews from the Ironic community and
functional testing of a series of patches would be extremely helpful.

https://review.openstack.org/#/q/topic:bp/custom-resource-classes+status:open


History
---

As a brief reminder, in Newton, the Ironic community added a
"resource_class" attribute to the primary Node object returned by the
GET /nodes/{uuid} API call. This resource class attribute represents the
"hardware profile" (for lack of a better term) of the Ironic baremetal
node.

In Nova-land, we would like to stop tracking Ironic baremetal nodes as
collections of vCPU, RAM, and disk space -- because an Ironic baremetal
node is consumed atomically, not piecemeal like a hypervisor node is.
We'd like to have the scheduler search for an appropriate Ironic
baremetal node using a simplified search that simply looks for a node that
has a particular hardware profile [1] instead of searching for nodes
that have a certain amount of VCPU, RAM, and disk space.

In addition to the scheduling and "boot request" alignment issues, we
want to fix the reporting and accounting of resources in an OpenStack
deployment containing Ironic. Currently, Nova reports an aggregate
amount of CPU, RAM and disk space but doesn't understand that, when
Ironic is in the mix, a significant chunk of that CPU, RAM and disk
isn't "targetable" for virtual machines. We would much prefer to have
resource reporting look like:

 48 vCPU total, 14 used
 204800 MB RAM total, 10240 used
 1340 GB disk total, 100 used
 250 baremetal profile "A" total, 120 used
 120 baremetal profile "B" total, 16 used

instead of mixing all the resources together.

Need review and functional testing on a few things
--

Now that the custom resource classes REST API endpoint is established
[2] in the placement REST API, we are figuring out an appropriate way of
migrating the existing inventory and allocation records for Ironic
baremetal nodes from the "old-style" way of storing inventory for VCPU,
MEMORY_MB and DISK_GB resources towards the "new-style" way of storing a
single inventory record of amount 1 for the Ironic node's
"resource_class" attribute.

The patch that does this online data migration (from within the
nova-compute resource tracker) is here:

https://review.openstack.org/#/c/404472/

I'd really like to get some Ironic contributor eyeballs on that patch
and provide me feedback on whether the logic in the
_cleanup_ironic_legacy_allocations() method is sound.

There are still a couple things that need to be worked out:

1) Should the resource tracker auto-create custom resource classes in
the placement REST API when it sees an Ironic node's resource_class
attribute set to a non-NULL value and there is no record of such a
resource class in the `GET /resource-classes` placement API call? My gut
reaction to this is "yes, let's just do it", but I want to check with
operators and Ironic devs first. The alternative is to ignore errors
about "no such resource class exists", log a warning, and wait for an
administrator to create the custom resource classes that match the
distinct Ironic node resource classes that may exist in the deployment.


Seems to me that if you had to go to the trouble of setting the 
node.resource_class field in Ironic already, then Nova could be helpful 
and just auto-create the custom resource class to match that if it 
doesn't exist. That's one less manual step that operators need to deal 
with to start using this stuff, which seems like goodness.
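
As a rough sketch only, the auto-create behaviour discussed above might look
something like the following in the resource tracker. The placement client
helper and its method names are invented here, and the CUSTOM_ name
normalization shown is an assumption; only the idea of creating the class
named by the node's resource_class when placement does not yet know it comes
from the thread.

    import re

    def _normalize_resource_class(name):
        # Assumed convention: "baremetal-gold" -> "CUSTOM_BAREMETAL_GOLD"
        return "CUSTOM_" + re.sub(r"[^A-Z0-9_]", "_", name.upper())

    def ensure_resource_class(placement, node_resource_class):
        """Auto-create the custom resource class for an Ironic node if needed."""
        if not node_resource_class:
            return None  # legacy node without a resource_class, nothing to do
        rc_name = _normalize_resource_class(node_resource_class)
        if placement.get_resource_class(rc_name) is None:   # hypothetical helper
            placement.create_resource_class(rc_name)        # hypothetical helper
        return rc_name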




2) How are we going to modify the Nova baremetal flavors to specify that
the flavor requires one resource, where that resource is one of a set of
custom resource classes? For example, let's say I have an Ironic
installation with 10 different Ironic node hardware profiles. I've set
all my Ironic nodes' resource_class attributes to match one of those
hardware profiles. I now need to set up a Nova flavor that requests one
of those ten hardware profiles. How do I do that? One solution might be
to have a hacky flavor extra_spec called
"ironic_resource_classes=CUSTOM_METAL_A,CUSTOM_METAL_B..."  or similar.
When we construct the request_spec object that gets sent to the
scheduler (and later the placement service), we could look for that
extra_spec and construct a special request to the placement service that
says "find me a resource provider that has a capacity of 1 for any of
the following resource classes...". The flavor extra_specs thing is a
total hack, admittedly, but flavors are the current mess that Nova has
to specify requested resources and we need to work within that mess
unfortunately...
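
To make the extra_spec idea above a bit more concrete, here is a sketch of how
such a hack could be interpreted when building the request to the placement
service. The "ironic_resource_classes" key is the hypothetical hack from the
message, and the request structure below is invented for illustration; neither
is an implemented Nova or placement interface.

    flavor_extra_specs = {
        "ironic_resource_classes": "CUSTOM_METAL_A,CUSTOM_METAL_B",
    }

    def build_placement_request(extra_specs):
        """Turn the hacky extra_spec into an 'any of these classes' request."""
        raw = extra_specs.get("ironic_resource_classes", "")
        wanted = [c.strip() for c in raw.split(",") if c.strip()]
        # "find me a resource provider that has a capacity of 1 for any of
        # the following resource classes"
        return {"resources_any_of": [{rc: 1} for rc in wanted]}

    print(build_placement_request(flavor_extra_specs))
    # -> {'resources_any_of': [{'CUSTOM_METAL_A': 1}, {'CUSTOM_METAL_B': 1}]}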

The following patch series:

https://review.openstack.org/#/q/topic:bp/custom-resource-classes+s

Re: [openstack-dev] [nova] placement/resource providers update 4

2016-12-02 Thread Jay Pipes
Thanks for the update, Chris, appreciated. No comments from me other 
than to say thanks :)


On 12/02/2016 01:04 PM, Chris Dent wrote:



Latest news on what's going on with resource providers and the
placement API. I've made some adjustments in the structure of this
since last time[0]. The new structure tries to put the stuff we need to
talk about, including medium and long term planning, at the top and
move the stuff that is summaries of what's going on on gerrit towards
the bottom. I think we need to do this to enhance the opportunities for
asynchronous resolution of some of the topics on our plates. If we
keep waiting until the next meeting where we are all there at the same
time, stuff will sit for too long.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html


# Things to Think About

(Note that I'm frequently going to be wrong or at least incomplete
about the things I say here, because I'm writing off the top of my
head. Half the point of writing this is to get it correct by
collaborative action. If you see something that is wrong, please
shout out in a response. This section is for discussion of stuff that
isn't yet being tracked well or has vague conflicts.)

The general goal with placement for Ocata is to have both the nova
scheduler and resource tracker talking to the API to usefully limit
the number of hosts that the scheduler evaluates when selecting
destinations. There are several segments of work coming together to
make this possible, some of which are further along than others.

## Update Client Side to Consider Aggregates

When the scheduler requests a list of resource providers, that list
ought to include compute nodes that are associated, via
aggregates, with any shared resource providers (such as shared disk)
that can satisfy the resource requirements in the request.

Meanwhile, when a compute node places a VM that uses shared disk, the
allocation of resources made by the resource tracker needs to go to
the right resource providers.

This is a thing we know we need to do but is not something for which
(as far as I know) we've articulated a clear plan or really started
on.

## Update Scheduler to Request Limited Resource Providers

The "Scheduler Filters in DB" spec[1] has merged along with its
pair, "Filter Resource Providers by Request"[2], and the work has
started[3].

There are some things to consider as that work progresses:

* The bit about aggregates in the previous section: the list of
  returned resource providers needs to include associated providers.
  To quote Mr. Pipes:

  we will only return resource providers to the scheduler that
  are compute nodes in Ocata. the resource providers that the
  placement service returns will either have the resources
  requested or will be associated with aggregates that have
  providers that match the requested resources.

* There is unresolved debate about the structure of the request being
  made to the API. Is it POST or a GET, does it have a body or use
  query strings? The plan is to resolve this discussion in the review
  of the code at [3].

[1]
http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-scheduler-db-filters.html

[2]
http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-get-by-request.html

[3] https://review.openstack.org/#/c/386242/

## Docs

In addition to needing an api-ref we also need a placement-dev.rst to
go alongside the placement.rst. The -dev would mostly explain the how
and the why of the placement API architecture, how the testing works,
etc. That's mostly on me.

## Placement Upgrade/Installation issues

(This is a straight copy from the previous message)

In his response[4] to this topic Matt R pointed out todos for this
topic:

* get the placement-api enabled by default in the various bits of
  ocata CI
* ensure that microversions are being used on both sides of the
  placement API transactions (that's true in pending changes to
  both the API and the resource tracker)

[4]
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107177.html


## Long Term Stuff

### Making Claims in the Placement API

After Ocata the placement API will evolve to make claims, on the
/allocations endpoint. When presented with a set of resource
requirements, _the_ resource provider that satisfies those requirements
will be returned and the claim of resources made in a single step. To
quote Mr. Pipes again:

once we have a placement service actually doing claims, the
returned resource providers for an allocation will be the actual
resource providers that were allocated against (which include
*both* compute node providers as well as any resource provider of
a shared resource that was allocated)

Just so folk are aware.

### Moving Placement out of Nova

If this is something we ever plan to do (there appear to be multiple
points of view) then it is something we need to prepare for to 

Re: [openstack-dev] [qa] [openstack-health] Avoid showing non-official projects failure ratios

2016-12-02 Thread Ken'ichi Ohmichi
2016-12-02 5:39 GMT-08:00 Masayuki Igawa :
> Hi,
>
> On Fri, Dec 2, 2016 at 6:29 PM, Andreas Jaeger  wrote:
>> On 12/02/2016 10:03 AM, Thierry Carrez wrote:
>>> Ken'ichi Ohmichi wrote:
 Hi QA-team,

 In the big-tent policy, we continue creating new projects.
 On the other hand, some projects became non-active.
 That seems like a natural thing.

 Now openstack-health[1] shows a non-active project as a 100% failure ratio
 on "Project Status".
 The project has been non-official since
 https://review.openstack.org/#/c/324412/
 So I feel it would be nice to have a blacklist or something to make it
 disappear from the dashboard so we can concentrate on active projects'
 failures.

 Any thoughts?
>>>
>>> Yes, I totally agree we should only list active official projects in
>>> there, otherwise long-dead things like Cue will make the view look bad.
>>> Looks like the system adds new ones but does not remove anything ? It
>>> should probably take its list from [1].
>>>
>>> [1]
>>> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
>>
>> Is cue completely dead? Should we then retire it completely following
>> http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project ?
>>
>> It still has jobs setup and I see people submitting typo fixes etc.
>
> I'm not sure whether cue is dead or not. But I think we should fix the
> failure of the job or remove the periodic jobs. Otherwise, the job
> just wastes the resources of the OpenStack infra.

Yeah, that is a nice point.
And in this case openstack-health called attention to this waste of resources
on the infra, which is a good thing.
The failing job has already been removed with
https://review.openstack.org/#/c/404375/
so we will not see the failure on the dashboard soon; thanks for helping with that.

> But we should have the filter feature like a 'Project Type' of
> stackalitics, probably. I think it's useful for openstack-health
> users.

Yeah, it might be useful. But it is fine to wait and see the result of the above.
Maybe our motivation for the filter feature will decrease after that ;)

Thanks
Ken Ohmichi

---

>>
>>
>> Andreas
>> --
>>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>>HRB 21284 (AG Nürnberg)
>> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>>
>>


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Ken'ichi Ohmichi
2016-12-02 7:22 GMT-08:00 Matt Riedemann :
> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer, my
> ability to tell time in nova has gotten fuzzy over the years. Regardless,
> he's always been eager to contribute and over the last several months has
> done a lot of reviews, as can be seen here:
>
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
>
> http://stackalytics.com/report/contribution/nova/180
>
> Stephen has been a main contributor and mover for the config option cleanup
> series over the last few cycles, and he's a go-to person for a lot of the
> NFV/performance features in Nova like NUMA, CPU pinning, huge pages, etc.
>
> I think Stephen does quality reviews, leaves thoughtful comments, knows when
> to hold a +1 for a patch that needs work, and when to hold a -1 from a patch
> that just has some nits, and helps others in the project move their changes
> forward, which are all qualities I look for in a nova-core member.
>
> I'd like to see Stephen get a bit more vocal / visible, but we all handle
> that differently and I think it's something Stephen can grow into the more
> involved he is.
>
> So with all that said, I need a vote from the core team on this nomination.
> I honestly don't care to look up the rules too much on number of votes or
> timeline, I think it's pretty obvious once the replies roll in which way
> this goes.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Propose to normalize namespaces

2016-12-02 Thread Morales, Victor
Hey there, 

There is a mismatch of namespaces in neutron, which uses both AGENT and agent; this 
is addressed by Ihar in the patch [1].  That raised the question of whether 
oslo-config-generator should normalize these namespaces; maybe (with my limited 
knowledge of oslo.config) this change could be placed in the _clean_opts function [2].  
I personally like what Doug is suggesting [3]: emitting a warning wherever the case 
is mixed, which at least gives us an idea of the places that have the same issue 
so we can eventually normalize them.  Any thoughts on this?
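
To make the idea concrete, here is a rough sketch (in no way the actual
oslo-config-generator code; the function name and log wording are invented for
illustration) of what case-insensitive de-duplication plus a warning could look like:

import logging

LOG = logging.getLogger(__name__)


def normalize_namespaces(namespaces):
    """Collapse namespaces that differ only by case, warning about each clash."""
    seen = {}  # lower-cased name -> first spelling encountered
    result = []
    for ns in namespaces:
        key = ns.lower()
        if key not in seen:
            seen[key] = ns
            result.append(ns)
        elif seen[key] != ns:
            LOG.warning('Namespace %r differs only by case from %r; '
                        'keeping the first spelling.', ns, seen[key])
    return result


print(normalize_namespaces(['AGENT', 'agent', 'securitygroup']))
# -> ['AGENT', 'securitygroup'], with a warning logged about 'agent'

Something along those lines could either live in _clean_opts [2] or only emit the
warning without dropping anything, as Doug suggests.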

Regards, 

Victor Morales

[1] https://review.openstack.org/#/c/404362
[2] 
https://github.com/openstack/oslo.config/blob/master/oslo_config/generator.py#L331-L362
[3] https://bugs.launchpad.net/oslo.config/+bug/1646084/comments/2
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stepping down from core

2016-12-02 Thread Bhatia, Manjeet S
Henry,

Sad to see you stepping down, it was great learning experience working
With you. Thanks for all your help.

Best wishes !

Manjeet
> -Original Message-
> From: Henry Gessau [mailto:hen...@gessau.net]
> Sent: Thursday, December 1, 2016 2:51 PM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [Neutron] Stepping down from core
> 
> I've already communicated this in the neutron meeting and in some neutron
> policy patches, but yesterday the PTL actually updated the gerrit ACLs so I
> thought I'd drop a note here too.
> 
> My work situation has changed and leaves me little time to keep up with my
> duties as core reviewer, DB lieutenant, and drivers team member.
> 
> Working with the diverse and very talented contributors to Neutron has been
> the best experience of my career (which started before many of you were
> born).
> Thank you all for making the team such a great community. Because of you the
> project is thriving and will continue to be successful!
> 
> I will still be around on IRC, contribute some small patches here and there,
> and generally try to keep abreast of Neutron's progress. Don't hesitate to
> ping me.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Keen, Joe


On 12/2/16, 1:29 AM, "Mehdi Abaakouk"  wrote:

>On Fri, Dec 02, 2016 at 03:29:59PM +1100, Tony Breeds wrote:
>>On Thu, Dec 01, 2016 at 04:52:52PM +, Keen, Joe wrote:
>>
>>> Unfortunately there's nothing wrong on the Monasca side so far as we
>>>know.
>>>  We test new versions of the kafka-python library outside of Monasca
>>> before we bother to try integrating a new version.  Since 1.0 the
>>> kafka-python library has suffered from crashes and memory leaks severe
>>> enough that we've never attempted using it in Monasca itself.  We
>>>reported
>>> the bugs we found to the kafka-python project but they were closed once
>>> they released a new version.
>>
>>So Opening bugs isn't working.  What about writing code?
>
>The bug https://github.com/dpkp/kafka-python/issues/55
>
>Reopening it would be the right solution here.
>
>I can't reproduce the segfault either, and I agree with dpkp that it looks
>like a ujson issue.


The bug I had was: https://github.com/dpkp/kafka-python/issues/551

In the case of that bug ujson was not an issue.  The behaviour remained
even using the standard json library.  The primary issue I found with it
was a memory leak over successive runs of the test script.  Eventually the
leak became so bad that the OOM killer killed the process which caused the
segfault I was seeing.  The last version I tested was 1.2.1 and it still
leaked badly.  I'll need to let the benchmark script run for a while and
make sure it's not still leaking.

>
>And my bench seems to confirm the perf issue have been solved:
>(but not in the pointed version...)
>
>$ pifpaf run kafka python kafka_test.py
>kafka-python version: 0.9.5
>...
>fetch size 179200 -> 45681.8728864 messages per second
>fetch size 204800 -> 47724.3810674 messages per second
>fetch size 230400 -> 47209.9841092 messages per second
>fetch size 256000 -> 48340.7719787 messages per second
>fetch size 281600 -> 49192.9896743 messages per second
>fetch size 307200 -> 50915.3291133 messages per second
>
>$ pifpaf run kafka python kafka_test.py
>kafka-python version: 1.0.2
>
>fetch size 179200 -> 8546.77931323 messages per second
>fetch size 204800 -> 9213.30958314 messages per second
>fetch size 230400 -> 10316.668006 messages per second
>fetch size 256000 -> 11476.2285269 messages per second
>fetch size 281600 -> 12353.7254386 messages per second
>fetch size 307200 -> 13131.2367288 messages per second
>
>(1.1.1 and 1.2.5 have also the same issue)
>
>$ pifpaf run kafka python kafka_test.py
>kafka-python version: 1.3.1
>fetch size 179200 -> 44636.9371873 messages per second
>fetch size 204800 -> 44324.7085365 messages per second
>fetch size 230400 -> 45235.8283208 messages per second
>fetch size 256000 -> 45793.1044121 messages per second
>fetch size 281600 -> 44648.6357019 messages per second
>fetch size 307200 -> 44877.8445987 messages per second
>fetch size 332800 -> 47166.9176281 messages per second
>fetch size 358400 -> 47391.0057622 messages per second
>
>Looks like it works well now :)

It's good that the performance problem has been fixed.  The remaining
issues on the Monasca side are verifying that the batch send method we
were using in 0.9.5 still works with the new async behaviour, seeing if
our consumer auto balance still functions or converting to use the Kafka
internal auto balance in Kafka 0.10, and finding a way to do efficient
synchronous writes with the new async methods.
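
For anyone who wants to reproduce this, the kafka_test.py script itself isn't shown
above, so here is a minimal sketch of the kind of consumer-throughput sweep those
numbers suggest. It assumes a local broker (e.g. started with "pifpaf run kafka"), a
pre-populated topic named 'bench', and kafka-python's KafkaConsumer interface; the
topic name and the choice of max_partition_fetch_bytes as the fetch-size knob are
assumptions, not necessarily what Monasca's benchmark does:

import time

from kafka import KafkaConsumer

TOPIC = 'bench'
FETCH_SIZES = range(179200, 332801, 25600)   # the increments seen above

for fetch_size in FETCH_SIZES:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='earliest',
        consumer_timeout_ms=5000,              # stop once the topic runs dry
        max_partition_fetch_bytes=fetch_size,  # the knob being swept
    )
    start = time.time()
    count = sum(1 for _ in consumer)           # drain whatever is available
    elapsed = time.time() - start
    consumer.close()
    print('fetch size %d -> %.1f messages per second'
          % (fetch_size, count / elapsed if elapsed else 0.0))

Letting a loop like this run for a long time (and watching RSS across iterations) is
what would show the leak Joe describes.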


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 4

2016-12-02 Thread Chris Dent



Latest news on what's going on with resource providers and the
placement API. I've made some adjustments in the structure of this
since last time[0]. The new structure tries to put the stuff we need to
talk about, including medium and long term planning, at the top and
move the stuff that is summaries of what's going on on gerrit towards
the bottom. I think we need to do this to enhance the opportunities for
asynchronous resolution of some of the topics on our plates. If we
keep waiting until the next meeting where we are all there at the same
time, stuff will sit for too long.

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html

# Things to Think About

(Note that I'm frequently going to be wrong or at least incomplete
about the things I say here, because I'm writing off the top of my
head. Half the point of writing this is to get it correct by
collaborative action. If you see something that is wrong, please
shout out in a response. This section is for discussion of stuff that
isn't yet being tracked well or has vague conflicts.)

The general goal with placement for Ocata is to have both the nova
scheduler and resource tracker talking to the API to usefully limit
the number of hosts that the scheduler evaluates when selecting
destinations. There are several segments of work coming together to
make this possible, some of which are further along than others.

## Update Client Side to Consider Aggregates

When the scheduler requests a list of resource providers, that list
ought to include compute nodes that are associated, via
aggregates, with any shared resource providers (such as shared disk)
that can satisfy the resource requirements in the request.

Meanwhile, when a compute node places a VM that uses shared disk, the
allocation of resources made by the resource tracker needs to go to
the right resource providers.

This is a thing we know we need to do but is not something for which
(as far as I know) we've articulated a clear plan or really started
on.

## Update Scheduler to Request Limited Resource Providers

The "Scheduler Filters in DB" spec[1] has merged along with its
pair, "Filter Resource Providers by Request"[2], and the work has
started[3].

There are some things to consider as that work progresses:

* The bit about aggregates in the previous section: the list of
  returned resource providers needs to include associated providers.
  To quote Mr. Pipes:

  we will only return resource providers to the scheduler that
  are compute nodes in Ocata. the resource providers that the
  placement service returns will either have the resources
  requested or will be associated with aggregates that have
  providers that match the requested resources.

* There is unresolved debate about the structure of the request being
  made to the API. Is it a POST or a GET, and does it have a body or use
  query strings? The plan is to resolve this discussion in the review
  of the code at [3]; a rough sketch of the query-string form appears
  after the links below.

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-scheduler-db-filters.html
[2] 
http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-get-by-request.html
[3] https://review.openstack.org/#/c/386242/
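
For illustration only, a rough sketch of the GET-with-query-string variant of that
request (the 'resources' parameter name and the CLASS:amount encoding are assumptions
made for this example, not the settled placement API):

from urllib.parse import urlencode

requested = {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 40}
resources = ','.join('%s:%d' % (rc, amount)
                     for rc, amount in sorted(requested.items()))
url = '/resource_providers?' + urlencode({'resources': resources})
print(url)
# /resource_providers?resources=DISK_GB%3A40%2CMEMORY_MB%3A2048%2CVCPU%3A2

The POST variant would carry the same information in a JSON body instead.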

## Docs

In addition to needing an api-ref we also need a placement-dev.rst to
go alongside the placement.rst. The -dev would mostly explain the how
and the why of the placement API architecture, how the testing works,
etc. That's mostly on me.

## Placement Upgrade/Installation issues

(This is a straight copy from the previous message)

In his response[4] to this topic Matt R pointed out todos for this
topic:

* get the placement-api enabled by default in the various bits of
  ocata CI 
* ensure that microversions are being used on both sides of the
  placement API transactions (that's true in pending changes to
  both the API and the resource tracker)

[4] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107177.html

## Long Term Stuff

### Making Claims in the Placement API

After Ocata the placement API will evolve to make claims, on the
/allocations endpoint. When presented with a set of resource
requirements, _the_ resource provider that satisfies those requirements
will be returned and the claim of resources made in a single step. To
quote Mr. Pipes again:

once we have a placement service actually doing claims, the
returned resource providers for an allocation will be the actual
resource providers that were allocated against (which include
*both* compute node providers as well as any resource provider of
a shared resource that was allocated)

Just so folk are aware.

### Moving Placement out of Nova

If this is something we ever plan to do (there appear to be multiple
points of view) then it is something we need to prepare for to ease
the eventual transition. Some of these things include:

* Removing as many 'nova.*' packages as possible from the hierarchy of placement
  modu

Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Clint Byrum
Excerpts from Tony Breeds's message of 2016-12-02 15:26:40 +1100:
> On Thu, Dec 01, 2016 at 08:41:54AM -0800, Joshua Harlow wrote:
> > Keen, Joe wrote:
> > > I'll look into testing the newest version of kafka-python and see if it
> > > meets our needs.  If it still isn't stable and performant enough what are
> > > the available options?
> > 
> > Fix the kafka-python library or fix monasca; those seem to be the options to
> > me :)
> 
> Yup, Also worth including fix oslo.messaging to meet monasca's needs.  But
> *something* needs fixing.
>  
> > I'd also not like to block the rest of the world (from using newer versions
> > of kafka-python) during this as well. But then this may diverge/expand into
> > a discussion we had a few summits ago, about getting rid of
> > co-installability...
> 
> lalalalala not listening ;P
> 
> Less flippantly, there are a couple of ways to do this but IMO they're not in
> the best interest of OpenStack.
> 
> 1. vendor/fork python-kafka 0.X
> 2. Stop the proposal-bot from syncing with monasca, thereby allowing it to use
> python-kafka 0.X at the expense of co-installability.

Could there be a (3)?

 3. change the oslo driver to work with the currently pinned
 python-kafka version?

> 
> Fortunately either option is easy to reverse once the underlying issue is 
> fixed.
> 
> Yours Tony.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][placement][ironic] Progress on custom resource classes

2016-12-02 Thread Jay Pipes
Ironic colleagues, heads up, please read the below fully! I'd like your 
feedback on a couple outstanding questions.


tl;dr
-

Work for custom resource classes has been proceeding well this cycle, 
and we're at a point where reviews from the Ironic community and 
functional testing of a series of patches would be extremely helpful.


https://review.openstack.org/#/q/topic:bp/custom-resource-classes+status:open

History
---

As a brief reminder, in Newton, the Ironic community added a 
"resource_class" attribute to the primary Node object returned by the 
GET /nodes/{uuid} API call. This resource class attribute represents the 
"hardware profile" (for lack of a better term) of the Ironic baremetal node.


In Nova-land, we would like to stop tracking Ironic baremetal nodes as 
collections of vCPU, RAM, and disk space -- because an Ironic baremetal 
node is consumed atomically, not piecemeal like a hypervisor node is.
We'd like to have the scheduler search for an appropriate Ironic 
baremetal node using a simplified search that simply looks for a node that 
has a particular hardware profile [1] instead of searching for nodes 
that have a certain amount of VCPU, RAM, and disk space.


In addition to the scheduling and "boot request" alignment issues, we 
want to fix the reporting and accounting of resources in an OpenStack 
deployment containing Ironic. Currently, Nova reports an aggregate 
amount of CPU, RAM and disk space but doesn't understand that, when 
Ironic is in the mix, a significant chunk of that CPU, RAM and disk 
isn't "targetable" for virtual machines. We would much prefer to have 
resource reporting look like:


 48 vCPU total, 14 used
 204800 MB RAM total, 10240 used
 1340 GB disk total, 100 used
 250 baremetal profile "A" total, 120 used
 120 baremetal profile "B" total, 16 used

instead of mixing all the resources together.

Need review and functional testing on a few things
--

Now that the custom resource classes REST API endpoint is established 
[2] in the placement REST API, we are figuring out an appropriate way of 
migrating the existing inventory and allocation records for Ironic 
baremetal nodes from the "old-style" way of storing inventory for VCPU, 
MEMORY_MB and DISK_GB resources towards the "new-style" way of storing a 
single inventory record of amount 1 for the Ironic node's 
"resource_class" attribute.


The patch that does this online data migration (from within the 
nova-compute resource tracker) is here:


https://review.openstack.org/#/c/404472/

I'd really like to get some Ironic contributor eyeballs on that patch 
and provide me feedback on whether the logic in the 
_cleanup_ironic_legacy_allocations() method is sound.


There are still a couple things that need to be worked out:

1) Should the resource tracker auto-create custom resource classes in 
the placement REST API when it sees an Ironic node's resource_class 
attribute set to a non-NULL value and there is no record of such a 
resource class in the `GET /resource-classes` placement API call? My gut 
reaction to this is "yes, let's just do it", but I want to check with 
operators and Ironic devs first. The alternative is to ignore errors 
about "no such resource class exists", log a warning, and wait for an 
administrator to create the custom resource classes that match the 
distinct Ironic node resource classes that may exist in the deployment.
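
A rough sketch of what the "yes, let's just do it" option could look like from the
resource tracker side; the endpoints, payload and status codes here are assumptions
for illustration, not a confirmed placement API contract:

def ensure_resource_class(placement, name):
    """Create the custom resource class if the placement service lacks it.

    `placement` is assumed to be a keystoneauth-style session scoped to the
    placement endpoint; `name` is e.g. 'CUSTOM_BAREMETAL_GOLD'.
    """
    resp = placement.get('/resource_classes/%s' % name, raise_exc=False)
    if resp.status_code == 200:
        return  # already known, nothing to do
    resp = placement.post('/resource_classes', json={'name': name},
                          raise_exc=False)
    # 409 would mean another compute host raced us and created it first.
    if resp.status_code not in (200, 201, 204, 409):
        raise RuntimeError('Could not create resource class %s: %s'
                           % (name, resp.status_code))

The warn-and-wait alternative would simply log instead of issuing the POST.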


2) How are we going to modify the Nova baremetal flavors to specify that 
the flavor requires one resource where the resource is one of a set of 
custom resource classes? For example, let's say I have an Ironic 
installation with 10 different Ironic node hardware profiles. I've set 
all my Ironic node's resource_class attributes to match one of those 
hardware profiles. I now need to set up a Nova flavor that requests one 
of those ten hardware profiles. How do I do that? One solution might be 
to have a hacky flavor extra_spec called 
"ironic_resource_classes=CUSTOM_METAL_A,CUSTOM_METAL_B..."  or similar. 
When we construct the request_spec object that gets sent to the 
scheduler (and later the placement service), we could look for that 
extra_spec and construct a special request to the placement service that 
says "find me a resource provider that has a capacity of 1 for any of 
the following resource classes...". The flavor extra_specs thing is a 
total hack, admittedly, but flavors are the current mess that Nova has 
to specify requested resources and we need to work within that mess 
unfortunately...
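
To illustrate the hack, the scheduler-side handling of such an extra_spec might look
roughly like this when building the request to placement (the extra_spec key is just
the example from this mail, not an agreed interface):

def baremetal_classes_from_flavor(extra_specs):
    """Return the list of acceptable custom resource classes, if any."""
    raw = extra_specs.get('ironic_resource_classes', '')
    return [rc.strip() for rc in raw.split(',') if rc.strip()]


specs = {'ironic_resource_classes': 'CUSTOM_METAL_A,CUSTOM_METAL_B'}
print(baremetal_classes_from_flavor(specs))
# ['CUSTOM_METAL_A', 'CUSTOM_METAL_B'] -> ask placement for capacity 1 of any of these

Setting it on the flavor would presumably be the usual "openstack flavor set
<flavor> --property ironic_resource_classes=CUSTOM_METAL_A" incantation.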


The following patch series:

https://review.openstack.org/#/q/topic:bp/custom-resource-classes+status:open

contains all the outstanding patches for the custom resource classes 
work. Getting more eyeballs on these patches would be super. If you are 
an Ironic operator that has some time to play with the new code and 
offer feedback and testing, that would be super awesome. Please come 
find me, cdent, bauzas, dans

[openstack-dev] [tripleo] [ci]

2016-12-02 Thread Wesley Hayutin
Greetings,

I wanted to send a status update on the quickstart-based containerized
compute CI.

The work is here:
https://review.openstack.org/#/c/393348/

I had two passes on the morning of Nov 30 in a row, then later that day the
deployment started to fail due to the compute node losing its networking and
becoming unpingable. After poking around and talking to a few folks, it's
likely that we're hitting at least one of two possible bugs [1-2].

I am on PTO next week but will periodically check in and can easily retest
once these are resolved.

Thank you!

[1] https://bugs.launchpad.net/ironic/+bug/1646477
[2] https://bugs.launchpad.net/tripleo/+bug/1646897 just filed
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Sean Dague
+1

On 12/02/2016 10:22 AM, Matt Riedemann wrote:
> I'm proposing that we add Stephen Finucane to the nova-core team.
> Stephen has been involved with nova for at least around a year now,
> maybe longer, my ability to tell time in nova has gotten fuzzy over the
> years. Regardless, he's always been eager to contribute and over the
> last several months has done a lot of reviews, as can be seen here:
> 
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
> 
> http://stackalytics.com/report/contribution/nova/180
> 
> Stephen has been a main contributor and mover for the config option
> cleanup series these last few cycles, and he's a go-to person for a lot
> of the NFV/performance features in Nova like NUMA, CPU pinning, huge
> pages, etc.
> 
> I think Stephen does quality reviews, leaves thoughtful comments, knows
> when to hold a +1 for a patch that needs work, and when to hold a -1
> from a patch that just has some nits, and helps others in the project
> move their changes forward, which are all qualities I look for in a
> nova-core member.
> 
> I'd like to see Stephen get a bit more vocal / visible, but we all
> handle that differently and I think it's something Stephen can grow into
> the more involved he is.
> 
> So with all that said, I need a vote from the core team on this
> nomination. I honestly don't care to look up the rules too much on
> number of votes or timeline, I think it's pretty obvious once the
> replies roll in which way this goes.
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Kevin L. Mitchell
On Fri, 2016-12-02 at 09:22 -0600, Matt Riedemann wrote:
> I'm proposing that we add Stephen Finucane to the nova-core team. 

+1 from me.
-- 
Kevin L. Mitchell 


signature.asc
Description: This is a digitally signed message part
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Yolanda Robla Mota
Hi , Dmitry
That's what I didn't get very clear: whether all the deployment steps are 
pre-imaging, as that statement says, or whether every deploy step could be isolated and 
configured somehow.
I'm also a bit confused by that spec, because it mixes the concept of 
"deployment steps" with all the changes needed for runtime RAID. Could it be 
possible to separate it into two separate specs?

- Original Message -
From: "Dmitry Tantsur" 
To: openstack-dev@lists.openstack.org
Sent: Friday, December 2, 2016 3:51:30 PM
Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
parameters on local boot

On 12/02/2016 01:28 PM, Yolanda Robla Mota wrote:
> Hi Dmitry
>
> So we've been looking at that spec you suggested, but we are wondering if 
> that will be useful for our use case. As the text says:
>
> The ``ironic-python-agent`` project and ``agent`` driver will be adjusted to
> support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be able
> to declare deploy steps to run prior to disk imaging, and operators will be
> able to extend ``ironic-python-agent`` to add any custom step.
>
> Our needs are different, actually we need to create a deployment step after 
> imaging. We'd need a step that drops config into /etc/default/grub and 
> updates it. This is a post-imaging deploy step that modifies the base image. 
> Could ironic support this kind of step, if there is a base system to just 
> define per-user steps?

I thought that all deployment operations are converted to steps, with 
partitioning, writing the image, writing the configdrive and installing the 
boot 
loader being four default ones (as you see, two steps actually happen after the 
image is written).

>
> The idea we had in mind is:
> - from tripleo, add a property to each flavor that defines the boot 
> parameters:  openstack flavor set compute --property 
> os:kernel_boot_params='abc'
> - define an "ironic post-imaging deploy step" that will grab this property 
> from the flavor, drop it on /etc/default/grub and regenerate it
> - then on local boot, the proper kernel parameters will be applied
>
> What is your feedback there?
>
> - Original Message -
> From: "Dmitry Tantsur" 
> To: openstack-dev@lists.openstack.org
> Sent: Friday, December 2, 2016 12:44:29 PM
> Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
> parameters on local boot
>
> On 11/28/2016 04:46 PM, Jay Faulkner wrote:
>>
>>> On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota  wrote:
>>>
>>> Hi, good afternoon
>>>
>>> I wanted to start an email thread about how to properly setup kernel 
>>> parameters on local boot, for our overcloud images on TripleO.
>>> These parameters may vary depending on the needs of our end users, and even 
>>> can be different ( for different roles ) per deployment. As an example, we 
>>> need it for:
>>> - enable FIPS kernel in terms of security 
>>> (https://bugs.launchpad.net/tripleo/+bug/1640235)
>>> - enable functionality for DPDK/SR-IOV 
>>> (https://review.openstack.org/#/c/331564/)
>>> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
>>> - etc..
>>>
>>> So far, the solutions we got were on several directions:
>>>
>>> 1. Update the golden overcloud-full image with virt-customize, modifying 
>>> /etc/default/grub settings according to our needs: this is a manual 
>>> process, not really driven by TripleO. End users will want to avoid manual 
>>> steps as much as possible. Also if we announce that OpenStack ships 
>>> features in TripleO like DPDK, SR-IOV... doesn't make sense to tell end 
>>> users that if they want to consume that feature, they need to do manual 
>>> updates on the image. It shall be natively supported, or configurable per 
>>> TripleO environments.
>>>
>>> 2. Create our own images using diskimage-builder and custom elements: in 
>>> this case, we have the problem that the partners will lose support, as 
>>> building their own images is good for upstream, but not accepted into the 
>>> OSP environment. Also the combination of images needed can be huge, that 
>>> can be a blocker for QA.
>>>
>>> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
>>> properties can be set on metadata, like a json with kernel parameters. 
>>> Ironic will modify these kernel parameters when deploying the image (in a 
>>> similar way that when it installs bootloader, or generates partitions).
>>>
>>
>> This has been proposed before in ironic-specs 
>> (https://review.openstack.org/#/c/331564/) and was rejected, as it would 
>> require Ironic to reach out and modify image contents, which traditionally 
>> has been considered out of scope for Ironic. I would personally recommend 
>> #4, as post-boot automation is the safest way to configure node-specific 
>> options inside an image.
>
> I'm still a bit divided about our decision back then.. On one hand, this does
> seem somewhat out of scope. On the other, I quite understand why reboot is
> suboptimal. I wonder if the ong

Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Jay Pipes

+1

On 12/02/2016 10:22 AM, Matt Riedemann wrote:

I'm proposing that we add Stephen Finucane to the nova-core team.
Stephen has been involved with nova for at least around a year now,
maybe longer, my ability to tell time in nova has gotten fuzzy over the
years. Regardless, he's always been eager to contribute and over the
last several months has done a lot of reviews, as can be seen here:

https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com

http://stackalytics.com/report/contribution/nova/180

Stephen has been a main contributor and mover for the config option
cleanup series these last few cycles, and he's a go-to person for a lot
of the NFV/performance features in Nova like NUMA, CPU pinning, huge
pages, etc.

I think Stephen does quality reviews, leaves thoughtful comments, knows
when to hold a +1 for a patch that needs work, and when to hold a -1
from a patch that just has some nits, and helps others in the project
move their changes forward, which are all qualities I look for in a
nova-core member.

I'd like to see Stephen get a bit more vocal / visible, but we all
handle that differently and I think it's something Stephen can grow into
the more involved he is.

So with all that said, I need a vote from the core team on this
nomination. I honestly don't care to look up the rules too much on
number of votes or timeline, I think it's pretty obvious once the
replies roll in which way this goes.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Daniel P. Berrange
On Fri, Dec 02, 2016 at 09:22:54AM -0600, Matt Riedemann wrote:
> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer, my
> ability to tell time in nova has gotten fuzzy over the years. Regardless,
> he's always been eager to contribute and over the last several months has
> done a lot of reviews, as can be seen here:
> 
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
> 
> http://stackalytics.com/report/contribution/nova/180
> 
> Stephen has been a main contributor and mover for the config option cleanup
> series these last few cycles, and he's a go-to person for a lot of the
> NFV/performance features in Nova like NUMA, CPU pinning, huge pages, etc.
> 
> I think Stephen does quality reviews, leaves thoughtful comments, knows when
> to hold a +1 for a patch that needs work, and when to hold a -1 from a patch
> that just has some nits, and helps others in the project move their changes
> forward, which are all qualities I look for in a nova-core member.
> 
> I'd like to see Stephen get a bit more vocal / visible, but we all handle
> that differently and I think it's something Stephen can grow into the more
> involved he is.
> 
> So with all that said, I need a vote from the core team on this nomination.
> I honestly don't care to look up the rules too much on number of votes or
> timeline, I think it's pretty obvious once the replies roll in which way
> this goes.

+1


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Jay Faulkner

> On Dec 2, 2016, at 3:44 AM, Dmitry Tantsur  wrote:
> 
> On 11/28/2016 04:46 PM, Jay Faulkner wrote:
>> 
>>> On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota  wrote:
>>> 
>>> Hi, good afternoon
>>> 
>>> I wanted to start an email thread about how to properly setup kernel 
>>> parameters on local boot, for our overcloud images on TripleO.
>>> These parameters may vary depending on the needs of our end users, and even 
>>> can be different ( for different roles ) per deployment. As an example, we 
>>> need it for:
>>> - enable FIPS kernel in terms of security 
>>> (https://bugs.launchpad.net/tripleo/+bug/1640235)
>>> - enable functionality for DPDK/SR-IOV 
>>> (https://review.openstack.org/#/c/331564/)
>>> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
>>> - etc..
>>> 
>>> So far, the solutions we got were on several directions:
>>> 
>>> 1. Update the golden overcloud-full image with virt-customize, modifying 
>>> /etc/default/grub settings according to our needs: this is a manual 
>>> process, not really driven by TripleO. End users will want to avoid manual 
>>> steps as much as possible. Also if we announce that OpenStack ships 
>>> features in TripleO like DPDK, SR-IOV... doesn't make sense to tell end 
>>> users that if they want to consume that feature, they need to do manual 
>>> updates on the image. It shall be natively supported, or configurable per 
>>> TripleO environments.
>>> 
>>> 2. Create our own images using diskimage-builder and custom elements: in 
>>> this case, we have the problem that the partners will lose support, as 
>>> building their own images is good for upstream, but not accepted into the 
>>> OSP environment. Also the combination of images needed can be huge, that 
>>> can be a blocker for QA.
>>> 
>>> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
>>> properties can be set on metadata, like a json with kernel parameters. 
>>> Ironic will modify these kernel parameters when deploying the image (in a 
>>> similar way that when it installs bootloader, or generates partitions).
>>> 
>> 
>> This has been proposed before in ironic-specs 
>> (https://review.openstack.org/#/c/331564/) and was rejected, as it would 
>> require Ironic to reach out and modify image contents, which traditionally 
>> has been considered out of scope for Ironic. I would personally recommend 
>> #4, as post-boot automation is the safest way to configure node-specific 
>> options inside an image.
> 
> I'm still a bit divided about our decision back then.. On one hand, this does 
> seem somewhat out of scope. On the other, I quite understand why reboot is 
> suboptimal. I wonder if the ongoing deploy steps work will actually solve it 
> by allowing hardware managers to provide additional deploy steps.
> 

I’m not really of two minds on this at all. Modifying the filesystem directly 
would expose Ironic to a whole new world of complexity, including security 
issues, dealing with multiple incompatible filesystems, and the like. I’m 
obviously OK if anyone wants to use a customization point to do stuff that’d 
typically be outside of Ironic’s scope, but I don’t think this is a use case we 
should encourage.

The realm of configuring a machine beyond laying down the image has to lie in 
configuration management software, or else we open up to a huge scope increase 
and get away from our core mission.

-Jay


> Yolanda, you may want to check the spec 
> https://review.openstack.org/#/c/382091/ as it lays the foundation for the 
> deploy steps idea.
> 
>> 
>> Thanks,
>> Jay Faulkner
>> OSIC
>> 
>> 
>>> 4. Configure it post-deployment: there can be some puppet element that 
>>> updates kernel parameters. But it will need a node reboot to be applied, 
>>> and it's very far from being optimal and acceptable for the end users. 
>>> Reboots are slow, they can be a problem depending on the number of 
>>> nodes/hardware, and also the timing of reboot shall be totally controlled 
>>> (after all puppet has been applied properly).
>>> 
>>> 
>>> In the first three cases, we also hit the problem that TripleO only accepts 
>>> one single overcloud image for all deployments - there is no way to 
>>> instruct TripleO to upload and use several images, depending on the node 
>>> type (although Ironic supports it). Also, we are worried about upgrade 
>>> paths if we do image customizations. We need a clear way to move forward on 
>>> it.
>>> 
>>> So, we'd like to discuss the possible options there and the action items to 
>>> take (raise bugs, create some blueprints...). To summarize, our end goal is 
>>> the following:
>>> 
>>> - need to map overcloud-full images to roles
>>> - need to be done in an automated way, no manual steps enforced, and in a 
>>> way that can pass properly quality controls
>>> - reboots are sub-optimal
>>> 
>>> What are your thoughts there?
>>> 
>>> Best,
>>> 
>>> 
>>> Yolanda Robla
>>> yrobl...@redhat.com
>>> Principal Software Engineer - NFV Partner Engineer
>>> 

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Jeremy Stanley
On 2016-12-02 11:35:05 +0100 (+0100), Thierry Carrez wrote:
[...]
> So I'm now wondering how much that artificial scarcity policy is hurting
> us more than it helps us. I'm still convinced it's very valuable to have
> a number of "meetings rooms" that you can lurk in and be available for
> pings, without having to join hundreds of channels where meetings might
> happen. But I'm not sure anymore that maintaining an artificial scarcity
> is helpful in limiting conflicts, and I can definitely see that it
> pushes some meetings away from the meeting channels, defeating their
> main purpose.
[...]

As someone who frequently gets pinged in random teams' meetings as
well as attending many regularly over the course of a week, I find
having them spread out as much as possible to be helpful to me, at
least. If everyone pings me at the same time because they're all
holding meetings in conflicting timeslots in many channels, I'll
probably just have to start scheduling "office hours" instead and
telling people to arrange any in-meeting input from me well in
advance.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Sylvain Bauza


Le 02/12/2016 16:22, Matt Riedemann a écrit :
> I'm proposing that we add Stephen Finucane to the nova-core team.
> Stephen has been involved with nova for at least around a year now,
> maybe longer, my ability to tell time in nova has gotten fuzzy over the
> years. Regardless, he's always been eager to contribute and over the
> last several months has done a lot of reviews, as can be seen here:
> 
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
> 
> http://stackalytics.com/report/contribution/nova/180
> 
> Stephen has been a main contributor and mover for the config option
> cleanup series these last few cycles, and he's a go-to person for a lot
> of the NFV/performance features in Nova like NUMA, CPU pinning, huge
> pages, etc.
> 
> I think Stephen does quality reviews, leaves thoughtful comments, knows
> when to hold a +1 for a patch that needs work, and when to hold a -1
> from a patch that just has some nits, and helps others in the project
> move their changes forward, which are all qualities I look for in a
> nova-core member.
> 
> I'd like to see Stephen get a bit more vocal / visible, but we all
> handle that differently and I think it's something Stephen can grow into
> the more involved he is.
> 
> So with all that said, I need a vote from the core team on this
> nomination. I honestly don't care to look up the rules too much on
> number of votes or timeline, I think it's pretty obvious once the
> replies roll in which way this goes.
> 


+1
Stephen did great work helping Nova, in particular on what we call
"performance VMs", and I look forward to seeing him increasing his
contributions to Nova.

-S

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stepping down from core

2016-12-02 Thread Martin Hickey
Hi Henry,
 
It was a pleasure to work with you in Neutron. You were always a great help.
Wishing you the best in your future adventures.
Regards,
Martin
 
 
- Original message -
From: Henry Gessau
To: OpenStack Development Mailing List
Cc:
Subject: [openstack-dev] [Neutron] Stepping down from core
Date: Thu, Dec 1, 2016 10:51 PM

I've already communicated this in the neutron meeting and in some neutron
policy patches, but yesterday the PTL actually updated the gerrit ACLs so I
thought I'd drop a note here too.

My work situation has changed and leaves me little time to keep up with my
duties as core reviewer, DB lieutenant, and drivers team member.

Working with the diverse and very talented contributors to Neutron has been
the best experience of my career (which started before many of you were born).
Thank you all for making the team such a great community. Because of you the
project is thriving and will continue to be successful!

I will still be around on IRC, contribute some small patches here and there,
and generally try to keep abreast of Neutron's progress. Don't hesitate to
ping me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Matt Riedemann
I'm proposing that we add Stephen Finucane to the nova-core team. 
Stephen has been involved with nova for at least around a year now, 
maybe longer, my ability to tell time in nova has gotten fuzzy over the 
years. Regardless, he's always been eager to contribute and over the 
last several months has done a lot of reviews, as can be seen here:


https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com

http://stackalytics.com/report/contribution/nova/180

Stephen has been a main contributor and mover for the config option 
cleanup series these last few cycles, and he's a go-to person for a lot 
of the NFV/performance features in Nova like NUMA, CPU pinning, huge 
pages, etc.


I think Stephen does quality reviews, leaves thoughtful comments, knows 
when to hold a +1 for a patch that needs work, and when to hold a -1 
from a patch that just has some nits, and helps others in the project 
move their changes forward, which are all qualities I look for in a 
nova-core member.


I'd like to see Stephen get a bit more vocal / visible, but we all 
handle that differently and I think it's something Stephen can grow into 
the more involved he is.


So with all that said, I need a vote from the core team on this 
nomination. I honestly don't care to look up the rules too much on 
number of votes or timeline, I think it's pretty obvious once the 
replies roll in which way this goes.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-02 Thread Giulio Fidente

On 12/01/2016 11:26 PM, Emilien Macchi wrote:

Team,

Alex Schultz (mwhahaha on IRC) has been active on TripleO for a few
months now.  While he's very active in different areas of TripleO, his
reviews and contributions on puppet-tripleo have been very useful.
Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
think he perfectly understands how puppet-tripleo works. His
involvement in the project and contributions on puppet-tripleo deserve
that we allow him to +2 puppet-tripleo.

Thanks Alex for your involvement and hard work in the project, this is
very appreciated!


+1 !


--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stepping down from core

2016-12-02 Thread Anita Kuno

On 2016-12-01 05:51 PM, Henry Gessau wrote:

I've already communicated this in the neutron meeting and in some neutron
policy patches, but yesterday the PTL actually updated the gerrit ACLs so I
thought I'd drop a note here too.

My work situation has changed and leaves me little time to keep up with my
duties as core reviewer, DB lieutenant, and drivers team member.

Working with the diverse and very talented contributors to Neutron has been
the best experience of my career (which started before many of you were born).
Thank you all for making the team such a great community. Because of you the
project is thriving and will continue to be successful!

I will still be around on IRC, contribute some small patches here and there,
and generally try to keep abreast of Neutron's progress. Don't hesitate to
ping me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Thank you Henry, it has been my pleasure to work with you.

Good wishes go with you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Dmitry Tantsur

On 12/02/2016 01:28 PM, Yolanda Robla Mota wrote:

Hi Dmitry

So we've been looking at that spec you suggested, but we are wondering if that 
will be useful for our use case. As the text says:

The ``ironic-python-agent`` project and ``agent`` driver will be adjusted to
support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be able
to declare deploy steps to run prior to disk imaging, and operators will be
able to extend ``ironic-python-agent`` to add any custom step.

Our needs are different, actually we need to create a deployment step after 
imaging. We'd need a step that drops config into /etc/default/grub and updates 
it. This is a post-imaging deploy step that modifies the base image. Could 
ironic support this kind of step, if there is a base system to just define 
per-user steps?


I thought that all deployment operations are converted to steps, with 
partitioning, writing the image, writing the configdrive and installing the boot 
loader being four default ones (as you see, two steps actually happen after the 
image is written).




The idea we had in mind is:
- from tripleo, add a property to each flavor that defines the boot 
parameters:  openstack flavor set compute --property os:kernel_boot_params='abc'
- define an "ironic post-imaging deploy step" that will grab this property from 
the flavor, drop it on /etc/default/grub and regenerate it
- then on local boot, the proper kernel parameters will be applied

What is your feedback there?

- Original Message -
From: "Dmitry Tantsur" 
To: openstack-dev@lists.openstack.org
Sent: Friday, December 2, 2016 12:44:29 PM
Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
parameters on local boot

On 11/28/2016 04:46 PM, Jay Faulkner wrote:



On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota  wrote:

Hi, good afternoon

I wanted to start an email thread about how to properly setup kernel parameters 
on local boot, for our overcloud images on TripleO.
These parameters may vary depending on the needs of our end users, and even can 
be different ( for different roles ) per deployment. As an example, we need it 
for:
- enable FIPS kernel in terms of security 
(https://bugs.launchpad.net/tripleo/+bug/1640235)
- enable functionality for DPDK/SR-IOV 
(https://review.openstack.org/#/c/331564/)
- enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
- etc..

So far, the solutions we got were on several directions:

1. Update the golden overcloud-full image with virt-customize, modifying 
/etc/default/grub settings according to our needs: this is a manual process, 
not really driven by TripleO. End users will want to avoid manual steps as much 
as possible. Also if we announce that OpenStack ships features in TripleO like 
DPDK, SR-IOV... doesn't make sense to tell end users that if they want to 
consume that feature, they need to do manual updates on the image. It shall be 
natively supported, or configurable per TripleO environments.

2. Create our own images using diskimage-builder and custom elements: in this 
case, we have the problem that the partners will lose support, as building 
their own images is good for upstream, but not accepted into the OSP 
environment. Also the combination of images needed can be huge, that can be a 
blocker for QA.

3. Add Ironic support for it. Images can be uploaded to glance, and some 
properties can be set on metadata, like a json with kernel parameters. Ironic 
will modify these kernel parameters when deploying the image (in a similar way 
that when it installs bootloader, or generates partitions).



This has been proposed before in ironic-specs 
(https://review.openstack.org/#/c/331564/) and was rejected, as it would 
require Ironic to reach out and modify image contents, which traditionally has 
been considered out of scope for Ironic. I would personally recommend #4, as 
post-boot automation is the safest way to configure node-specific options 
inside an image.


I'm still a bit divided about our decision back then.. On one hand, this does
seem somewhat out of scope. On the other, I quite understand why reboot is
suboptimal. I wonder if the ongoing deploy steps work will actually solve it by
allowing hardware managers to provide additional deploy steps.

Yolanda, you may want to check the spec https://review.openstack.org/#/c/382091/
as it lays the foundation for the deploy steps idea.



Thanks,
Jay Faulkner
OSIC



4. Configure it post-deployment: there can be some puppet element that updates 
kernel parameters. But it will need a node reboot to be applied, and it's very 
far from being optimal and acceptable for the end users. Reboots are slow, they 
can be a problem depending on the number of nodes/hardware, and also the timing 
of reboot shall be totally controlled (after all puppet has been applied 
properly).


In the first three cases, we also hit the problem that TripleO only accepts one 
single overcloud image for all deployments - there is no way to instru

Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Yolanda Robla Mota
Yes, we are talking about physical hardware and large clusters. Also in a very 
demanding environment.
So reboots are really suboptimal in that case.

- Original Message -
From: "Alex Schultz" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Friday, December 2, 2016 3:38:53 PM
Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
parameters on local boot

On Fri, Dec 2, 2016 at 7:29 AM, Oliver Walsh  wrote:
> Hi Yolanda,
>
> I've the same requirements for a real-time compute role.
>
> I'm currently using #4. Puppet sets the kernel cmdline in
> /etc/default/grub, then touches a file that triggers a reboot in an
> os-refresh-config post-configure script.
>
> How much of an issue is the reboot? In my dev env it doesn't make a
> significant difference to the overall deployment time of a node.
>

If this is physical hardware, the reboot can add 5-10+ minutes
depending on the vendor.  VM's aren't a good representation of the
actual cost of a reboot.

Thanks,
-Alex

> I'm thinking we could use #4 as the general solution. In cases where a
> reboot is too expensive then #1/#2/#3 could be used to prime the image
> with the same /etc/default/grub that puppet would create. As puppet
> doesn't change it, it doesn't trigger a reboot.
>
>
> Re custom images. Support in python-tripleoclient could be improved
> but for now this is what I'm doing:
>
> Uploading a custom image:
> OS_IMAGE=custom-overcloud-full.qcow2 openstack overcloud image upload
>
> Set the Image for the custom role in param_defaults:
> parameter_defaults:
>   CustomRoleImage: custom-overcloud-full
>
> Thanks,
> Ollie
>
> On 28 November 2016 at 15:36, Yolanda Robla Mota  wrote:
>> Hi, good afternoon
>>
>> I wanted to start an email thread about how to properly setup kernel 
>> parameters on local boot, for our overcloud images on TripleO.
>> These parameters may vary depending on the needs of our end users, and even 
>> can be different ( for different roles ) per deployment. As an example, we 
>> need it for:
>> - enable FIPS kernel in terms of security 
>> (https://bugs.launchpad.net/tripleo/+bug/1640235)
>> - enable functionality for DPDK/SR-IOV 
>> (https://review.openstack.org/#/c/331564/)
>> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
>> - etc..
>>
>> So far, the solutions we got were on several directions:
>>
>> 1. Update the golden overcloud-full image with virt-customize, modifying 
>> /etc/default/grub settings according to our needs: this is a manual process, 
>> not really driven by TripleO. End users will want to avoid manual steps as 
>> much as possible. Also if we announce that OpenStack ships features in 
>> TripleO like DPDK, SR-IOV... doesn't make sense to tell end users that if 
>> they want to consume that feature, they need to do manual updates on the 
>> image. It shall be natively supported, or configurable per TripleO 
>> environments.
>>
>> 2. Create our own images using diskimage-builder and custom elements: in 
>> this case, we have the problem that the partners will lose support, as 
>> building their own images is good for upstream, but not accepted into the 
>> OSP environment. Also the combination of images needed can be huge, that can 
>> be a blocker for QA.
>>
>> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
>> properties can be set on metadata, like a json with kernel parameters. 
>> Ironic will modify these kernel parameters when deploying the image (in a 
>> similar way that when it installs bootloader, or generates partitions).
>>
>> 4. Configure it post-deployment: there can be some puppet element that 
>> updates kernel parameters. But it will need a node reboot to be applied, and 
>> it's very far from being optimal and acceptable for the end users. Reboots 
>> are slow, they can be a problem depending on the number of nodes/hardware, 
>> and also the timing of reboot shall be totally controlled (after all puppet 
>> has been applied properly).
>>
>>
>> In the first three cases, we also hit the problem that TripleO only accepts 
>> one single overcloud image for all deployments - there is no way to instruct 
>> TripleO to upload and use several images, depending on the node type 
>> (although Ironic supports it). Also, we are worried about upgrade paths if 
>> we do image customizations. We need a clear way to move forward on it.
>>
>> So, we'd like to discuss the possible options there and the action items to 
>> take (raise bugs, create some blueprints...). To summarize, our end goal is 
>> the following:
>>
>> - need to map overcloud-full images to roles
>> - need to be done in an automated way, no manual steps enforced, and in a 
>> way that can pass properly quality controls
>> - reboots are sub-optimal
>>
>> What are your thoughts there?
>>
>> Best,
>>
>>
>> Yolanda Robla
>> yrobl...@redhat.com
>> Principal Software Engineer - NFV Partner Engineer
>>
>>
>> ___

Re: [openstack-dev] [Neutron] Stepping down from core

2016-12-02 Thread Morales, Victor
Henry, it has been a pleasure to have been working with you and thanks for 
supporting this community and helping us to get involved quickly.  Best wishes 
for your new adventure.

Thanks
Victor Morales

PS:  Hopefully your LinkedIn photo was not taken a couple of days before you started 
working in Neutron :P




On 12/1/16, 4:51 PM, "Henry Gessau"  wrote:

>I've already communicated this in the neutron meeting and in some neutron
>policy patches, but yesterday the PTL actually updated the gerrit ACLs so I
>thought I'd drop a note here too.
>
>My work situation has changed and leaves me little time to keep up with my
>duties as core reviewer, DB lieutenant, and drivers team member.
>
>Working with the diverse and very talented contributors to Neutron has been
>the best experience of my career (which started before many of you were born).
>Thank you all for making the team such a great community. Because of you the
>project is thriving and will continue to be successful!
>
>I will still be around on IRC, contribute some small patches here and there,
>and generally try to keep abreast of Neutron's progress. Don't hesitate to
>ping me.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Oliver Walsh
Hi Yolanda,

I've the same requirements for a real-time compute role.

I'm currently using #4. Puppet sets the kernel cmdline in
/etc/default/grub, then touches a file that triggers a reboot in an
os-refresh-config post-configure script.
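
For illustration, a rough sketch of what such a post-configure script could
look like (the trigger file path here is made up, not necessarily what I use):

#!/usr/bin/env python
# os-refresh-config post-configure.d sketch: if puppet touched the trigger
# file, regenerate the grub config and reboot the node once.
import os
import subprocess

TRIGGER = '/var/lib/tripleo/reboot-required'  # illustrative path

if os.path.exists(TRIGGER):
    subprocess.check_call(['grub2-mkconfig', '-o', '/boot/grub2/grub.cfg'])
    os.remove(TRIGGER)
    subprocess.check_call(['reboot'])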

How much of an issue is the reboot? In my dev env it doesn't make a
significant difference to the overall deployment time of a node.

I'm thinking we could use #4 as the general solution. In cases where a
reboot is too expensive, #1/#2/#3 could be used to prime the image
with the same /etc/default/grub that puppet would create. As puppet
then doesn't change it, it doesn't trigger a reboot.


Re custom images. Support in python-tripleoclient could be improved
but for now this is what I'm doing:

Uploading a custom image:
OS_IMAGE=custom-overcloud-full.qcow2 openstack overcloud image upload

Set the Image for the custom role in param_defaults:
parameter_defaults:
  CustomRoleImage: custom-overcloud-full

Thanks,
Ollie

On 28 November 2016 at 15:36, Yolanda Robla Mota  wrote:
> Hi, good afternoon
>
> I wanted to start an email thread about how to properly set up kernel 
> parameters on local boot, for our overcloud images on TripleO.
> These parameters may vary depending on the needs of our end users, and even 
> can be different ( for different roles ) per deployment. As an example, we 
> need it for:
> - enable FIPS kernel in terms of security 
> (https://bugs.launchpad.net/tripleo/+bug/1640235)
> - enable functionality for DPDK/SR-IOV 
> (https://review.openstack.org/#/c/331564/)
> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
> - etc..
>
> So far, the solutions we got were on several directions:
>
> 1. Update the golden overcloud-full image with virt-customize, modifying 
> /etc/default/grub settings according to our needs: this is a manual process, 
> not really driven by TripleO. End users will want to avoid manual steps as 
> much as possible. Also, if we announce that OpenStack ships features in 
> TripleO like DPDK, SR-IOV..., it doesn't make sense to tell end users that if 
> they want to consume that feature, they need to do manual updates on the 
> image. It shall be natively supported, or configurable per TripleO 
> environments.
>
> 2. Create our own images using diskimage-builder and custom elements: in this 
> case, we have the problem that the partners will lose support, as building 
> their own images is good for upstream, but not accepted into the OSP 
> environment. Also, the combination of images needed can be huge, which can be a 
> blocker for QA.
>
> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
> properties can be set on metadata, like a json with kernel parameters. Ironic 
> will modify these kernel parameters when deploying the image (in a similar 
> way to how it installs the bootloader or generates partitions).
>
> 4. Configure it post-deployment: there can be some puppet element that 
> updates kernel parameters. But it will need a node reboot to be applied, and 
> it's very far from being optimal and acceptable for the end users. Reboots 
> are slow, they can be a problem depending on the number of nodes/hardware, 
> and also the timing of reboot shall be totally controlled (after all puppet 
> has been applied properly).
>
>
> In the first three cases, we also hit the problem that TripleO only accepts 
> one single overcloud image for all deployments - there is no way to instruct 
> TripleO to upload and use several images, depending on the node type 
> (although Ironic supports it). Also, we are worried about upgrade paths if we 
> do image customizations. We need a clear way to move forward on it.
>
> So, we'd like to discuss the possible options there and the action items to 
> take (raise bugs, create some blueprints...). To summarize, our end goal is 
> the following:
>
> - need to map overcloud-full images to roles
> - need to be done in an automated way, no manual steps enforced, and in a way 
> that can properly pass quality controls
> - reboots are sub-optimal
>
> What are your thoughts there?
>
> Best,
>
>
> Yolanda Robla
> yrobl...@redhat.com
> Principal Software Engineer - NFV Partner Engineer
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Alex Schultz
On Fri, Dec 2, 2016 at 7:29 AM, Oliver Walsh  wrote:
> Hi Yolanda,
>
> I've the same requirements for a real-time compute role.
>
> I'm currently using #4. Puppet sets the kernel cmdline in
> /etc/default/grub, then touches a file that triggers a reboot in an
> os-refresh-config post-configure script.
>
> How much of an issue is the reboot? In my dev env it doesn't make a
> significant difference to the overall deployment time of a node.
>

If this is physical hardware, the reboot can add 5-10+ minutes
depending on the vendor.  VM's aren't a good representation of the
actual cost of a reboot.

Thanks,
-Alex

> I'm thinking we could use #4 as the general solution. In cases where a
> reboot is too expensive, #1/#2/#3 could be used to prime the image
> with the same /etc/default/grub that puppet would create. As puppet
> then doesn't change it, it doesn't trigger a reboot.
>
>
> Re custom images. Support in python-tripleoclient could be improved
> but for now this is what I'm doing:
>
> Uploading a custom image:
> OS_IMAGE=custom-overcloud-full.qcow2 openstack overcloud image upload
>
> Set the Image for the custom role in param_defaults:
> parameter_defaults:
>   CustomRoleImage: custom-overcloud-full
>
> Thanks,
> Ollie
>
> On 28 November 2016 at 15:36, Yolanda Robla Mota  wrote:
>> Hi, good afternoon
>>
>> I wanted to start an email thread about how to properly set up kernel 
>> parameters on local boot, for our overcloud images on TripleO.
>> These parameters may vary depending on the needs of our end users, and even 
>> can be different ( for different roles ) per deployment. As an example, we 
>> need it for:
>> - enable FIPS kernel in terms of security 
>> (https://bugs.launchpad.net/tripleo/+bug/1640235)
>> - enable functionality for DPDK/SR-IOV 
>> (https://review.openstack.org/#/c/331564/)
>> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
>> - etc..
>>
>> So far, the solutions we got were on several directions:
>>
>> 1. Update the golden overcloud-full image with virt-customize, modifying 
>> /etc/default/grub settings according to our needs: this is a manual process, 
>> not really driven by TripleO. End users will want to avoid manual steps as 
>> much as possible. Also, if we announce that OpenStack ships features in 
>> TripleO like DPDK, SR-IOV..., it doesn't make sense to tell end users that if 
>> they want to consume that feature, they need to do manual updates on the 
>> image. It shall be natively supported, or configurable per TripleO 
>> environments.
>>
>> 2. Create our own images using diskimage-builder and custom elements: in 
>> this case, we have the problem that the partners will lose support, as 
>> building their own images is good for upstream, but not accepted into the 
>> OSP environment. Also, the combination of images needed can be huge, which can 
>> be a blocker for QA.
>>
>> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
>> properties can be set on metadata, like a json with kernel parameters. 
>> Ironic will modify these kernel parameters when deploying the image (in a 
>> similar way to how it installs the bootloader or generates partitions).
>>
>> 4. Configure it post-deployment: there can be some puppet element that 
>> updates kernel parameters. But it will need a node reboot to be applied, and 
>> it's very far from being optimal and acceptable for the end users. Reboots 
>> are slow, they can be a problem depending on the number of nodes/hardware, 
>> and also the timing of reboot shall be totally controlled (after all puppet 
>> has been applied properly).
>>
>>
>> In the first three cases, we also hit the problem that TripleO only accepts 
>> one single overcloud image for all deployments - there is no way to instruct 
>> TripleO to upload and use several images, depending on the node type 
>> (although Ironic supports it). Also, we are worried about upgrade paths if 
>> we do image customizations. We need a clear way to move forward on it.
>>
>> So, we'd like to discuss the possible options there and the action items to 
>> take (raise bugs, create some blueprints...). To summarize, our end goal is 
>> the following:
>>
>> - need to map overcloud-full images to roles
>> - need to be done in an automated way, no manual steps enforced, and in a 
>> way that can properly pass quality controls
>> - reboots are sub-optimal
>>
>> What are your thoughts there?
>>
>> Best,
>>
>>
>> Yolanda Robla
>> yrobl...@redhat.com
>> Principal Software Engineer - NFV Partner Engineer
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.op

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Amrith Kumar
Thierry, when we were adding the #openstack-swg group, we had this
conversation and I observed that my own preference would be for a project's
meetings to be in that project's room. It makes it easier to then search for
logs for something (say SWG related) in the SWG room, and I do this
regularly for Trove, but I have to store text logs of the trove meetings (in
#openstack-meeting-alt) with the logs of the trove room #openstack-trove.

While I understand the simplicity of just hanging around in four or five
conference rooms and being available for pings, I submit to you that if
someone wants to ping you and you are not in that project's room, they know
where to go and find you if you are a person who hangs around.

So I submit to you that rather than creating #openstack-meeting-5, let's
outlaw the meeting rooms altogether and allow projects to meet in their own
rooms. People who are interested in projects can hang out in those rooms
(which people do quite a bit anyway), and others can just hang out in
#openstack, #openstack-dev or #openstack-infra.

-amrith

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Friday, December 2, 2016 7:52 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] Creating a new IRC meeting room ?
> 
> Daniel P. Berrange wrote:
> > Do we have any real data on just how many contributors really do lurk
> > in the meeting rooms permanently, as opposed to merely joining rooms
> > at start of the meeting & leaving immediately thereafter ?
> 
> There are currently 488 permanent residents on #openstack-meeting, 270 on
> #openstack-meeting-4 (while no meeting is going on). So I'd say that most
> people stay around permanently.
> 
> > Likewise, any data on how many contributors actively participate in
> > meetings across different projects, vs siloed just in their own
> > project?
> 
> That is harder to get numbers on.
> 
> > If the latter is in the clear majority, then you might as well just
> > have #openstack-meeting-$PROJECT and thus mostly avoid the problem of
> > conflicting demands for a limited set of channels.
> 
> Since there are between 300 and 500 people who find it interesting to lurk in
> meeting channels, I'm pretty sure that would be a bad choice...
> 
> --
> Thierry Carrez (ttx)
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-02 Thread Ryan Brady
On Thu, Dec 1, 2016 at 5:26 PM, Emilien Macchi  wrote:

> Team,
>
> Alex Schultz (mwhahaha on IRC) has been active on TripleO for a few
> months now.  While he's very active in different areas of TripleO, his
> reviews and contributions on puppet-tripleo have been very useful.
> Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
> think he perfectly understands how puppet-tripleo works. His
> involvement in the project and contributions on puppet-tripleo deserve
> that we allow him to +2 puppet-tripleo.
>
> Thanks Alex for your involvement and hard work in the project, this is
> very appreciated!
>
> As usual, I'll let the team vote on this proposal.
>

+1


>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ryan Brady
Cloud Engineering
rbr...@redhat.com
919.890.8925
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila]: Fwd: [Nfs-ganesha-devel] Xenial PPA packages for FSALs?

2016-12-02 Thread Ramana Raja
- Forwarded Message -
> On Friday, December 2, 2016 at 6:08 PM, Kaleb S. KEITHLEY
>  wrote:
> > Hi,
> > 
> > fsal-vfs is in the nfs-ganesha-fsal .deb along with all the other FSALs.
> 
> Ah! I missed this. I see it now [2].
> 
> > 
> > I'm not aware of any compatible builds of Ceph in Launchpad PPAs that
> > could be used to build fsal-ceph. Same goes for fsal-rgw.
> 
> OK.
> Thanks, Kaleb!
> 
> -Ramana
> 
> [2] $ dpkg -L nfs-ganesha-fsal
> /.
> /usr
> /usr/lib
> /usr/lib/x86_64-linux-gnu
> /usr/lib/x86_64-linux-gnu/ganesha
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalproxy.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalnull.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgpfs.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalvfs.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalxfs.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgluster.so.4.2.0
> /usr/share
> /usr/share/doc
> /usr/share/doc/nfs-ganesha-fsal
> /usr/share/doc/nfs-ganesha-fsal/copyright
> /usr/share/doc/nfs-ganesha-fsal/changelog.Debian.gz
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalproxy.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalxfs.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalnull.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalxfs.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgpfs.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalproxy.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgpfs.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalvfs.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgluster.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgluster.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalnull.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalvfs.so
> 
> > 
> > 
> > On 12/02/2016 06:39 AM, Ramana Raja wrote:
> > > Hi,
> > >
> > > It'd be useful to have nfs-ganesha-vfs and nfs-ganesha-ceph
> > > packages for Xenial like those available for Fedora 24. Has
> > > anybody already built or is planning on building Xenial PPA
> > > packages for FSAL_CEPH and FSAL_VFS? I only see the nfs-ganesha
> > > Xenial package [1] here,
> > > https://launchpad.net/~gluster/+archive/ubuntu/nfs-ganesha
> > > which doesn't install the FSAL shared libraries that I'm interested
> > > in.
> > >
> > > I'm especially interested in FSAL_CEPH, and FSAL_VFS as they
> > > would soon be used in OpenStack Manila, File Systems
> > > as a Service project, to export NFS shares to OpenStack clients.
> > > To test such use-cases/setups in OpenStack's upstream CI, the
> > > OpenStack services + Ganesha + Storage backend would all be
> > > installed and run in a Xenial VM with ~8G RAM. Scripting
> > > the CI's  installation phase would be much simpler if the FSAL
> > > packages for CephFS and VFS were available.
> > >
> > > Thanks,
> > > Ramana
> > >
> > > [1] Files installed with nfs-ganesha Xenial PPA,
> > > $ dpkg-query -L  nfs-ganesha
> > > /.
> > > /lib
> > > /lib/systemd
> > > /lib/systemd/system
> > > /lib/systemd/system/nfs-ganesha-config.service
> > > /lib/systemd/system/nfs-ganesha-lock.service
> > > /lib/systemd/system/nfs-ganesha-config.service-in.cmake
> > > /lib/systemd/system/nfs-ganesha.service
> > > /etc
> > > /etc/defaults
> > > /etc/defaults/nfs-ganesha
> > > /etc/logrotate.d
> > > /etc/logrotate.d/nfs-ganesha
> > > /etc/ganesha
> > > /etc/ganesha/ganesha.conf
> > > /etc/dbus-1
> > > /etc/dbus-1/system.d
> > > /etc/dbus-1/system.d/nfs-ganesha-dbus.conf
> > > /usr
> > > /usr/include
> > > /usr/sbin
> > > /usr/lib
> > > /usr/lib/pkgconfig
> > > /usr/share
> > > /usr/share/doc
> > > /usr/share/doc/nfs-ganesha
> > > /usr/share/doc/nfs-ganesha/copyright
> > > /usr/share/doc/nfs-ganesha/changelog.Debian.gz
> > > /usr/bin
> > > /usr/bin/ganesha.nfsd
> > >
> > > --
> > > Check out the vibrant tech community on one of the world's most
> > > engaging tech sites, SlashDot.org! http://sdm.link/slashdot
> > > ___
> > > Nfs-ganesha-devel mailing list
> > > nfs-ganesha-de...@lists.sourceforge.net
> > > https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
> > >
> > 
> > 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [openstack-health] Avoid showing non-official projects failure ratios

2016-12-02 Thread Masayuki Igawa
Hi,

On Fri, Dec 2, 2016 at 6:29 PM, Andreas Jaeger  wrote:
> On 12/02/2016 10:03 AM, Thierry Carrez wrote:
>> Ken'ichi Ohmichi wrote:
>>> Hi QA-team,
>>>
>>> In the big-tent policy, we continue creating new projects.
>>> On the other hand, some projects became non-active.
>>> That seems natural thing.
>>>
>>> Now openstack-health[1] shows a non-active project as 100% failure ratio
>>> on "Project Status".
>>> The project became non-official since
>>> https://review.openstack.org/#/c/324412/
>>> So I feel it would be nice to have a black-list or something to make it
>>> disappear from the dashboard for concentrating on active projects'
>>> failures.
>>>
>>> Any thoughts?
>>
>> Yes, I totally agree we should only list active official projects in
>> there, otherwise long-dead things like Cue will make the view look bad.
>> Looks like the system adds new ones but does not remove anything ? It
>> should probably take its list from [1].
>>
>> [1]
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
>
> Is cue completely dead? Should we then retire it completely following
> http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project ?
>
> It still has jobs setup and I see people submitting typo fixes etc.

I'm not sure whether cue is dead or not. But I think we should fix the
failure of the job or remove the periodic jobs. Otherwise, the jobs
just waste the resources of the OpenStack infra.

But we should probably have a filter feature like the 'Project Type' in
stackalytics. I think it would be useful for openstack-health
users.

Best Regards,
-- Masayuki Igawa

>
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pike PTL

2016-12-02 Thread Samuel de Medeiros Queiroz
Hey Steve,

Thanks for all your dedication, you've been a great leader!
It's been a pleasure to serve keystone with you as PTL.

Samuel

On Tue, Nov 29, 2016 at 12:19 PM, Brad Topol  wrote:

> +1! Great job Steve
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
> From: Henry Nash 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 11/23/2016 11:08 AM
> Subject: Re: [openstack-dev] [keystone] Pike PTL
> --
>
>
>
> Steve,
>
> It’s been a pleasure working with you as PTL - an excellent tenure. Enjoy
> taking some time back!
>
> Henry
>
>On 21 Nov 2016, at 19:38, Steve Martinelli  wrote:
>
>   one of these days i'll learn how to spell :)
>
>   On Mon, Nov 21, 2016 at 12:52 PM, Steve Martinelli  wrote:
>  Keystoners,
>
>  I do not intend to run for the PTL position of the Pike
>  development cycle. I'm sending this out early so I can work with 
> folks
>  interested in the role, If you intend to run for PTL in Pike and are
>  interested in learning the ropes (or just want to hear more about 
> what the
>  role means) then shoot me an email.
>
>  It's been an unforgettable ride. Being PTL a is very rewarding
>  experience, I encourage anyone interested to put your name forward. 
> I'm not
>  going away from OpenStack, I just think three terms as PTL has been 
> enough.
>  It'll be nice to have my evenings back :)
>
>  To *all* the keystone contributors (cores and non-cores), thank
>  you for all your time and commitment. More importantly thank you for
>  putting up with my many questions, pings, pokes and -1s. Each of you 
> are
>  amazing and together you make an awesome team. It has been an 
> absolute
>  pleasure to serve as PTL, thank you for letting me do so.
>
>  stevemar
>
>
>
>  Thanks for the idea Lana [1]
>  [1] http://lists.openstack.org/pipermail/openstack-docs/2016-November/009357.html
>
>   
>   __
>   OpenStack Development Mailing List (not for usage questions)
>   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>   
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] OpenStack Ocata B1 for Ubuntu 16.04 LTS

2016-12-02 Thread Corey Bryant
On Thu, Dec 1, 2016 at 9:31 PM, Jeffrey Zhang 
wrote:

> Cool. And Kolla has upgraded the ubuntu repo to b1 in the master branch.
>
> btw, is there any way or help doc for proposing a new package into
> ubuntu-cloud-archive? Like kolla.
>
>
>
Hi Jeffrey,

There's currently no kolla package in debian or ubuntu so it would need to
be started from scratch.  If you were to provide a quality package and
support for it I'd be happy to sponsor your uploads into ubuntu and the
cloud archive.

I can give you more pointers but as a starter here's where we host our
package source:
https://code.launchpad.net/~ubuntu-server-dev/+git

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Do we need to rfe to implement active-active router?

2016-12-02 Thread huangdenghui
Hi Assaf
I already filed an RFE bug, please check bug #1645625. Any comments are welcome.


Sent from NetEase Mail for mobile



On 2016-11-17 01:32 , Assaf Muller Wrote:

On Wed, Nov 16, 2016 at 10:42 AM, huangdenghui  wrote:
> hi
> Currently, neutron supports DVR routers and legacy routers.  For high
> availability, there is an HA router in the reference implementation of legacy
> mode and DVR mode. I am considering whether an active-active router is needed
> in both modes?

Yes, an RFE would be required and likely a spec describing the high
level approach of the implementation.

>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release management team draft logo

2016-12-02 Thread Steve Martinelli
A bit of colour will go a long way here; black and brown would help
make it more obvious (IMO).

On Fri, Dec 2, 2016 at 4:05 AM, Thierry Carrez 
wrote:

> Doug Hellmann wrote:
> > Release team, please take a look at the attached logo and let me know
> > what you think.
>
> It's not immediately obvious to me it's a shepherd dog, but then I don't
> exactly know how to make that more obvious.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Thierry Carrez
Daniel P. Berrange wrote:
> Do we have any real data on just how many contributors really do
> lurk in the meeting rooms permanently, as opposed to merely joining
> rooms at start of the meeting & leaving immediately thereafter ?

There are currently 488 permanent residents on #openstack-meeting, 270
on #openstack-meeting-4 (while no meeting is going on). So I'd say that
most people stay around permanently.

> Likewise, any data on how many contributors actively participate
> in meetings across different projects, vs siloed just in their own
> project?

That is harder to get numbers on.

> If the latter is in the clear majority, then you might as well just
> have #openstack-meeting-$PROJECT and thus mostly avoid the problem
> of conflicting demands for a limited set of channels.

Since there are between 300 and 500 people who find it interesting to
lurk in meeting channels, I'm pretty sure that would be a bad choice...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-02 Thread Jason Rist
On 12/01/2016 05:26 PM, Emilien Macchi wrote:
> Team,
> 
> Alex Schultz (mwhahaha on IRC) has been active on TripleO for a few
> months now.  While he's very active in different areas of TripleO, his
> reviews and contributions on puppet-tripleo have been very useful.
> Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
> think he perfectly understands how puppet-tripleo works. His
> involvement in the project and contributions on puppet-tripleo deserve
> that we allow him to +2 puppet-tripleo.
> 
> Thanks Alex for your involvement and hard work in the project, this is
> very appreciated!
> 
> As usual, I'll let the team vote on this proposal.
> 
> Thanks,
> 
+1

-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Yolanda Robla Mota
Hi Dmitry

So we've been looking at that spec you suggested, but we are wondering whether it 
will be useful for our use case. As the text says:

The ``ironic-python-agent`` project and ``agent`` driver will be adjusted to
support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be able
to declare deploy steps to run prior to disk imaging, and operators will be
able to extend ``ironic-python-agent`` to add any custom step.

Our needs are different: we actually need to create a deployment step after 
imaging. We'd need a step that drops config on /etc/default/grub and updates 
it. This is a post-imaging deploy step that modifies the base image. Could 
ironic support this kind of step, if there is a base system to just define 
per-user steps?

The idea we had in mind is:
- from tripleo, add a property to each flavor that defines the boot 
parameters:  openstack flavor set compute --property os:kernel_boot_params='abc'
- define an "ironic post-imaging deploy step" that will grab this property from 
the flavor, drop it on /etc/default/grub and regenerate it (a rough sketch of 
that step follows below)
- then on local boot, the proper kernel parameters will be applied
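
A rough sketch of the grub side of that step, purely for illustration (how the 
flavor property actually reaches the node, and whether this runs in a chroot of 
the freshly written image, are exactly the open questions):

#!/usr/bin/env python
# Illustrative only: append the flavor-provided parameters to
# GRUB_CMDLINE_LINUX in the deployed image and regenerate the grub config.
import re
import subprocess

def add_kernel_params(params, grub_default='/etc/default/grub'):
    with open(grub_default) as f:
        content = f.read()
    content = re.sub(
        r'^GRUB_CMDLINE_LINUX="(.*)"$',
        lambda m: 'GRUB_CMDLINE_LINUX="%s %s"' % (m.group(1), params),
        content, flags=re.MULTILINE)
    with open(grub_default, 'w') as f:
        f.write(content)
    subprocess.check_call(['grub2-mkconfig', '-o', '/boot/grub2/grub.cfg'])

# e.g. the value of os:kernel_boot_params for a DPDK/SR-IOV compute flavor
add_kernel_params('intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=4')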

What is your feedback there?

- Original Message -
From: "Dmitry Tantsur" 
To: openstack-dev@lists.openstack.org
Sent: Friday, December 2, 2016 12:44:29 PM
Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
parameters on local boot

On 11/28/2016 04:46 PM, Jay Faulkner wrote:
>
>> On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota  wrote:
>>
>> Hi, good afternoon
>>
>> I wanted to start an email thread about how to properly set up kernel 
>> parameters on local boot, for our overcloud images on TripleO.
>> These parameters may vary depending on the needs of our end users, and even 
>> can be different ( for different roles ) per deployment. As an example, we 
>> need it for:
>> - enable FIPS kernel in terms of security 
>> (https://bugs.launchpad.net/tripleo/+bug/1640235)
>> - enable functionality for DPDK/SR-IOV 
>> (https://review.openstack.org/#/c/331564/)
>> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
>> - etc..
>>
>> So far, the solutions we got were on several directions:
>>
>> 1. Update the golden overcloud-full image with virt-customize, modifying 
>> /etc/default/grub settings according to our needs: this is a manual process, 
>> not really driven by TripleO. End users will want to avoid manual steps as 
>> much as possible. Also, if we announce that OpenStack ships features in 
>> TripleO like DPDK, SR-IOV..., it doesn't make sense to tell end users that if 
>> they want to consume that feature, they need to do manual updates on the 
>> image. It shall be natively supported, or configurable per TripleO 
>> environments.
>>
>> 2. Create our own images using diskimage-builder and custom elements: in 
>> this case, we have the problem that the partners will lose support, as 
>> building their own images is good for upstream, but not accepted into the 
>> OSP environment. Also, the combination of images needed can be huge, which can 
>> be a blocker for QA.
>>
>> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
>> properties can be set on metadata, like a json with kernel parameters. 
>> Ironic will modify these kernel parameters when deploying the image (in a 
>> similar way to how it installs the bootloader or generates partitions).
>>
>
> This has been proposed before in ironic-specs 
> (https://review.openstack.org/#/c/331564/) and was rejected, as it would 
> require Ironic to reach out and modify image contents, which traditionally 
> has been considered out of scope for Ironic. I would personally recommend #4, 
> as post-boot automation is the safest way to configure node-specific options 
> inside an image.

I'm still a bit divided about our decision back then.. On one hand, this does 
seem somewhat out of scope. On the other, I quite understand why reboot is 
suboptimal. I wonder if the ongoing deploy steps work will actually solve it by 
allowing hardware managers to provide additional deploy steps.

Yolanda, you may want to check the spec 
https://review.openstack.org/#/c/382091/ 
as it lays the foundation for the deploy steps idea.

>
> Thanks,
> Jay Faulkner
> OSIC
>
>
>> 4. Configure it post-deployment: there can be some puppet element that 
>> updates kernel parameters. But it will need a node reboot to be applied, and 
>> it's very far from being optimal and acceptable for the end users. Reboots 
>> are slow, they can be a problem depending on the number of nodes/hardware, 
>> and also the timing of reboot shall be totally controlled (after all puppet 
>> has been applied properly).
>>
>>
>> In the first three cases, we also hit the problem that TripleO only accepts 
>> one single overcloud image for all deployments - there is no way to instruct 
>> TripleO to upload and use several images, depending on the node type 
>> (although Ironic supports it). Also, we are worried about upgrade paths 

Re: [openstack-dev] [release] Release management team draft logo

2016-12-02 Thread Ian Cordasco
On Dec 2, 2016 3:07 AM, "Thierry Carrez"  wrote:
>
> Doug Hellmann wrote:
> > Release team, please take a look at the attached logo and let me know
> > what you think.
>
> It's not immediately obvious to me it's a shepherd dog, but then I don't
> exactly know how to make that more obvious

From what I have observed, this has been the feedback from several teams.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Dmitry Tantsur

On 11/28/2016 04:46 PM, Jay Faulkner wrote:



On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota  wrote:

Hi, good afternoon

I wanted to start an email thread about how to properly set up kernel parameters 
on local boot, for our overcloud images on TripleO.
These parameters may vary depending on the needs of our end users, and even can 
be different ( for different roles ) per deployment. As an example, we need it 
for:
- enable FIPS kernel in terms of security 
(https://bugs.launchpad.net/tripleo/+bug/1640235)
- enable functionality for DPDK/SR-IOV 
(https://review.openstack.org/#/c/331564/)
- enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
- etc..

So far, the solutions we got were on several directions:

1. Update the golden overcloud-full image with virt-customize, modifying 
/etc/default/grub settings according to our needs: this is a manual process, 
not really driven by TripleO. End users will want to avoid manual steps as much 
as possible. Also, if we announce that OpenStack ships features in TripleO like 
DPDK, SR-IOV..., it doesn't make sense to tell end users that if they want to 
consume that feature, they need to do manual updates on the image. It shall be 
natively supported, or configurable per TripleO environments.

2. Create our own images using diskimage-builder and custom elements: in this 
case, we have the problem that the partners will lose support, as building 
their own images is good for upstream, but not accepted into the OSP 
environment. Also, the combination of images needed can be huge, which can be a 
blocker for QA.

3. Add Ironic support for it. Images can be uploaded to glance, and some 
properties can be set on metadata, like a json with kernel parameters. Ironic 
will modify these kernel parameters when deploying the image (in a similar way 
to how it installs the bootloader or generates partitions).



This has been proposed before in ironic-specs 
(https://review.openstack.org/#/c/331564/) and was rejected, as it would 
require Ironic to reach out and modify image contents, which traditionally has 
been considered out of scope for Ironic. I would personally recommend #4, as 
post-boot automation is the safest way to configure node-specific options 
inside an image.


I'm still a bit divided about our decision back then.. On one hand, this does 
seem somewhat out of scope. On the other, I quite understand why reboot is 
suboptimal. I wonder if the ongoing deploy steps work will actually solve it by 
allowing hardware managers to provide additional deploy steps.


Yolanda, you may want to check the spec https://review.openstack.org/#/c/382091/ 
as it lays the foundation for the deploy steps idea.
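
Purely as a sketch of the direction (the get_deploy_steps() name and signature 
below are an assumption based on that spec, mirroring the existing 
get_clean_steps() contract; none of this is a merged API), a custom hardware 
manager could then expose something like:

# hypothetical ironic-python-agent hardware manager providing a deploy step
from ironic_python_agent import hardware

class KernelArgsHardwareManager(hardware.HardwareManager):
    HARDWARE_MANAGER_NAME = 'KernelArgsHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_deploy_steps(self, node, ports):
        # same shape as get_clean_steps(): step name, interface and priority
        return [{'step': 'update_kernel_cmdline',
                 'interface': 'deploy',
                 'priority': 50,
                 'reboot_requested': False}]

    def update_kernel_cmdline(self, node, ports):
        # this is where /etc/default/grub in the freshly written image
        # would be edited and the grub config regenerated
        pass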




Thanks,
Jay Faulkner
OSIC



4. Configure it post-deployment: there can be some puppet element that updates 
kernel parameters. But it will need a node reboot to be applied, and it's very 
far from being optimal and acceptable for the end users. Reboots are slow, they 
can be a problem depending on the number of nodes/hardware, and also the timing 
of reboot shall be totally controlled (after all puppet has been applied 
properly).


In the first three cases, we also hit the problem that TripleO only accepts one 
single overcloud image for all deployments - there is no way to instruct 
TripleO to upload and use several images, depending on the node type (although 
Ironic supports it). Also, we are worried about upgrade paths if we do image 
customizations. We need a clear way to move forward on it.

So, we'd like to discuss the possible options there and the action items to 
take (raise bugs, create some blueprints...). To summarize, our end goal is the 
following:

- need to map overcloud-full images to roles
- need to be done in an automated way, no manual steps enforced, and in a way 
that can properly pass quality controls
- reboots are sub-optimal

What are your thoughts there?

Best,


Yolanda Robla
yrobl...@redhat.com
Principal Software Engineer - NFV Partner Engineer


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Daniel P. Berrange
On Fri, Dec 02, 2016 at 11:35:05AM +0100, Thierry Carrez wrote:
> Hi everyone,
> 
> There has been a bit of tension lately around creating IRC meetings.
> I've been busy[1] cleaning up unused slots and defragmenting biweekly
> ones to open up possibilities, but truth is, even with those changes
> approved, there will still be a number of time slots that are full:
> 
> Tuesday 14utc -- only biweekly available
> Tuesday 16utc -- full
> Wednesday 15utc -- only biweekly available
> Wednesday 16utc -- full
> Thursday 14utc -- only biweekly available
> Thursday 17utc -- only biweekly available
> 
> [1] https://review.openstack.org/#/q/topic:dec2016-cleanup
> 
> Historically, we maintained a limited number of meeting rooms in order
> to encourage teams to spread around and limit conflicts. This worked for
> a time, but those days I feel like team members don't have that much
> flexibility in picking a time that works for everyone. If the miracle
> slot that works for everyone is not available on the calendar, they tend
> to move the meeting elsewhere (private IRC channel, Slack, Hangouts)
> rather than change time to use a less-busy slot.
> 
> So I'm now wondering how much that artificial scarcity policy is hurting
> us more than it helps us. I'm still convinced it's very valuable to have
> a number of "meetings rooms" that you can lurk in and be available for
> pings, without having to join hundreds of channels where meetings might
> happen. But I'm not sure anymore that maintaining an artificial scarcity
> is helpful in limiting conflicts, and I can definitely see that it
> pushes some meetings away from the meeting channels, defeating their
> main purpose.
> TL;DR:
> - is it time for us to add #openstack-meeting-5 ?
> - should we more proactively add meeting channels in the future ?

Do we have any real data on just how many contributors really do
lurk in the meeting rooms permanently, as opposed to merely joining
rooms at start of the meeting & leaving immediately thereafter ?

Likewise, any data on how many contributors actively participate
in meetings across different projects, vs siloed just in their own
project?

If the latter is in the clear majority, then you might as well just
have #openstack-meeting-$PROJECT and thus mostly avoid the problem
of conflicting demands for a limited set of channels.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Thierry Carrez
Hi everyone,

There has been a bit of tension lately around creating IRC meetings.
I've been busy[1] cleaning up unused slots and defragmenting biweekly
ones to open up possibilities, but truth is, even with those changes
approved, there will still be a number of time slots that are full:

Tuesday 14utc -- only biweekly available
Tuesday 16utc -- full
Wednesday 15utc -- only biweekly available
Wednesday 16utc -- full
Thursday 14utc -- only biweekly available
Thursday 17utc -- only biweekly available

[1] https://review.openstack.org/#/q/topic:dec2016-cleanup

Historically, we maintained a limited number of meeting rooms in order
to encourage teams to spread around and limit conflicts. This worked for
a time, but these days I feel like team members don't have that much
flexibility in picking a time that works for everyone. If the miracle
slot that works for everyone is not available on the calendar, they tend
to move the meeting elsewhere (private IRC channel, Slack, Hangouts)
rather than change time to use a less-busy slot.

So I'm now wondering how much that artificial scarcity policy is hurting
us more than it helps us. I'm still convinced it's very valuable to have
a number of "meetings rooms" that you can lurk in and be available for
pings, without having to join hundreds of channels where meetings might
happen. But I'm not sure anymore that maintaining an artificial scarcity
is helpful in limiting conflicts, and I can definitely see that it
pushes some meetings away from the meeting channels, defeating their
main purpose.

TL;DR:
- is it time for us to add #openstack-meeting-5 ?
- should we more proactively add meeting channels in the future ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [openstack-health] Avoid showing non-official projects failure ratios

2016-12-02 Thread Andreas Jaeger
On 12/02/2016 10:03 AM, Thierry Carrez wrote:
> Ken'ichi Ohmichi wrote:
>> Hi QA-team,
>>
>> In the big-tent policy, we continue creating new projects.
>> On the other hand, some projects became non-active.
>> That seems natural thing.
>>
>> Now openstack-health[1] shows a non-active project as 100% failure ratio
>> on "Project Status".
>> The project became non-official since
>> https://review.openstack.org/#/c/324412/
>> So I feel it would be nice to have a black-list or something to make it
>> disappear from the dashboard for concentrating on active projects'
>> failures.
>>
>> Any thoughts?
> 
> Yes, I totally agree we should only list active official projects in
> there, otherwise long-dead things like Cue will make the view look bad.
> Looks like the system adds new ones but does not remove anything ? It
> should probably take its list from [1].
> 
> [1]
> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

Is cue completely dead? Should we then retire it completely following
http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project ?

It still has jobs set up, and I see people submitting typo fixes etc.


Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stepping down from core

2016-12-02 Thread Miguel Angel Ajo Pelayo
It's been an absolute pleasure working with you on every single interaction.


Very good luck Henry,


On Fri, Dec 2, 2016 at 8:14 AM, Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:

> Henry, it was a pleasure working with you! Thanks!
> All the best for your further journey!
>
>
> --
> -
> Andreas
> IRC: andreas_s
>
>
>
> On Do, 2016-12-01 at 17:51 -0500, Henry Gessau wrote:
> > I've already communicated this in the neutron meeting and in some neutron
> > policy patches, but yesterday the PTL actually updated the gerrit ACLs
> so I
> > thought I'd drop a note here too.
> >
> > My work situation has changed and leaves me little time to keep up with
> my
> > duties as core reviewer, DB lieutenant, and drivers team member.
> >
> > Working with the diverse and very talented contributors to Neutron has
> been
> > the best experience of my career (which started before many of you were
> born).
> > Thank you all for making the team such a great community. Because of you
> the
> > project is thriving and will continue to be successful!
> >
> > I will still be around on IRC, contribute some small patches here and
> there,
> > and generally try to keep abreast of Neutron's progress. Don't hesitate
> to
> > ping me.
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 09:39:41AM +0100, Mehdi Abaakouk wrote:

On Fri, Dec 02, 2016 at 09:29:56AM +0100, Mehdi Abaakouk wrote:

And my bench seems to confirm the perf issue has been solved:


I have updated my requirement review to require >=1.3.1 [1] to solve
the monasca issue.

[1] https://review.openstack.org/404878


And this is the update for all projects:

https://review.openstack.org/#/q/status:open+branch:master+topic:sileht/kafka-update

Nothing should block all of this anymore, except +2/+A :)

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release management team draft logo

2016-12-02 Thread Thierry Carrez
Doug Hellmann wrote:
> Release team, please take a look at the attached logo and let me know
> what you think.

It's not immediately obvious to me it's a shepherd dog, but then I don't
exactly know how to make that more obvious.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [openstack-health] Avoid showing non-official projects failure ratios

2016-12-02 Thread Thierry Carrez
Ken'ichi Ohmichi wrote:
> Hi QA-team,
> 
> In the big-tent policy, we continue creating new projects.
> On the other hand, some projects became non-active.
> That seems natural thing.
> 
> Now openstack-health[1] shows a non-active project as 100% failure ratio
> on "Project Status".
> The project became non-official since
> https://review.openstack.org/#/c/324412/
> So I feel it would be nice to have a black-list or something to make it
> disappear from the dashboard for concentrating on active projects'
> failures.
> 
> Any thoughts?

Yes, I totally agree we should only list active official projects in
there, otherwise long-dead things like Cue will make the view look bad.
Looks like the system adds new ones but does not remove anything ? It
should probably take its list from [1].

[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
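
For instance, a rough sketch of the kind of filter I mean (untested, and the
exact location/structure of projects.yaml may need adjusting):

import requests
import yaml

GOVERNANCE_URL = ('https://git.openstack.org/cgit/openstack/governance/'
                  'plain/reference/projects.yaml')

def official_repos():
    # build the set of repositories that belong to official project teams
    teams = yaml.safe_load(requests.get(GOVERNANCE_URL).text)
    repos = set()
    for team in teams.values():
        for deliverable in team.get('deliverables', {}).values():
            repos.update(deliverable.get('repos', []))
    return repos

# openstack-health could then keep only rows whose project is in that set, e.g.
# rows = [row for row in rows if row['project'] in official_repos()]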

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-02 Thread Julie Pichon
On 1 December 2016 at 22:26, Emilien Macchi  wrote:
> Team,
>
> Alex Schultz (mwhahaha on IRC) has been active on TripleO for a few
> months now.  While he's very active in different areas of TripleO, his
> reviews and contributions on puppet-tripleo have been very useful.
> Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
> think he perfectly understands how puppet-tripleo works. His
> involvement in the project and contributions on puppet-tripleo deserve
> that we allow him to +2 puppet-tripleo.
>
> Thanks Alex for your involvement and hard work in the project, this is
> very appreciated!

+1!

>
> As usual, I'll let the team vote on this proposal.
>
> Thanks,
> --
> Emilien Macchi
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-02 Thread Marios Andreou
On 02/12/16 00:26, Emilien Macchi wrote:
> Team,
> 
> Alex Schultz (mwhahaha on IRC) has been active on TripleO since a few
> months now.  While he's very active in different areas of TripleO, his
> reviews and contributions on puppet-tripleo have been very useful.
> Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
> think he perfectly understands how puppet-tripleo works. His
> involvement in the project and contributions on puppet-tripleo deserve
> that we allow him to +2 puppet-tripleo.
> 
> Thanks Alex for your involvement and hard work in the project, this is
> very appreciated!
> 
> As usual, I'll let the team to vote about this proposal.
> 
> Thanks,
> 

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 09:29:56AM +0100, Mehdi Abaakouk wrote:

And my bench seems to confirm the perf issue has been solved:


I have updated my requirement review to require >=1.3.1 [1] to solve
the monasca issue.

[1] https://review.openstack.org/404878

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] team logo (initial draft)

2016-12-02 Thread Heidi Joy Tretheway
Hi Morgan, 
I don’t know if the illustrator meant to hide a keystone in the illustration, 
or if you just saw one, but either way that’s brilliant! And don’t worry - 
these will definitely get color, but right now we’re submitting gray versions 
for review because it speeds the process (gets it out of the illustrators’ 
hands faster). I promise you’ll get color in the final product. 

Thanks for replying!

> On Dec 1, 2016, at 4:06 PM, Morgan Fainberg  wrote:
> 
> Looks good! Commented on the Form, but the "grey section" might be even 
> better if there was a little color to it. As it is, it might be too "stark" a 
> contrast against a black laptop/background (white alone tends to be) if 
> the white sections are opaque, and it might fade into a "white" or silver 
> background (aka macbook (pro) style).
> 
> It might even be cooler if the grey sections were provided with a variety of 
> color differences.
> 
> Overall the turtle looks great stylistically, I like that the shell almost 
> has a "keystone" (as in what goes in an arch) shape to it.
> 
> Cheers,
> --Morgan
> 
> On Thu, Dec 1, 2016 at 2:09 PM, Steve Martinelli  > wrote:
> keystoners, we finally have a logo! well a draft version of it :)
> 
> Please provide feedback by Tuesday, Dec. 13 (good or bad) at: 
> www.tinyurl.com/OSmascot 
> Heidi (cc'ed) will be out of the office Dec. 2-12 but promises to respond to 
> questions as swiftly as possible when she returns.
> 
> All hail the turtle!
> 
> stevemar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 03:29:59PM +1100, Tony Breeds wrote:

On Thu, Dec 01, 2016 at 04:52:52PM +, Keen, Joe wrote:


Unfortunately there’s nothing wrong on the Monasca side so far as we know.
 We test new versions of the kafka-python library outside of Monasca
before we bother to try integrating a new version.  Since 1.0 the
kafka-python library has suffered from crashes and memory leaks severe
enough that we’ve never attempted using it in Monasca itself.  We reported
the bugs we found to the kafka-python project but they were closed once
they released a new version.


So Opening bugs isn't working.  What about writing code?


The bug https://github.com/dpkp/kafka-python/issues/55

Reopening it would be the right solution here.

I can't reproduce the segfault either, and I agree with dpkp that it looks like a
ujson issue.

And my bench seems to confirm the perf issue has been solved:
(but not in the pointed version...)

$ pifpaf run kafka python kafka_test.py
kafka-python version: 0.9.5
...
fetch size 179200 -> 45681.8728864 messages per second
fetch size 204800 -> 47724.3810674 messages per second
fetch size 230400 -> 47209.9841092 messages per second
fetch size 256000 -> 48340.7719787 messages per second
fetch size 281600 -> 49192.9896743 messages per second
fetch size 307200 -> 50915.3291133 messages per second

$ pifpaf run kafka python kafka_test.py
kafka-python version: 1.0.2

fetch size 179200 -> 8546.77931323 messages per second
fetch size 204800 -> 9213.30958314 messages per second
fetch size 230400 -> 10316.668006 messages per second
fetch size 256000 -> 11476.2285269 messages per second
fetch size 281600 -> 12353.7254386 messages per second
fetch size 307200 -> 13131.2367288 messages per second

(1.1.1 and 1.2.5 have also the same issue)

$ pifpaf run kafka python kafka_test.py
kafka-python version: 1.3.1
fetch size 179200 -> 44636.9371873 messages per second
fetch size 204800 -> 44324.7085365 messages per second
fetch size 230400 -> 45235.8283208 messages per second
fetch size 256000 -> 45793.1044121 messages per second
fetch size 281600 -> 44648.6357019 messages per second
fetch size 307200 -> 44877.8445987 messages per second
fetch size 332800 -> 47166.9176281 messages per second
fetch size 358400 -> 47391.0057622 messages per second

Looks like it works well now :)


Just in case I have updated a bit the bench script to ensure it always works 
for me:

--- kafka_test.py.ori   2016-12-02 09:16:10.570677010 +0100
+++ kafka_test.py   2016-12-02 09:06:04.870370438 +0100
@@ -14,7 +14,7 @@
import time
import ujson

-KAFKA_URL = '92.168.10.6:9092'
+KAFKA_URL = '127.0.0.1:9092'
KAFKA_GROUP = 'kafka_python_perf'
KAFKA_TOPIC = 'raw-events'

@@ -24,6 +24,7 @@

def write():
k_client = KafkaClient(KAFKA_URL)
+k_client.ensure_topic_exists(KAFKA_TOPIC)
p = KeyedProducer(k_client,
  async=False,
  req_acks=KeyedProducer.ACK_AFTER_LOCAL_WRITE,


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev