Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-30 Thread Tom Fifield

On 01/07/16 13:01, Adam Young wrote:

On 06/28/2016 11:13 PM, Tom Fifield wrote:

Quick answers in-line

On 29/06/16 05:44, Adam Young wrote:

It seems to me that keystone Core should be able to moderate Keystone
questions on the site.  That means that they should be able to remove
old dead ones, remove things tagged as Keystone that do not apply and so
on.  I would assume the same is true for Nova, Glance, Trove, Mistral
and all the rest.


If you send a list of Ask OpenStack usernames to
community...@openstack.org, happy to give them moderator rights.
Anyone with karma beyond 200 already has them.


The email bounced.


Typo!

communitym...@openstack.org








We need some better top-level interface than just the tags, though.
Ideally we would have a page where someone lands when troubleshooting
keystone, with a series of questions and links to the discussion pages
for that question.  Like:


I get an error that says "cannot authenticate"; what do I do?


Example - something like this link for "Common Upstream Development
Questions"

https://ask.openstack.org/en/questions/tags:common-upstream

?


What is the engine behind ask.openstack.org? Does it have other tools
we could use?


Askbot - https://github.com/ASKBOT/askbot-devel
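For anyone wanting to build the kind of per-topic landing page Adam describes, Askbot also exposes a read-only JSON API, so a FAQ index could be generated from tagged questions. A rough sketch (the /api/v1/questions endpoint, its tags parameter, and the payload shape here are assumptions about Askbot's API, not something confirmed in this thread):

```python
from urllib.parse import urlencode

# Assumed Askbot read-only API root for ask.openstack.org
ASK_BASE = "https://ask.openstack.org/en/api/v1"

def questions_url(tags, page=1):
    """Build a query URL for questions carrying the given tags."""
    query = urlencode({"tags": ",".join(tags), "page": page})
    return f"{ASK_BASE}/questions/?{query}"

def faq_index(payload):
    """Turn an Askbot question listing into (title, url) pairs for a FAQ page."""
    return [(q["title"], q["url"]) for q in payload.get("questions", [])]

# Example: the "common-upstream" tag page cited above, as an API query
print(questions_url(["common-upstream"]))
```

Fetching that URL and feeding the JSON through faq_index would yield the list of question links a project-specific troubleshooting page needs.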


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-30 Thread Adam Young

On 06/28/2016 08:33 PM, Steve Martinelli wrote:
I'm cool with the existing keystone repo and adding to docs. If we hit 
a huge amount of content then we can migrate to a new repo. I think 
Adam's main concern with this approach is that we reduce the 
contributors down to folks that know the gerrit workflow.


We don't want a static troubleshooting guide.  We want people to be able 
to ask questions and link them to answers, and have community members add 
their own answers...in short, what we have in "ask.openstack" now, but not 
well done or maintained.


It's often not a Keystone problem, but a Nova, Glance, etc. problem. We 
can't stick the answer in a Keystone repo.








On Tue, Jun 28, 2016 at 8:19 PM, Jamie Lennox wrote:




On 29 June 2016 at 09:49, Steve Martinelli wrote:

I think we want something a bit more organized.

Morgan tossed the idea of a keystone-docs repo, which could have:

- The FAQ Adam is asking about
- Install guides (moved over from openstack-manuals)
- A spot for all those neat and unofficial blog posts we do
- How-to guides
- etc...

I think it's a neat idea and warrants some discussion. Of
course, we don't want to be the odd project out.


What would be the advantage of a new repo rather than just using
the keystone/docs folder? My concern is that docs/ already gets
stagnant, but a new repo would end up being largely ignored, and at
least theoretically you can update docs/ when the relevant code
changes.


On Tue, Jun 28, 2016 at 6:00 PM, Ian Cordasco wrote:

-Original Message-
From: Adam Young
Reply: OpenStack Development Mailing List (not for usage questions)
Date: June 28, 2016 at 16:47:26
To: OpenStack Development Mailing List
Subject: [openstack-dev] Troubleshooting and ask.openstack.org

> Recently, the Keystone team started brainstorming a troubleshooting
> document. While we could eventually put this into the Keystone repo,
> it makes sense to also be gathering troubleshooting ideas from the
> community at large. How do we do this?
>
> I think we've had a long enough run with the ask.openstack.org website
> to determine if it is really useful, and if it needs an update.
>
>
> I know we're getting nuked on the Wiki. What I would like to be able to
> generate is a Frequently Asked Questions (FAQ) page, but as a living
> document.
>
> I think that ask.openstack.org is the right forum for this, but we need
> some more help:
>
> It seems to me that keystone Core should be able to
moderate Keystone
> questions on the site. That means that they should be
able to remove
> old dead ones, remove things tagged as Keystone that do
not apply and so
> on. I would assume the same is true for Nova, Glance,
Trove, Mistral
> and all the rest.
>
> We need some better top level interface than just the
tags, though.
> Ideally we would have a page where someone lands when
troubleshooting
> keystone with a series of questions and links to the
discussion pages
> for that question. Like:
>
>
> I get an error that says "cannot authenticate"; what do I do?
>
> What is the engine behind ask.openstack.org? Does it have other tools
> we could use?

The engine is linked in the footer: https://askbot.com/

I'm not sure how much of it is reusable but it claims to
be able to do
some of the things I think you're asking for except it doesn't
explicitly mention deleting comments/questions/etc.

--
Ian Cordasco



Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-30 Thread Adam Young

On 06/28/2016 11:13 PM, Tom Fifield wrote:

Quick answers in-line

On 29/06/16 05:44, Adam Young wrote:

It seems to me that keystone Core should be able to moderate Keystone
questions on the site.  That means that they should be able to remove
old dead ones, remove things tagged as Keystone that do not apply and so
on.  I would assume the same is true for Nova, Glance, Trove, Mistral
and all the rest.


If you send a list of Ask OpenStack usernames to 
community...@openstack.org, happy to give them moderator rights. 
Anyone with karma beyond 200 already has them.


The email bounced.






We need some better top-level interface than just the tags, though.
Ideally we would have a page where someone lands when troubleshooting
keystone, with a series of questions and links to the discussion pages
for that question.  Like:


I get an error that says "cannot authenticate"; what do I do?


Example - something like this link for "Common Upstream Development 
Questions"


https://ask.openstack.org/en/questions/tags:common-upstream

?


What is the engine behind ask.openstack.org? Does it have other tools
we could use?


Askbot - https://github.com/ASKBOT/askbot-devel








Re: [openstack-dev] [requirements][mistral] Library for JWT (JSON Web Token)

2016-06-30 Thread Renat Akhmerov

> On 30 Jun 2016, at 19:50, Mehdi Abaakouk wrote:
> 
> On 2016-06-30 13:07, Renat Akhmerov wrote:
>> Reason: we need it to provide support for OpenID Connect
>> authentication in Mistral.
> 
> Can't [1] do the job? (Sorry if I'm off-beat.)
> 
> [1] http://docs.openstack.org/developer/keystone/federation/openidc.html

No, please look at my message with subject "[keystone][openid][mistral] 
Enabling OpenID Connect authentication w/o federation".

Again, maybe I'm missing something fundamental. If so, I'll be glad to get 
advice.

Renat Akhmerov
@Nokia
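For background on what a JWT library actually has to produce: an HS256 token is just two base64url-encoded JSON segments plus an HMAC-SHA256 signature over them. A stdlib-only sketch for illustration (a real deployment should use a vetted library such as PyJWT; the claim names below are made up):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """base64url without padding, per the JWT wire format."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Produce header.payload.signature for alg=HS256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Check the HMAC and return the claims; raise on a bad signature."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    got = base64.urlsafe_b64decode(sig + "=" * (-len(sig) % 4))
    if not hmac.compare_digest(got, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))

token = sign_jwt({"sub": "mistral-user"}, b"s3cret")
print(token)
print(verify_jwt(token, b"s3cret"))
```

OpenID Connect ID tokens add issuer/audience/expiry validation and RS256 on top of this, which is exactly why a maintained library is worth the dependency.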



Re: [openstack-dev] [Zaqar] Nominate Thomas Herve for Zaqar core

2016-06-30 Thread 王玺源
+1, Thomas did a great job on Zaqar. As we can see, many important features
were done by him.

Welcome, Thomas.

2016-07-01 6:33 GMT+08:00 Emilien Macchi :

> I'm not core but Thomas helped Puppet OpenStack group many times to
> get Zaqar working in gate and we highly appreciate his help.
> Way to go!
>
> On Thu, Jun 30, 2016 at 3:18 PM, Fei Long Wang 
> wrote:
> > Hi team,
> >
> > I would like to propose adding Thomas Herve (therve) to the Zaqar core
> > team. TBH, I drafted this mail about 6 months ago; the reason you only
> > see it now is that I wasn't sure whether Thomas could dedicate his time
> > to Zaqar (he is a very busy man). But as you can see, I was wrong. He has
> > continually contributed a lot of high-quality patches to Zaqar [1] and a
> > lot of inspiring comments for this project and team. I'm sure he would
> > make an excellent addition to the team. If no one objects, I'll proceed
> > and add him in a week from now.
> >
> > [1]
> >
> http://stackalytics.com/?module=zaqar-group=commits=all_id=therve
> >
> > --
> > Cheers & Best regards,
> > Fei Long Wang (王飞龙)
> >
> --
> > Senior Cloud Software Engineer
> > Tel: +64-48032246
> > Email: flw...@catalyst.net.nz
> > Catalyst IT Limited
> > Level 6, Catalyst House, 150 Willis Street, Wellington
> >
> --
> >
> >
> >
>
>
>
> --
> Emilien Macchi
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Arkady_Kanevsky
+1

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: Monday, June 20, 2016 10:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Proposal: Architecture Working Group

+1, great idea.

If we can add a mission/objective based on the nice definitions you added, it 
will help a long way in cross-project architecture evolution.
Moreover, I'd like this to be an integration point for OpenStack projects (and 
not a silo) so that we can build the shared understanding we really need to 
build.

On 6/17/16 5:52 PM, Clint Byrum wrote:
> ar·chi·tec·ture
> ˈärkəˌtek(t)SHər/
> noun
> noun: architecture
>
> 1.
>
> the art or practice of designing and constructing buildings.
>
> synonyms: building design, building style, planning, building,
> construction;
>
> formal: architectonics
>
> "modern architecture"
>
> the style in which a building is designed or constructed, especially with 
> regard to a specific period, place, or culture.
>
> plural noun: architectures
>
> "Victorian architecture"
>
> 2.
>
> the complex or carefully designed structure of something.
>
> "the chemical architecture of the human brain"
>
> the conceptual structure and logical organization of a computer or 
> computer-based system.
>
> "a client/server architecture"
>
> synonyms: structure, construction, organization, layout, design,
> build, anatomy, makeup;
>
> informal: setup
>
> "the architecture of a computer system"
>
>
> Introduction
> =
>
> OpenStack is a big system. We have debated what it actually is [1],
> and there are even t-shirts to poke fun at the fact that we don't have
> good answers.
>
> But this isn't what any of us wants. We'd like to be able to point at
> something and proudly tell people "This is what we designed and
> implemented."
>
> And for each individual project, that is a possibility. Neutron can
> tell you they designed how their agents and drivers work. Nova can
> tell you that they designed the way conductors handle communication
> with API nodes and compute nodes. But when we start talking about how
> they interact with each other, it's clearly just a coincidental mash
> of de-facto standards and specs that don't help anyone make decisions
> when refactoring or adding on to the system.
>
> Oslo and cross-project initiatives have brought some peace and order
> to the implementation and engineering processes, but not to the design
> process. New ideas still start largely in the project where they are
> needed most, and often conflict with similar decisions and ideas in
> other projects [dlm, taskflow, tooz, service discovery, state
> machines, glance tasks, messaging patterns, database patterns, etc.
> etc.]. Oftentimes this creates a logjam where none of the projects
> adopt a solution that would align with others. Most of the time, when
> things finally come to a head these things get done in a piecemeal
> fashion, where it's half done here,
> 1/3 over there, 1/4 there, and 3/4 over there..., which to the outside
> looks like chaos, because that's precisely what it is.
>
> And this isn't always a technical design problem. OpenStack, for
> instance, isn't really a micro service architecture. Of course, it
> might look like that in diagrams [2], but we all know it really isn't.
> The compute node is home to agents for every single concern, and the
> API interactions between the services is too tightly woven to consider
> many of them functional without the same lockstep version of other
> services together. A game to play is ask yourself what would happen if
> a service was isolated on its own island, how functional would its API
> be, if at all. Is this something that we want? No. But there doesn't
> seem to be a place where we can go to actually design, discuss,
> debate, and ratify changes that would help us get to the point of
> gathering the necessary will and capability to enact these efforts.
>
> Maybe nova-compute should be isolated from nova, with an API that
> nova, cinder and neutron talk to. Maybe we should make the scheduler
> cross-project aware and capable of scheduling more than just nova
> instances. Maybe we should have experimental groups that can look at
> how some of this functionality could perhaps be delegated to
> non-openstack projects. We hear that Mesos, for example, might help with
> the scheduling aspects, but how do we discuss these outside hijacking
> threads on the mailing list? These are things that we all discuss in
> the hallways and bars and parties at the summit, but because they
> cross projects at the design level, and are inherently a lot of social
> and technical and exploratory work, many of us fear we never get to a
> place of turning our dreams into reality.
>
> So, with that, I'd like to propose the creation of an Architecture
> Working Group. This group's charge would not be design by committee,
> but a place for architects to share their designs and gain support
> across projects to 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-30 Thread Arkady_Kanevsky
There is a version of Tempest released as part of each OpenStack release.
I agree with Mark that we should stick to version parity.

-Original Message-
From: Mark Voelker [mailto:mvoel...@vmware.com]
Sent: Monday, June 20, 2016 8:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tempest][nova][defcore] Add option to disable 
some strict response checking for interop testing


> On Jun 20, 2016, at 8:46 AM, Doug Hellmann wrote:
>
> Excerpts from Mark Voelker's message of 2016-06-16 20:33:36 +:
>
>> On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
>>
>>
>>> I don't think DefCore actually needs to change old versions of
>>> Tempest, but maybe Chris or Mark can verify that?
>>
>> So if I'm groking this correctly, there's kind of two scenarios being
>> painted here. One is the "LCD" approach where we use the
>> $osversion-eol version of Tempest, where $osversion matches the
>> oldest version covered in a Guideline. The other is to use the
>> start-of-$osversion version of Tempest where $osversion is the
>> OpenStack version after the most recent one in the Guideline. The
>> former may result in some fairly long-lived flags, and the latter is
>> actually not terribly different than what we do today I think.
>> Let me try to talk through both...
>>
>> In some cases, tests get flagged in the Guidelines because of bugs in
>> the test or because the test needs refactoring. The underlying
>> capabilities that those tests are testing actually work fine. Once we
>> identify such an issue, the test can be fixed...in master. Under the
>> first scenario, this potentially creates some very long-lived flags:
>>
>> 2016.01, the most current Guideline right now, covers Juno, Kilo,
>> Liberty (and Mitaka after it was released). It's one of the two
>> Guidelines that you can use if you want an OpenStack Powered license
>> from the Foundation. $vendor wants to run it against their shiny new
>> Mitaka cloud. They run the Juno EOL version of Tempest (tag=8), they
>> find a test issue, and we flag it. A few weeks later, a fix lands in
>> Tempest. Several months later the next Guideline rolls
>> around: the oldest covered release is Kilo and we start telling
>> people to use the Kilo-EOL version of Tempest. That doesn't have the
>> fix, so the flag stays. Another six months goes by and we get a
>> Guideline and we're up to the Liberty-EOL version of Tempest. No
>> fix, flag stays. Six more months, and now we're at Mitaka-EOL, and
>> that's the first version that includes the fix.
>>
>> Generally speaking long lived flags aren't so great because it means
>> the tests are not required...which means there's less or no assurance
>> that the capabilities they test for actually work in the clouds that
>> adhere to those Guidelines. So, the worst-case scenario here looks
>> kind of ugly.
>>
>> As Matt correctly pointed out though, the capabilities DefCore
>> selects for are generally pretty stable API's that are long-lived
>> across many releases, so we haven't run into a lot of issues running
>> pretty new versions of Tempest against older clouds to date. In fact
>> I'm struggling to think of a time we've flagged something because
>> someone complained the test wasn't runnable against an older release
>> covered by the Guideline in question. I can think of plenty of times
>> where we've flagged something due to a test issue though...keep in mind
>> we're still in pretty formative times with DefCore here where these
>> tests are starting to be used in a new way for the first time.
>> Anyway, as Matt points out we could potentially use a much newer
>> Tempest tag: tag=11 (which is the start of Newton development and is
>> a roughly 2 month old version of Tempest). Next Guideline rolls
>> around, we use the tag for start-of-ocata, and we get the fix and can
>> drop the flag.
>>
>> Today, RefStack client by default checks out a specific SHA of
>> Tempest [1] (it actually did use a tag at some point in the past, and
>> still can). When we see a fix for a flagged test go in, we or the
>> Refstack folks can do a quick test to make sure everything's in order
>> and then update that SHA to match the version with the fix. That way
>> we're relatively sure we have a version that works today, and will
>> work when we drop the flag in the next Guideline too. When we
>> finalize that next Guideline, we also update the test-repositories
>> section of the new Guideline that Matt pointed to earlier to reflect
>> the best-known version on the day the Guideline was sent to the Board
>> for approval. One added benefit of this approach is that people
>> running the tests today may get a version of Tempest that includes a
>> fix for a flagged test. A flagged test isn't required, but it does
>> get run, and now will show a passing result, so we have data that says
>> "this provider actually does support this capability (even though
>> it's flagged), and the test does indeed seem to be 

Re: [openstack-dev] [kolla][vote] Apply for release:managed tag

2016-06-30 Thread Tony Breeds
On Fri, Jul 01, 2016 at 12:56:09AM +, Steven Dake (stdake) wrote:
> Hey folks,
> 
> I'd like the release management team to take on releases of Kolla.  This
> means applying for the release:managed[1] tag.  Please vote +1 if you wish to
> proceed, or -1 if you don't wish to proceed.  The most complex part of this
> change will be that when feature freeze happens, we must actually freeze all
> feature development.  We as a team haven't been super good at this in the
> past, but I am confident we could hold to that set of rules if the core team
> is in agreement on this vote.

I'm far from the center of this, but release:managed is on the way out
(https://review.openstack.org/#/c/335440/), so I think you're good as you are.
I'm sure Doug will provide help in understanding what the release process looks
like without release:managed, especially WRT feature-freeze / RC periods.

Yours Tony.




Re: [openstack-dev] [kolla][vote] Apply for release:managed tag

2016-06-30 Thread Swapnil Kulkarni (coolsvap)
On Fri, Jul 1, 2016 at 6:26 AM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> I'd like the release management team to take on releases of Kolla.  This
> means applying for the release:managed[1] tag.  Please vote +1 if you wish
> to proceed, or -1 if you don't wish to proceed.  The most complex part of
> this change will be that when feature freeze happens, we must actually
> freeze all feature development.  We as a team haven't been super good at
> this in the past, but I am confident we could hold to that set of rules if
> the core team is in agreement on this vote.
>
> I will leave voting open for 1 week, until July 8th.  If a majority is reached
> prior to that date, I will close voting early and submit governance changes.
>
> Note the release team would have  to accept our application, so even though
> we may decide to vote to be release:managed, it is ultimately up to the
> discretion of the release management team whether we meet the criteria and
> if they have the bandwidth to work with our release liaison.
>
> [1]
> https://github.com/openstack/governance/blob/master/reference/tags/release_managed.rst
>
>
>

+1 to apply for release:managed tag.

Swapnil



[openstack-dev] [neutron] ARP responder for VLAN network?

2016-06-30 Thread zhi
hi, all.

    As we all know, for VXLAN networks, the OVS bridge "br-tun" is treated as
an ARP responder. The L2 agent adds ARP response flows into that bridge, so
that we can reduce ARP traffic on the physical network.

    But what about VLAN networks? I created two VMs in one VLAN network
and the same subnet. I can catch ARP request & reply packets on the physical
interfaces when pinging one VM from another. And I set arp_responder=True on
both compute nodes in the Neutron configuration file.

    Can we treat the physical OVS bridge as an ARP responder, just like the
behavior of br-tun?

Hope for your reply ;-)


Thanks
Zhi Chang
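For reference, the ARP responder flows the L2 agent installs on br-tun have roughly the following shape: match an ARP request for a known IP, rewrite it into a reply, and send it back out the ingress port. The sketch below composes such a flow spec; the table number, bridge, addresses, and MAC are illustrative assumptions, not copied from the agent, and an analogous rule on the physical bridge is exactly what the question above asks about.

```python
def arp_responder_flow(vlan, ip, mac, table=21):
    """Compose an OVS flow that answers ARP requests for `ip` with `mac`.

    Mirrors the shape of the l2pop ARP responder on br-tun: turn the
    request into a reply and bounce it back out the port it came in on.
    Field names are OVS Nicira extensions; the table number is assumed.
    """
    mac_hex = "0x" + mac.replace(":", "")
    ip_hex = "0x" + "".join(f"{int(octet):02x}" for octet in ip.split("."))
    actions = ",".join([
        "move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[]",  # reply back to the asker
        f"mod_dl_src:{mac}",                        # reply "from" the target VM
        "load:0x2->NXM_OF_ARP_OP[]",                # opcode 2 = ARP reply
        "move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[]",  # requester becomes target
        "move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[]",
        f"load:{mac_hex}->NXM_NX_ARP_SHA[]",        # answer with the VM's MAC
        f"load:{ip_hex}->NXM_OF_ARP_SPA[]",         # ...and the VM's IP
        "in_port",                                  # send out the ingress port
    ])
    return (f"table={table},priority=1,arp,dl_vlan={vlan},arp_tpa={ip} "
            f"actions={actions}")

print(arp_responder_flow(vlan=100, ip="10.0.0.5", mac="fa:16:3e:01:02:03"))
```

Installed with something like `ovs-ofctl add-flow <bridge> "<flow>"`. Note that on a VLAN network the broadcast also reaches real hosts on the physical fabric, so whether answering locally is safe there is part of the design question, not just a matter of adding the flow.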


Re: [openstack-dev] [kolla][vote] Apply for release:managed tag

2016-06-30 Thread Michał Jastrzębski
+1. While I know that we weren't actually good at freezing features, we
were a young project. Right now we're more mature and I think we can
deal with feature freeze.

On 30 June 2016 at 19:56, Steven Dake (stdake)  wrote:
> Hey folks,
>
> I'd like the release management team to take on releases of Kolla.  This
> means applying for the release:managed[1] tag.  Please vote +1 if you wish
> to proceed, or -1 if you don't wish to proceed.  The most complex part of
> this change will be that when feature freeze happens, we must actually
> freeze all feature development.  We as a team haven't been super good at
> this in the past, but I am confident we could hold to that set of rules if
> the core team is in agreement on this vote.
>
> I will leave voting open for 1 week, until July 8th.  If a majority is reached
> prior to that date, I will close voting early and submit governance changes.
>
> Note the release team would have  to accept our application, so even though
> we may decide to vote to be release:managed, it is ultimately up to the
> discretion of the release management team whether we meet the criteria and
> if they have the bandwidth to work with our release liaison.
>
> [1]
> https://github.com/openstack/governance/blob/master/reference/tags/release_managed.rst
>
>
>





Re: [openstack-dev] Hangzhou Bug Smash will be held from July 6 to 8

2016-06-30 Thread Fred Li
Hi Will,

Happy to know that you are going to join remotely!

you can add bugs you want to fix into the link [3] below, and discuss on the 
irc channel. 

Regards
Fred
> On Jul 1, 2016, at 02:07, Will Zhou wrote:
> 
> Hello Fred,
> 
> Great event! In which way can we work remotely with the team? Thanks.
> 
> On Thu, Jun 30, 2016 at 11:05 PM Liyongle (Fred) wrote:
> Hi OpenStackers,
> 
> The 4th China OpenStack Bug Smash, hosted by CESI, Huawei, and intel, will be 
> held in Hangzhou, China from July 6 to 8 (Beijing time), from 01:00 July 6 to 
> 6:00 July 8 UTC. And the target to get bugs fixed before the milestone 
> newton-2  [1].
> 
> Around 50 stackers will fix the bugs in nova, cinder, neutron, magnum, 
> ceilometer, heat, ironic, smaug, freezer, oslo, murano and kolla. You are 
> appreciated to provide any support, or work remotely with the team.
> 
> Please find this bug smash home page at [2], and the bugs list in [3] (under 
> preparation).
> 
> [1] http://releases.openstack.org/newton/schedule.html
> [2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Newton-Hangzhou
> [3] https://etherpad.openstack.org/p/hackathon4_all_list
> 
> Best Regards
> 
> Fred (李永乐)
> 
> China OpenStack Bug Smash Team
> --
> 周正喜
> Mobile: 13701280947
> WeChat: 472174291



Re: [openstack-dev] Hangzhou Bug Smash will be held from July 6 to 8

2016-06-30 Thread xiangxinyong
Hello Fred,


Thanks.
It is great to communicate and cooperate with fellow OpenStackers.
I will be there.
See you guys.



Best Regards,
  xiangxinyong


> Hi OpenStackers,
> 
> The 4th China OpenStack Bug Smash, hosted by CESI, Huawei, and intel, will be 
> held in Hangzhou, China from July 6 to 8 (Beijing time), from 01:00 July 6 to 
> 6:00 July 8 UTC. And the target to get bugs fixed before the milestone 
> newton-2  [1].
> 
> Around 50 stackers will fix the bugs in nova, cinder, neutron, magnum, 
> ceilometer, heat, ironic, smaug, freezer, oslo, murano and kolla. You are 
> appreciated to provide any support, or work remotely with the team. 
> 
> Please find this bug smash home page at [2], and the bugs list in [3] (under 
> preparation). 
> 
> [1] http://releases.openstack.org/newton/schedule.html
> [2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Newton-Hangzhou
> [3] https://etherpad.openstack.org/p/hackathon4_all_list
> 
> Best Regards
> 
> Fred (李永乐)
> 
> China OpenStack Bug Smash Team


[openstack-dev] [kolla][vote] Apply for release:managed tag

2016-06-30 Thread Steven Dake (stdake)
Hey folks,

I'd like the release management team to take on releases of Kolla.  This means 
applying for the release:managed[1] tag.  Please vote +1 if you wish to 
proceed, or -1 if you don't wish to proceed.  The most complex part of this 
change will be that when feature freeze happens, we must actually freeze all 
feature development.  We as a team haven't been super good at this in the past, 
but I am confident we could hold to that set of rules if the core team is in 
agreement on this vote.

I will leave voting open for 1 week, until July 8th.  If a majority is reached 
prior to that date, I will close voting early and submit governance changes.

Note the release team would have to accept our application, so even though we 
may decide to vote to be release:managed, it is ultimately up to the discretion 
of the release management team whether we meet the criteria and whether they 
have the bandwidth to work with our release liaison.

[1] 
https://github.com/openstack/governance/blob/master/reference/tags/release_managed.rst


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-06-30 Thread Steven Dake (stdake)


On 6/30/16, 10:22 AM, "Jeremy Stanley"  wrote:

>On 2016-06-30 14:07:53 + (+), Steven Dake (stdake) wrote:
>[...]
>> If it does have some special meaning or requirements beyond the
>> "we will freeze on the freeze deadline" could someone enumerate
>> those?
>[...]
>
>As far as I know it still means that release activities for the
>deliverable are handled by the Release Management team. A quick
>parsing of the projects.yaml indicates that only ~21% (125 out of
>582) of the deliverables for official projects have that tag
>applied.
>-- 
>Jeremy Stanley

Jeremy,

Thanks for the quick response.  So just to clarify: for release:managed, the
release team does not rely on the release liaison to produce the SHA hash to
release with?  I'm satisfied with release tagging for Kolla happening any time
during the milestone week, from the tip of master.  The branching at milestone
3 we already do.  I'd have to take the freeze up with a vote of the core
reviewer team.  Are there other requirements?

Does the release team have the bandwidth to handle tagging another
repository during release milestones?  If so, I'll get the ball rolling on
the voting and the governance changes.

Thanks for any input or clarity folks may provide.

Regards
-steve

>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-06-30 Thread Matt Riedemann

On 6/29/2016 10:49 AM, Rosa, Andrea (HP Cloud Services) wrote:

One more vote from "not a core member".
I am not a core and I am mainly involved in the Nova project, where Scott's
presence is always useful and valuable when we need to sort out some
cinder <-> nova issue.

--
Andrea Rosa



+1 to that, I always appreciate Scott's willingness to help us out in 
the nova channel with cinder-related questions.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Nominate Thomas Herve for Zaqar core

2016-06-30 Thread Emilien Macchi
I'm not core but Thomas helped Puppet OpenStack group many times to
get Zaqar working in gate and we highly appreciate his help.
Way to go!

On Thu, Jun 30, 2016 at 3:18 PM, Fei Long Wang  wrote:
> Hi team,
>
> I would like to propose adding Thomas Herve (therve) to the Zaqar core team.
> TBH, I drafted this mail about 6 months ago; the reason you are only seeing it
> now is that I was not sure Thomas could dedicate his time to Zaqar (he is a
> very busy man). But as you can see, I was wrong. He continually contributes a
> lot of high-quality patches to Zaqar [1] and a lot of inspiring comments for
> this project and team. I'm sure he would make an excellent addition to the
> team. If no one objects, I'll proceed and add him in a week from now.
>
> [1]
> http://stackalytics.com/?module=zaqar-group&metric=commits&release=all&user_id=therve
>
> --
> Cheers & Best regards,
> Fei Long Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Clint Byrum
Excerpts from Mike Perez's message of 2016-06-30 14:10:30 -0700:
> On 09:02 Jun 30, Clint Byrum wrote:
> > Excerpts from Mike Perez's message of 2016-06-30 07:50:42 -0700:
> > > On 11:31 Jun 20, Clint Byrum wrote:
> > > > Excerpts from Joshua Harlow's message of 2016-06-17 15:33:25 -0700:
> > > > > Thanks for getting this started Clint,
> > > > > 
> > > > > I'm happy and excited to be involved in helping try to guide the 
> > > > > whole 
> > > > > ecosystem together (it's also why I like being in oslo) to an 
> > > > > architecture that is more cohesive (and is more of something that we 
> > > > > can 
> > > > > say to our current or future children that we were all involved and 
> > > > > proud to be involved in creating/maturing...).
> > > > > 
> > > > > At a start, for said first meeting, any kind of agenda come to mind, 
> > > > > or 
> > > > > will it be more a informal gathering to start (either is fine with 
> > > > > me)?
> > > > > 
> > > > 
> > > > I've been hesitant to fill this in too much as I'm still forming the
> > > > idea, but here are the items I think are most compelling to begin with:
> > > > 
> > > > * DLM's across OpenStack -- This is already under way[1], but it seems 
> > > > to
> > > >   have fizzled out. IMO that is because there's no working group who
> > > >   owns it. We need to actually write some plans.
> > > 
> > > Not meaning to nitpick, but I don't think this is a compelling reason for 
> > > the
> > > architecture working group. We need a group that wants to spend time on
> > > reviewing the drivers being proposed. This is like saying we need the
> > > architecture working group because no working group is actively reshaping 
> > > quotas
> > > cross-project. 
> > > 
> > 
> > That sounds like a reasoned deep argument, not a nitpick, so thank you
> > for making it.
> > 
> > However, I don't think lack of drivers is standing in the way of a DLM
> > effort. It is a lack of coordination. There was a race to the finish line
> to make Consul and etcd drivers, but then, like the fish in Finding Nemo,
> > the drivers are in bags floating in the bay.. now what?
> 
> Some drivers are still in review, or likely abandoned efforts so it's not
> really a bay of options as you're describing it.
> 

Heh, that kind of sounds like the same thing.. not a bay of options,
just options stuck between the fish tank and the bay.

> Cinder has continued forward with being the guinea pig as planned with Tooz.
> [1] I don't think this is a great example for your argument because
> 
> 1) Not all projects need this.
> 
> 2) This was discussed in Tokyo and just done in Mitaka for Cinder. Why not 
> give
>projects time to evaluate when they're ready?
> 
> > Nobody owns this effort. Everybody gets busy. Nothing gets done. We
> > continue to bring it up in the hallway and wish we had time.
>
> I don't ever foresee a release where we say "All projects support DLM". In 
> fact
> I see things going as planned because:
> 
> 1) We have a project that carried it forward as planned.
> 2) We're purposely not repeating the MQ mess. Only DLM drivers with support
>    from members of the community are surfacing.
> 
> I would ask you instead, how exactly are you measuring success here?
>

That's a great question. I think the community did what I'd like to
see the working group do as its first order of business: mapped the
territory, and provided a plan to improve it. So to your point, there's no
need for an architecture working group if this always happens as planned
in all instances. I'd personally like to see it happen this way all the
time, which is the primary reason I'm motivated to coordinate this
group.

As a second order of business, I think this group would have a hard time
keeping momentum if all it did were write architectural plans. Each of
the designs it helps create need to be backed up with actual work. Who
cares if you drew a picture of a bridge: show me the bridge. :)

> > This is just a place to have a meeting and some people who get together
> > and say "hey is that done yet? Do you need help? is that still a
> > priority?". Could we do this as part of Oslo? Yes! But, I want this to
> > be about going one step higher, and actually taking the implementations
> > into the respective projects.
> 
> How about calling a cross-project meeting? [2] I have already spent the time
> organizing people who are interested from each appropriate project team that
> are eager to help [3]. Again you can call your posse whatever, but please work
> with the people already around to assist.
> 

That's exactly what I want to help do. So perhaps we do need to more
formally attach that second order of business to the existing cross-project
processes. I'll noodle on that and see if I can more clearly
draw that line. Thanks for bringing up the overlap.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [mistral][osc-lib][openstackclient] is it too early for orc-lib?

2016-06-30 Thread Steve Martinelli
The crux of this, as Dean stated, is if the library wants OSC to always be
pulled in (along with its many dependencies). We've seen folks include it
in requirements, test-requirements, or even not at all (just document that
OSC needs to be installed).

I tossed up the idea with the ironic team of leveraging the "extras" field to
list OSC as optional; the change would look like:

--- a/setup.cfg
+++ b/setup.cfg
@@ -22,6 +22,10 @@ classifier =

+[extras]
+cli =
+  python-openstackclient>=3.0.0  # Apache-2.0
+

So, if a user wanted to install just the Python bindings of ironicclient or
mistralclient, they would do $ pip install python-ironicclient; if a user
wanted the CLI as well: $ pip install python-ironicclient[cli]

Just an idea, it may be overkill and completely horrible.
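From the consuming library's side, the [extras] pattern above usually pairs
with a defensive import, so "pip install python-fooclient" (pure bindings)
still works while "pip install python-fooclient[cli]" enables the command
layer. A minimal sketch (python-fooclient and the error message are invented;
osc_lib.command is assumed to be the CLI entry point):

```python
# Import the CLI pieces defensively: they are only present when the
# package was installed with the [cli] extra pulling in osc-lib/OSC.
try:
    from osc_lib.command import command  # only present with the [cli] extra
    HAS_CLI = True
except ImportError:
    command = None
    HAS_CLI = False


def get_command_base():
    """Return the CLI base class, or fail with an actionable message."""
    if not HAS_CLI:
        raise RuntimeError(
            "CLI support not installed; try: pip install python-fooclient[cli]")
    return command.Command
```

The bindings-only install path never touches the CLI imports, so the heavy
dependency chain stays optional.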

On Thu, Jun 30, 2016 at 5:29 PM, Dean Troyer  wrote:

> On Thu, Jun 30, 2016 at 8:38 AM, Hardik 
> wrote:
>
>> Regarding osc-lib we have mainly two changes.
>>
>> 1) Used "utils" which is moved from openstackclient.common.utils to
>> osc_lib.utils
>> 2) We used "command"  which wrapped in osc_lib from cliff.
>>
>> So I think there is no harm in keeping osc_lib.
>>
>
> Admittedly the change to include osc-lib is a little early; I would have
> preferred to wait until the other parts of it were a bit more stable.
>
>
>> Also, I guess we do not need openstackclient to be installed  with
>> mistralclient as if mistral is used in standalone mode
>> there is no need of openstackclient.
>>
>
> The choice to include OSC as a dependency of a plugin/library rests
> entirely on the plugin team, and that will usually be determined by the
> answer to the question "Do you want all users of your library to have OSC
> installed even if they do not use it?"  or alternatively "Do you want to
> make your users remember to install OSC after installing the plugin?"
>
> Note that we do intend to have the capability on osc-lib to build an
> OSC-like stand-alone binary for plugins that would theoretically make
> installing OSC optional for stand-alone client users.  This is not complete
> yet, and as I said above, one reason I wish osc-lib had not been merged
> into plugin requirements yet.  That said, as long as you don't use those
> bits yet you will be fine, the utils, command, etc bits are stable, it is
> the clientmanager and shell parts that are still being developed.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][osc-lib][openstackclient] is it too early for orc-lib?

2016-06-30 Thread Dean Troyer
On Thu, Jun 30, 2016 at 8:38 AM, Hardik 
wrote:

> Regarding osc-lib we have mainly two changes.
>
> 1) Used "utils" which is moved from openstackclient.common.utils to
> osc_lib.utils
> 2) We used "command"  which wrapped in osc_lib from cliff.
>
> So I think there is no harm in keeping osc_lib.
>

Admittedly the change to include osc-lib is a little early; I would have
preferred to wait until the other parts of it were a bit more stable.


> Also, I guess we do not need openstackclient to be installed  with
> mistralclient as if mistral is used in standalone mode
> there is no need of openstackclient.
>

The choice to include OSC as a dependency of a plugin/library rests
entirely on the plugin team, and that will usually be determined by the
answer to the question "Do you want all users of your library to have OSC
installed even if they do not use it?"  or alternatively "Do you want to
make your users remember to install OSC after installing the plugin?"

Note that we do intend to have the capability on osc-lib to build an
OSC-like stand-alone binary for plugins that would theoretically make
installing OSC optional for stand-alone client users.  This is not complete
yet, and as I said above, one reason I wish osc-lib had not been merged
into plugin requirements yet.  That said, as long as you don't use those
bits yet you will be fine, the utils, command, etc bits are stable, it is
the clientmanager and shell parts that are still being developed.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Mike Perez
On 09:02 Jun 30, Clint Byrum wrote:
> Excerpts from Mike Perez's message of 2016-06-30 07:50:42 -0700:
> > On 11:31 Jun 20, Clint Byrum wrote:
> > > Excerpts from Joshua Harlow's message of 2016-06-17 15:33:25 -0700:
> > > > Thanks for getting this started Clint,
> > > > 
> > > > I'm happy and excited to be involved in helping try to guide the whole 
> > > > ecosystem together (it's also why I like being in oslo) to an 
> > > > architecture that is more cohesive (and is more of something that we 
> > > > can 
> > > > say to our current or future children that we were all involved and 
> > > > proud to be involved in creating/maturing...).
> > > > 
> > > > At a start, for said first meeting, any kind of agenda come to mind, or 
> > > > will it be more a informal gathering to start (either is fine with me)?
> > > > 
> > > 
> > > I've been hesitant to fill this in too much as I'm still forming the
> > > idea, but here are the items I think are most compelling to begin with:
> > > 
> > > * DLM's across OpenStack -- This is already under way[1], but it seems to
> > >   have fizzled out. IMO that is because there's no working group who
> > >   owns it. We need to actually write some plans.
> > 
> > Not meaning to nitpick, but I don't think this is a compelling reason for 
> > the
> > architecture working group. We need a group that wants to spend time on
> > reviewing the drivers being proposed. This is like saying we need the
> > architecture working group because no working group is actively reshaping 
> > quotas
> > cross-project. 
> > 
> 
> That sounds like a reasoned deep argument, not a nitpick, so thank you
> for making it.
> 
> However, I don't think lack of drivers is standing in the way of a DLM
> effort. It is a lack of coordination. There was a race to the finish line
> to make Consul and etcd drivers, but then, like the fish in Finding Nemo,
> the drivers are in bags floating in the bay.. now what?

Some drivers are still in review, or likely abandoned efforts so it's not
really a bay of options as you're describing it.

Cinder has continued forward with being the guinea pig as planned with Tooz.
[1] I don't think this is a great example for your argument because

1) Not all projects need this.

2) This was discussed in Tokyo and just done in Mitaka for Cinder. Why not give
   projects time to evaluate when they're ready?

> Nobody owns this effort. Everybody gets busy. Nothing gets done. We
> continue to bring it up in the hallway and wish we had time.
   
I don't ever foresee a release where we say "All projects support DLM". In fact
I see things going as planned because:

1) We have a project that carried it forward as planned.
2) We're purposely not repeating the MQ mess. Only DLM drivers with support
   from members of the community are surfacing.

I would ask you instead, how exactly are you measuring success here?

> This is just a place to have a meeting and some people who get together
> and say "hey is that done yet? Do you need help? is that still a
> priority?". Could we do this as part of Oslo? Yes! But, I want this to
> be about going one step higher, and actually taking the implementations
> into the respective projects.

How about calling a cross-project meeting? [2] I have already spent the time
organizing people who are interested from each appropriate project team that
are eager to help [3]. Again you can call your posse whatever, but please work
with the people already around to assist.

[1] - https://review.openstack.org/#/c/185646/
[2] - http://docs.openstack.org/project-team-guide/cross-project.html#general
[3] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift][keystone] Using JSON as future ACL format

2016-06-30 Thread Steve Martinelli
I'm a bit late to the game here, and others have summed this up
excellently, but I'll add my 2c. I think option 2 is the way to go; user
and project IDs will absolutely work best as they are immutable.

On Fri, Jun 10, 2016 at 4:52 AM, Coles, Alistair 
wrote:

> Thai,
>
>
>
> (repeating some of what we have discussed in private for others’ benefit)
>
>
>
> The Swift docs state “Be sure to use Keystone UUIDs rather than names in
> container ACLs” [1]. The guidance is re-iterated here [2] “…names must no
> longer be used in cross-tenant ACLs…”. They then go on to explain that for
> backwards compatibility (i.e. to not break ACLS that have already been
> persisted in Swift) names in ACLs are supported in the default domain only.
>
>
>
> The intent was not to encourage the continued use of names in the default
> domain (or while using keystone v2), nor to suggest that was “safe”, but to
> recognize that names had previously been used in contexts where names were
> unique and the ‘:’ separator was safe. In fact, Swift provides an option to
> disallow name matching in **all** contexts when no such backwards
> compatibility is required.
>
>
>
> At the time we made those changes (c. Atlanta summit) the input I had from
> Keystone devs was that names were not globally unique (with keystone v3),
> were mutable and should not be persisted. Hence the swift docs guidance. We
> actually considered preventing any new ACL being set with names, but to do
> so requires distinguishing a name string from a UUID string, which we
> didn’t find a solution for.
>
>
>
> So in response to your argument “If we are allowing V2 to store names [{
> project, name }], I do not see why we should not allow the same for V3 [{
> domain, project, name }]” : yes, we are allowing names in ACLs in some
> circumstances, but only for backwards compatibility, and we are not
> encouraging it. Having gone through the pain of dealing with names in
> existing persisted ACLs, I am reluctant to encourage their
> continued/renewed use.
>
>
>
> Are there examples of any other projects requiring names rather than UUIDs
> in ACLs, or for other purposes, that we can learn from?
>
>
>
> The idea discussed here [3] (not implemented) was that names could be
> supported in a JSON ACL format but should be resolved to UUIDs before
> persisting in Swift. That way a user’s name can change but since their
> request token is resolved to UUID then any persisted ACL would still match.
> As has already been mentioned in another reply on this thread, Swift has a
> JSON ACL format for “v1”/TempAuth account level ACLs [4] that could perhaps
> be implemented for keystoneauth and then extended to containers.
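A toy sketch of that resolve-before-persist idea (none of this is Swift's
actual code; `lookup_user_id` stands in for a real Keystone user query):

```python
# Resolve a JSON-style ACL that may contain mutable names into one that
# stores only immutable IDs, so later renames cannot break the ACL.
def resolve_acl(entries, lookup_user_id):
    resolved = []
    for entry in entries:
        # Prefer an ID if the caller supplied one; otherwise resolve the
        # (name, domain) pair via the lookup callable before persisting.
        user_id = entry.get("user_id") or lookup_user_id(
            entry["name"], entry.get("domain", "default"))
        resolved.append({"project_id": entry["project_id"],
                         "user_id": user_id})
    return resolved
```

The request token is already resolved to a UUID at auth time, so matching a
persisted ACL of UUIDs keeps working after a user is renamed.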
>
>
>
> Alistair
>
>
>
> [1]
> http://docs.openstack.org/developer/swift/overview_auth.html#access-control-using-keystoneauth
>
> [2] http://docs.openstack.org/developer/swift/middleware.html#keystoneauth
>
> [3] https://wiki.openstack.org/wiki/Swift/ContainerACLWithKeystoneV3
>
> [4] http://docs.openstack.org/developer/swift/overview_auth.html#tempauth
>
>
>
>
>
>
>
> *From:* Thai Q Tran [mailto:tqt...@us.ibm.com]
> *Sent:* 06 June 2016 21:06
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [swift][keystone] Using JSON as future ACL
> format
>
>
>
> Hello all,
>
> Hope everyone had a good weekend, and hope this email does not ruin your
> next.
> We had a small internal discussion at IBM and here are some of the
> findings that I will present to the wider community.
>
> 1. The ":" separator that swift currently uses is not entirely safe since
> LDAP can be configured to allow special characters in user IDs. It
> essentially means no special characters are safe to use as separators. I am
> not sure how practical this is, but its something to consider.
>
> 2. Since names are not guaranteed to be immutable, we should store
> everything via IDs. Currently, for backward compatibility reasons, Swift
> continues to support names for V2. Keep in mind that V2 does not
> guarantee that names are immutable either. Given this fact and what we know
> from #1, we can say that names are mutable for both V2 and V3, and that any
> separators we use are fallible. In other words, using a separator for names
> or ids will not work 100% of the time.
>
> 3. Keystone recently enabled URL safe naming of project and domains for
> their hierarchal work. As a by product of that, if the option is enabled,
> Swift can essentially use the reserved characters as separators. The list
> of reserved characters are listed below. The only question remaining, how
> does Keystone inform Swift that this option is enabled? Or Swift can add a
> separator option that is a subset of the characters below and leave it to
> the deployer to configure it.
>
> ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" |"$" | ","
>
>
> 
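The separator ambiguity from point 1 above can be shown in a couple of lines
(the project and user names here are invented):

```python
# If user names may legally contain the separator, an ACL string has more
# than one valid parse, and no split direction can recover the original pair.
acl = "projectA:svc:admin"   # project "projectA" + user "svc:admin"?
                             # or project "projectA:svc" + user "admin"?
left = acl.split(":", 1)     # parse assuming the project has no ':'
right = acl.rsplit(":", 1)   # parse assuming the user has no ':'
assert left == ["projectA", "svc:admin"]
assert right == ["projectA:svc", "admin"]
assert left != right         # both parses are plausible: the string is ambiguous
```

This is why storing IDs (or enforcing URL-safe names so a reserved character
is guaranteed free) is the only robust option.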

Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-06-30 Thread Matt Riedemann

On 6/30/2016 8:44 AM, Daniel P. Berrange wrote:

A bunch of people in Nova and upstream QEMU teams are trying to investigate
a long standing bug in live migration[1]. Unfortuntely the bug is rather
non-deterministic - eg on the multinode-live-migration tempest job it has
hit 4 times in 7 days, while on multinode-full tempest job it has hit
~70 times in 7 days.


For those that don't know, the multinode-live-migration job only runs 
the live migration tests in Tempest and it runs on ubuntu 16.04 nodes 
while the multinode-full job runs the live migration tests plus the 
normal tempest full job run, but on ubuntu 14.04 nodes. So the 
mn-live-migration job *may* be a bit more stable because it's running 
with newer libvirt/qemu.


We've at least noticed that another live migration bug isn't showing up 
on the dedicated xenial live migration job:


https://bugs.launchpad.net/nova/+bug/1539271



I have a test patch which hacks nova to download & install a special QEMU
build with extra debugging output[2]. Because of the non-determinism I need
to then run the multinode-live-migration & multinode-full tempest jobs
many times to try and catch the bug.  Doing this by just entering 'recheck'
is rather tedious because you have to wait for the 1+ hour turnaround time
between each recheck.

To get around this limitation I created a chain of 10 commits [3] which just
toggled some whitespace and uploaded them all, so I can get 10 CI runs
going in parallel. This worked remarkably well - at least enough to
reproduce the more common failure of multinode-full, but not enough for
the much rarer multinode-live-migration job.


The ascii art is a real treat.
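For anyone wanting to reproduce the trick, the whitespace-commit chain Daniel
describes amounts to something like this dry-run generator (the file name and
commit messages are invented; it prints the commands rather than running them):

```python
# Emit the git commands for the "N dummy commits => N parallel CI runs"
# trick: each whitespace-only commit becomes its own Gerrit change, and
# pushing the chain triggers an independent CI run per change.
def dummy_commit_chain(n, path=".ci-dummy"):
    cmds = []
    for i in range(1, n + 1):
        cmds.append("echo >> %s" % path)  # whitespace-only change
        cmds.append('git commit -am "DNM: dummy change %d/%d"' % (i, n))
    cmds.append("git review")  # push the whole chain to Gerrit
    return cmds


for cmd in dummy_commit_chain(3):
    print(cmd)
```

As noted below, this multiplies CI load across every job on the project, which
is exactly the resource problem Daniel raises.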



I could expand this hack and upload 100 dummy changes to get more jobs
running to increase chances of hitting the multinode-live-migration
failure. Out of the 16 jobs run on every Nova change, I only care about
running 2 of them. So to get 100 runs of the 2 live migration jobs I want,
I'd be creating 1600 CI jobs in total which is not too nice for our CI
resource pool :-(

I'd really love it if there was

 1. the ability to request checking of just specific jobs eg

  "recheck gate-tempest-dsvm-multinode-full"


FWIW people have asked for this before. I think it would be OK if there 
were a way to not change the overall verification score somehow because 
it could potentially invalidate earlier runs where multiple jobs failed 
but you're only rechecking one of them.




 2. the ability to request this recheck to run multiple
times in parallel. eg if i just repeat the 'recheck'
command many times on the same patchset # without
waiting for results

Any one got any other tips for debugging highly non-deterministic
bugs like this which only hit perhaps 1 time in 100, without wasting
huge amounts of CI resource as I'm doing right now ?

No one has ever been able to reproduce these failures outside of
the gate CI infra; indeed certain CI hosting providers seem worse
affected by the bug than others, so running tempest locally is not
an option.


Good point on the node providers, I hadn't noticed that before, but it 
definitely looks to be hitting OVH and OSIC nodes more than any others:


http://goo.gl/f0coZb



Regards,
Daniel

[1] https://bugs.launchpad.net/nova/+bug/1524898
[2] https://review.openstack.org/#/c/335549/5
[3] https://review.openstack.org/#/q/topic:mig-debug




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-06-30 Thread Mike Perez
On 12:27 Jun 27, Sean McGinnis wrote:
> I would like to nominate Scott D'Angelo to core. Scott has been very
> involved in the project for a long time now and is always ready to help
> folks out on IRC. His contributions [1] have been very valuable and he
> is a thorough reviewer [2].
> 
> Please let me know if there are any objections to this within the next
> week. If there are none I will switch Scott over by next week, unless
> all cores approve prior to then.

+1

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread Matt Riedemann

On 6/30/2016 12:55 PM, HU, BIN wrote:

I see, and thank you very much Dan. Also thank you Markus for unreleased 
release notes.

Now I understand that it is not a plugin point but an unstable interface. And 
there is a new "use_neutron" option for configuring Nova to use Neutron as its 
network backend.

When we use Neutron, there are ML2 and L3 plugins, so we can choose different 
backend providers to actually perform those network functions, for example 
integration with ODL.

Shall we foresee a situation where users can choose another network backend 
directly, e.g. ODL or ONOS? Under that circumstance, a stable plugin interface 
seems needed, which could provide end users with more options and flexibility 
in deployment.

What do you think?

Thanks
Bin
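
(For reference, the "use_neutron" option mentioned above is a plain nova.conf
setting; a minimal fragment, with the caveat that the default value has varied
by release, so check the release notes:)

```ini
# nova.conf -- tell Nova to use Neutron as its network backend
[DEFAULT]
use_neutron = True
```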



The nova compute API understands how the nova-network API and neutron 
APIs work. nova-network is deprecated [1], and we're deprecating the API 
proxies for network resources out of the nova API [2]. We're also 
dropping API extensibility [3]. Having plugin implementations for the 
networking (or image, or volume, or identity) API can break Nova's API 
which breaks users.


For anyone following along with nova development for the last several 
releases this shouldn't be a surprise. Plug points, hooks, and API 
extensions that can modify API requests/responses/resources are barriers 
to interoperability between OpenStack clouds. There is a big long thread 
[4] that goes into more details about why.


But these also present issues for nova development upstream because 
people report bugs when we break these unversioned, untested, 
non-contractual interfaces which downstream people are plugging into.


So if there is a use case that nova (or neutron) does not today support, 
we encourage people to not work in a vacuum but get involved with the 
development effort in the community, attend the summit design sessions, 
attend the meetups, be in IRC and the development mailing list, etc so 
that these things can be worked together in the open.


[1] https://review.openstack.org/#/c/310539/
[2] 
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/deprecate-api-proxies.html
[3] 
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/api-no-more-extensions.html

[4] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097349.html

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][calico] New networking-calico IRC meeting

2016-06-30 Thread Carl Baldwin
On Mon, Jun 20, 2016 at 8:42 AM, Neil Jerram  wrote:
> Calling everyone interested in networking-calico ...!  networking-calico has
> been around in the Neutron stadium for a while now, and it's way past time
> that we had a proper forum for discussing and evolving it - so I'm happy to
> be finally proposing a regular IRC meeting slot for it: [1].  A strawman
> initial agenda is up at [2].

I see that the meeting yaml has merged and I've tried to check
eavesdrop.o.o [3].  I figured that since this is a biweekly meeting,
it would help me avoid mistakes to download the ICS file and import it
into my calendar.  But I can't find this meeting on that page!  Has
this page stopped updating?

Carl

> [1] https://review.openstack.org/#/c/331689/
> [2] https://wiki.openstack.org/wiki/Meetings/NetworkingCalico

[3] http://eavesdrop.openstack.org/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-06-30 Thread Mooney, Sean K


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: Monday, June 27, 2016 9:21 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla +
> BiFrost integration
> 
> 
> 
> On 6/27/16, 11:19 AM, "Devananda van der Veen"
> 
> wrote:
> 
> >At a quick glance, this sequence diagram matches what I
> >envisioned/expected.
> >
> >I'd like to suggest a few additional steps be called out, however I'm
> >not sure how to edit this so I'll write them here.
> >
> >
> >As part of the installation of Ironic, and assuming this is done
> >through Bifrost, the Actor should configure Bifrost for their
> >particular network environment. For instance: what eth device is
> >connected to the IPMI network; what IP ranges can Bifrost assign to
> >physical servers; and so on.
> >
> >There are a lot of other options during the install that can be
> >changed, but the network config is the most important. Full defaults
> >for this roles' config options are here:
> >
> >https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifro
> s
> >t-i
> >ronic-install/defaults/main.yml
> >
> >and documentation is here:
> >
> >https://github.com/openstack/bifrost/tree/master/playbooks/roles/bifro
> s
> >t-i
> >ronic-install
> >
> >
> >
> >Immediately before "Ironic PXE boots..." step, the Actor must perform
> >an action to "enroll" hardware (the "deployment targets") in Ironic.
> >This could be done in several ways: passing a YAML file to Bifrost;
> >using the Ironic CLI; or something else.
> >
> >
> >"Ironic reports success to the bootstrap operation" is ambiguous.
> >Ironic does not currently support notifications, so, to learn the
> >status of the deployments, you will need to poll the Ironic API (eg,
> >"ironic node-list").
> >
> 
> Great,
> 
> Thanks for the feedback.  I'll integrate your changes into the sequence
> diagram when I have a free hour or so - whenever that is :)
> 
> Regards
> -steve
[Mooney, Sean K] I agree with most of Devananda's points and had come to similar
conclusions.

At a high level I think the workflow from 0 to cloud would be as follows,
assuming you have one Linux system.
- clone http://github.com/openstack/kolla && cd kolla
- tools/kolla-host build-host-deploy
  This will install ansible if not already installed, then invoke a playbook
  to install all build dependencies, generate kolla-build.conf, passwords.yml
  and global.yml, and install the kolla python package.
- configure kolla-build.conf as required
- tools/build.py or kolla-build to build images
- configure global.yml and/or a bifrost-specific file
  This would involve specifying a file that can be used with the bifrost
  dynamic inventory, configuring the network interface for bifrost to use,
  enabling ssh-key generation (or supplying a key to use) for connecting
  to the servers post deploy, and configuring diskimage-builder options or
  supplying a path to a file on the system to use as your OS image.
- tools/kolla-host deploy-bifrost
  Deploys the bifrost container, copies images/keys, then bootstraps bifrost
  and starts its services.
- tools/kolla-host deploy-servers
  Invokes bifrost enroll and deploy dynamic, then polls until all servers
  are provisioned or a server fails.
- tools/kolla-host bootstrap-servers
  Installs all kolla deploy dependencies (Docker etc.). This will also
  optionally do things such as configure hugepages, cpu isolation, firewall
  settings, or any other platform-level config, for example applying labels
  to ceph disks. This role will reboot the remote server at the end if
  required, e.g. after installing the wily kernel on Ubuntu 14.04.
- configure global.yml as normal
- tools/kolla-ansible prechecks (this should now pass)
- tools/kolla-ansible deploy
- profit

I think this largely agrees with the diagram you proposed but has a couple of 
extra steps/details.
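The deploy-servers step above ultimately has to poll the Ironic API until every node reaches a terminal provision state, as Devananda noted earlier in the thread. A rough sketch of such a polling loop follows; the `list_nodes` callable and the state strings are illustrative stand-ins (a real version would wrap `ironic node-list` or python-ironicclient), not actual Bifrost/Kolla code:

```python
import time

# Simplified terminal states of Ironic's provisioning state machine.
SUCCESS_STATE = "active"
FAILURE_STATE = "deploy failed"

def wait_for_provisioning(list_nodes, interval=10, timeout=3600,
                          sleep=time.sleep, clock=time.monotonic):
    """Poll until every node is 'active', or fail fast on a failed deploy.

    list_nodes is a callable returning [{'name': ..., 'provision_state': ...}];
    in a real deployment it would wrap 'ironic node-list' (or the
    python-ironicclient equivalent), since Ironic currently offers no
    notifications to subscribe to.
    """
    deadline = clock() + timeout
    while True:
        nodes = list_nodes()
        failed = [n["name"] for n in nodes
                  if n["provision_state"] == FAILURE_STATE]
        if failed:
            raise RuntimeError("deploy failed on: %s" % ", ".join(failed))
        if all(n["provision_state"] == SUCCESS_STATE for n in nodes):
            return nodes
        if clock() > deadline:
            raise RuntimeError("timed out waiting for provisioning")
        sleep(interval)
```

Injecting sleep/clock keeps the loop testable; a fixed interval is the simplest choice and could later be replaced by backoff or, if Ironic grows notifications, removed entirely.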

> 
> >
> >
> >Cheers,
> >--Devananda
> >
> >On 06/23/2016 06:54 PM, Steven Dake (stdake) wrote:
> >> Hey folks,
> >>
> >> I created the following sequence diagram to show my thinking on
> >>Ironic  integration.  I recognize some internals of the recently
> >>merged bifrost changes  are not represented in this diagram.  I would
> >>like to see a bootstrap action do  all of the necessary things to
> >>bring up BiFrost in a container using Sean's WIP  Kolla patch
> followed
> >>by bare metal minimal OS load followed by Kolla dependency  software
> >>(docker-engine, docker-py, and ntpd) loading and initialization.
> >>
> >> This diagram expects ssh keys to be installed on the deployment
> >>targets via BiFrost.
> >>
> >> https://creately.com/diagram/ipt09l352/ROMDJH4QY1Avy1RYhbMUDraaQ4%3D
> >>
> >> Thoughts welcome, especially from folks in the Ironic community or
> >>Sean who is  leading this work in Kolla.
> >>
> >> Regards,
> >> -steve
> >>
> >>
> >>
> >>
> 

[openstack-dev] [Zaqar] Nominate Thomas Herve for Zaqar core

2016-06-30 Thread Fei Long Wang

Hi team,

I would like to propose adding Thomas Herve (therve) to the Zaqar core
team. TBH, I drafted this mail about 6 months ago; the reason you are only
seeing it now is that I wasn't sure whether Thomas could dedicate his time
to Zaqar (he is a very busy man). But as you can see, I was wrong. He has
continually contributed a lot of high-quality patches to Zaqar [1] and a
lot of inspiring comments for this project and team. I'm sure he would
make an excellent addition to the team. If no one objects, I'll proceed and
add him in a week from now.


[1] 
http://stackalytics.com/?module=zaqar-group&metric=commits&release=all&user_id=therve



--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Adam Lawson
Okay I'll bite. I'm a working owner; Cloud/OpenStack/SDN architect slash
OpenStack SI business owner working with companies trying to extract value
from technology they don't understand. Or in ways they aren't familiar
with. Or with code they don't have time to build/maintain themselves.

This working group seems like we'll get to look at things from the
perspective of "what is openstack and how can we make it better for those
who want to use it", among other things. The sad reality is that SIs and
product vendors make more money if OpenStack remains complicated, so we'll
be working against a powerful money machine that funds this project. I want
OpenStack to address real non-theoretical and non-marketing-BS cloud
problems that are based in today's reality and in advance of tomorrow's
challenges. I hope we'll get that chance.

Today, it seems to me that this WG would focus on un-crunching code and
design, and evangelize opportunities for potential improvements for
consideration by the greater OpenStack community and the TC. No successful
architecture group I've ever participated in wondered how we can compel
others to accept our recommendations. Leave that to the business/OpenStack
governance.

Ultimately, I totally agree with Clint that if we avoid too much focus
on design enforcement, that's our first win. And in my mind, our designs
will not be absorbed nor accepted de facto anyway. I think the value will
be recognized over time, though, and I'm totally down with that.

I'd like to participate in this, Clint, if there's room for one more. ; )

//adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Thu, Jun 30, 2016 at 11:16 AM, Joshua Harlow 
wrote:

> Mike Perez wrote:
>
>> On 11:31 Jun 20, Clint Byrum wrote:
>>
>>> Excerpts from Joshua Harlow's message of 2016-06-17 15:33:25 -0700:
>>>
 Thanks for getting this started Clint,

 I'm happy and excited to be involved in helping try to guide the whole
 ecosystem together (it's also why I like being in oslo) to an
 architecture that is more cohesive (and is more of something that we can
 say to our current or future children that we were all involved and
 proud to be involved in creating/maturing...).

 At a start, for said first meeting, any kind of agenda come to mind, or
 will it be more of an informal gathering to start (either is fine with me)?

 I've been hesitant to fill this in too much as I'm still forming the
>>> idea, but here are the items I think are most compelling to begin with:
>>>
>>> * DLM's across OpenStack -- This is already under way[1], but it seems to
>>>have fizzled out. IMO that is because there's no working group who
>>>owns it. We need to actually write some plans.
>>>
>>
>> Not meaning to nitpick, but I don't think this is a compelling reason for
>> the
>> architecture working group. We need a group that wants to spend time on
>> reviewing the drivers being proposed. This is like saying we need the
>> architecture working group because no working group is actively reshaping
>> quotas
>> cross-project.
>>
>> With that said, I can see the architecture working group providing
>> information
>> on to a group actually reviewing/writing drivers for DLM and saying "Doing
>> mutexes with the mysql driver is crazy, I brought it up in an environment and
>> have
>> such information to support that it is not reliable". THAT is useful and I
>> don't feel like people do enough of.
>>
>> My point is call your working group whatever you want (The Purple
>> Parrots), and
>> just go spearhead DLM, but don't make it about one of the most compelling
>> reasons for the existence of this group.
>>
>
> Sadly I feel if such a group formed it wouldn't be addressing the larger
> issue that this type of group is trying to address; the purple parrots
> would be a tactical team that could go do what you said, but that doesn't
> address the larger strategic goal of trying to improve the full situation
> (technical and architectural inconsistencies and 'fizzling out' solutions)
> that IMHO needs to be worked through.
>
> So yes, the tactical group needs to exist, and overall it likely will, but
> there also needs to be a strategic group that is being proactive about the
> issues and not just tactically reacting to things (which isn't imho
> healthy).
>
> -Josh
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Joshua Harlow

Mike Perez wrote:

On 11:31 Jun 20, Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2016-06-17 15:33:25 -0700:

Thanks for getting this started Clint,

I'm happy and excited to be involved in helping try to guide the whole
ecosystem together (it's also why I like being in oslo) to an
architecture that is more cohesive (and is more of something that we can
say to our current or future children that we were all involved and
proud to be involved in creating/maturing...).

At a start, for said first meeting, any kind of agenda come to mind, or
will it be more of an informal gathering to start (either is fine with me)?


I've been hesitant to fill this in too much as I'm still forming the
idea, but here are the items I think are most compelling to begin with:

* DLM's across OpenStack -- This is already under way[1], but it seems to
   have fizzled out. IMO that is because there's no working group who
   owns it. We need to actually write some plans.


Not meaning to nitpick, but I don't think this is a compelling reason for the
architecture working group. We need a group that wants to spend time on
reviewing the drivers being proposed. This is like saying we need the
architecture working group because no working group is actively reshaping quotas
cross-project.

With that said, I can see the architecture working group providing information
on to a group actually reviewing/writing drivers for DLM and saying "Doing
mutexes with the mysql driver is crazy, I brought it up in an environment and have
such information to support that it is not reliable". THAT is useful and I
don't feel like people do enough of.

My point is call your working group whatever you want (The Purple Parrots), and
just go spearhead DLM, but don't make it about one of the most compelling
reasons for the existence of this group.


Sadly I feel if such a group formed it wouldn't be addressing the larger 
issue that this type of group is trying to address; the purple parrots 
would be a tactical team that could go do what you said, but that doesn't 
address the larger strategic goal of trying to improve the full 
situation (technical and architectural inconsistencies and 'fizzling 
out' solutions) that IMHO needs to be worked through.


So yes, the tactical group needs to exist, and overall it likely will, 
but there also needs to be a strategic group that is being proactive 
about the issues and not just tactically reacting to things (which isn't 
imho healthy).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hangzhou Bug Smash will be held from July 6 to 8

2016-06-30 Thread Will Zhou
Hello Fred,

Great event! How can we work remotely with the team? Thanks.

On Thu, Jun 30, 2016 at 11:05 PM Liyongle (Fred) 
wrote:

> Hi OpenStackers,
>
> The 4th China OpenStack Bug Smash, hosted by CESI, Huawei, and Intel, will
> be held in Hangzhou, China from July 6 to 8 (Beijing time), from 01:00 July
> 6 to 06:00 July 8 UTC. The target is to get bugs fixed before the newton-2
> milestone [1].
>
> Around 50 stackers will fix the bugs in nova, cinder, neutron, magnum,
> ceilometer, heat, ironic, smaug, freezer, oslo, murano and kolla. You are
> welcome to provide any support, or to work remotely with the team.
>
> Please find this bug smash home page at [2], and the bugs list in [3]
> (under preparation).
>
> [1] http://releases.openstack.org/newton/schedule.html
> [2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Newton-Hangzhou
> [3] https://etherpad.openstack.org/p/hackathon4_all_list
>
> Best Regards
>
> Fred (李永乐)
>
> China OpenStack Bug Smash Team
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

-
周正喜
Mobile: 13701280947
WeChat: 472174291
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread HU, BIN
I see, and thank you very much Dan. Also thank you Markus for the unreleased 
release notes.

Now I understand that it is not a plugin interface and is not stable, and that 
there is a new "use_neutron" option for configuring Nova to use Neutron as its 
network backend.

When we use Neutron, there are ML2 and L3 plugins, so we can choose different 
backend providers to actually perform those network functions - for example, 
integration with ODL.

Shall we foresee a situation where users can choose another network backend 
directly, e.g. ODL or ONOS? Under that circumstance, a stable plugin interface 
seems needed, which could give end users more options and flexibility in 
deployment.

What do you think?

Thanks
Bin

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Thursday, June 30, 2016 10:30 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova 
Mitaka Release

> Just curious - what is the motivation of removing the plug-ability 
> entirely? Because of significant maintenance effort?

It's not a plugin interface and has never been stable. We've had a long-running 
goal of removing all of these plug points where we don't actually expect people 
to write stable plugins.

If you want to write against an unstable internal-only API and chase every 
little change we make to it, then just patch the code locally.
Using these plug points is effectively the same thing.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread Dan Smith
> Just curious - what is the motivation of removing the plug-ability
> entirely? Because of significant maintenance effort?

It's not a plugin interface and has never been stable. We've had a
long-running goal of removing all of these plug points where we don't
actually expect people to write stable plugins.

If you want to write against an unstable internal-only API and chase
every little change we make to it, then just patch the code locally.
Using these plug points is effectively the same thing.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-06-30 Thread Jeremy Stanley
On 2016-06-30 14:07:53 +0000 (+0000), Steven Dake (stdake) wrote:
[...]
> If it does have some special meaning or requirements beyond the
> "we will freeze on the freeze deadline" could someone enumerate
> those?
[...]

As far as I know it still means that release activities for the
deliverable are handled by the Release Management team. A quick
parsing of the projects.yaml indicates that only ~21% (125 out of
582) of the deliverables for official projects have that tag
applied.
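The quick parsing Jeremy mentions is easy to reproduce once projects.yaml is loaded (e.g. with yaml.safe_load). The inline dict below is a trimmed illustration of the file's team -> deliverables -> tags shape, not the real data:

```python
# Illustrative stand-in for governance projects.yaml after YAML loading.
projects = {
    "nova": {"deliverables": {
        "nova": {"tags": ["release:managed"]},
        "python-novaclient": {"tags": []},
    }},
    "kolla": {"deliverables": {
        "kolla": {"tags": []},
    }},
}

def managed_ratio(projects, tag="release:managed"):
    """Count deliverables carrying the given tag across all teams."""
    total = managed = 0
    for team in projects.values():
        for info in team.get("deliverables", {}).values():
            total += 1
            if tag in info.get("tags", []):
                managed += 1
    return managed, total

managed, total = managed_ratio(projects)
print("%d of %d (%.0f%%)" % (managed, total, 100.0 * managed / total))
# prints: 1 of 3 (33%)
```

Run against the real projects.yaml this should reproduce the ~21% (125 of 582) figure quoted above, assuming the tag layout has not moved.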
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread HU, BIN
Dan,

Thank you for the information.

Just curious - what is the motivation of removing the plug-ability entirely? 
Because of significant maintenance effort?

Thanks
Bin

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Thursday, June 30, 2016 9:52 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova 
Mitaka Release

> For those deprecated options, does it mean it won't be supported in 
> Mitaka and future release at all? Or is there a grace period so that 
> we, as a user, can gradually transition from Liberty to Mitaka?

The deprecation period is usually one cycle for something like this.
That means people get a chance to clean up their configs before we remove it 
which would cause an error if it's still in there.

> What is the rationale of some deprecated options, for example, 
> "[DEFAULT]network_api_class". It seems to me that it provides end 
> users with flexibility in configuring backends.

The rationale is that this is not a plugin interface. It's not stable in any 
way and not something we want to be randomly plug-able by people.
The reason it's in there now is historical. We have removed almost all of the 
other plug points that work like this, and had neglected to remove this one 
because it is used internally by our nova-network/neutron switching. However, 
it need not be exposed for that, and the need for it even internally will be 
going away soon anyway.

> So what is the rationale of deprecating this option and is there 
> an equivalent method in Mitaka to replace this option?

There is not, and there are no plans to add one since we want to remove the 
plug-ability entirely.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][upgrades] upgrade loadbalancer to new amphora image

2016-06-30 Thread Doug Wiegley

> On Jun 30, 2016, at 7:15 AM, Ihar Hrachyshka  wrote:
> 
>> 
>> On 30 Jun 2016, at 01:16, Brandon Logan  wrote:
>> 
>> Hi Ihar, thanks for starting this discussion.  Comments in-line.
>> 
>> After writing my comments in line, I might now realize that you're just
>> talking about documenting  a way for a user to do this, and not have
>> Octavia handle it at all.  If that's the case I apologize for my reading
>> comprehension, but I'll keep my comments in case I'm wrong.  My brain is
>> not working well today, sorry :(
> 
> Right. All the mechanisms needed to apply the approach are already in place 
> in both Octavia and Neutron as of Mitaka. The question is mostly about 
> whether the team behind the project may endorse the alternative approach in 
> addition to whatever is in the implementation in regards to failovers by 
> giving space to describe it in the official docs. I don’t suggest that the 
> approach is the sole documented, or that octavia team need to implement 
> anything. [That said, it may be wise to look at providing some smart scripts 
> on top of neutron/octavia API that would realize the approach without putting 
> the burden of multiple API calls onto users.]

I don’t have a problem documenting it, but I also wouldn’t personally want to 
recommend it.

We’re adding a layer of NAT, which has performance and HA implications of its 
own.

We’re adding FIPs, when the neutron advice for “simple nova-net like 
deployment” is provider nets and linuxbridge, which don’t support them.

Thanks,
doug


> 
>> 
>> Thanks,
>> Brandon
>> 
>> On Wed, 2016-06-29 at 18:14 +0200, Ihar Hrachyshka wrote:
>>> Hi all,
>>> 
>>> I was looking lately at upgrades for octavia images. This includes using 
>>> new images for new loadbalancers, as well as for existing balancers.
>>> 
>>> For the first problem, the amp_image_tag option that I added in Mitaka 
>>> seems to do the job: all new balancers are created with the latest image 
>>> that is tagged properly.
>>> 
>>> As for balancers that already exist, the only way to get them to use a new 
>>> image is to trigger an instance failure, that should rebuild failed nova 
>>> instance, using the new image. AFAIU the failover process is not currently 
>>> automated, requiring from the user to set the corresponding port to DOWN 
>>> and waiting for failover to be detected. I’ve heard there are plans to 
>>> introduce a specific command to trigger a quick-failover, that would 
>>> streamline the process and reduce the time needed for the process because 
>>> the failover would be immediately detected and processed instead of waiting 
>>> for keepalived failure mode to occur. Is it on the horizon? Patches to 
>>> review?
>> 
>> Not that I know of and with all the work slated for Newton, I'm 99% sure
>> it won't be done in Newton.  Perhaps Ocata.
> 
> I see. Do we maybe want to provide a smart script that would help to trigger 
> a failover with neutron API? [detect the port id, set it to DOWN, …]
> 
>>> 
>>> While the approach seems rather promising and may be applicable for some 
>>> environments, I have several concerns about the failover approach that we 
>>> may want to address.
>>> 
>>> 1. HA assumption. The approach assumes there is another node running 
>>> available to serve requests while instance is rebuilding. For non-HA 
>>> amphoras, it’s not the case, meaning the image upgrade process has a 
>>> significant downtime.
>>> 
>>> 2. Even if we have HA, for the time of instance rebuilding, the balancer 
>>> cluster is degraded to a single node.
>>> 
>>> 3. (minor) during the upgrade phase, instances that belong to the same HA 
>>> amphora may run different versions of the image.
>>> 
>>> What’s the alternative?
>>> 
>>> One idea I was running with for some time is moving the upgrade complexity 
>>> one level up. Instead of making Octavia aware of upgrade intricacies, allow 
>>> it to do its job (load balance), while use neutron floating IP resource to 
>>> flip a switch from an old image to a new one. Let me elaborate.
>> I'm not sure I like the idea of tying this to floating IP as there are
>> deployers who do not use floating IPs.  Then again, we are currently
>> depending on allowed address pairs which is also an extension, but I
>> suspect its probably deployed in more places.  I have no proof of this
>> though.
> 
> I guess you already deduced that, but just for the sake of completeness: no, 
> I don’t suggest that octavia ties its backend to FIPs. I merely suggest to 
> document the proposed approach as ‘yet another way of doing it’, at least 
> until we tackle the first two concerns raised.
> 
>>> 
>>> Let’s say we have a load balancer LB1 that is running Image1. In this 
>>> scenario, we assume that access to LB1 VIP is proxied through a floating ip 
>>> FIP that points to LB1 VIP. Now, the operator uploaded a new Image2 to 
>>> glance registry and tagged it for octavia usage. The user now wants to 
>>> 
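Earlier in the thread Ihar floated the idea of a "smart script" that triggers a failover via the neutron API by detecting the amphora's port and setting it to DOWN. A minimal sketch of that idea, assuming a client object in the style of python-neutronclient's v2 Client (list_ports/update_port); this is illustrative, not Octavia code:

```python
def trigger_failover(client, amphora_ip, network_id):
    """Force an amphora failover by setting its neutron port DOWN.

    client is assumed to behave like python-neutronclient's v2 Client;
    amphora_ip and network_id identify the backend port to take down.
    """
    ports = client.list_ports(network_id=network_id)["ports"]
    target = next(
        p for p in ports
        if any(ip["ip_address"] == amphora_ip for ip in p["fixed_ips"])
    )
    # Downing the port lets keepalived on the peer detect the failure,
    # which in turn triggers the failover/rebuild path.
    client.update_port(target["id"], {"port": {"admin_state_up": False}})
    return target["id"]
```

This only automates the "set the port to DOWN" half; the operator would still wait for the failover to be detected, per the discussion above.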

Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-06-30 Thread Doug Wiegley

> On Jun 30, 2016, at 7:01 AM, Ihar Hrachyshka  wrote:
> 
> 
>> On 30 Jun 2016, at 06:03, Kosnik, Lubosz  wrote:
>> 
>> Like Doug said, the Amphora is supposed to be a black box. It is supposed to
>> get some data - like info in /etc/defaults - and do everything inside on its own.
>> Everyone will be able to prepare their own implementation of this image 
>> without mixing things between each other.
> 
> That would be correct if the image would not be maintained by the project 
> itself. Then indeed every vendor would prepare their own image, maybe 
> collaborate on common code for that. Since this code is currently in octavia, 
> we kinda need to plug into it for other vendors. Otherwise you pick one and 
> give it a preference.

No, I disagree with that premise, because it pre-supposes that we have any 
interest in supporting *this exact reference implementation* for any period of 
time.

Octavia has a few goals:

- Present an openstack loadbalancing API to operators and users.
- Put VIPs on openstack clouds, that do loadbalancy things, and are always 
there and working.
- Upgrade seamlessly.

That’s it. A few more constraints:

- It’s an openstack project, so it must be python, with our supported version, 
running on our supported OSs, using our shared libraries, being open, level 
playing field, etc…

Nowhere in there is the amp concept, or that we must always require nova, or 
that said amps must run a REST agent, or anything about the load-balancing 
backend.The amp itself, and all the code written for it, is just a means to an 
end. If the day comes tomorrow that the amp agent and amp concept is silly, as 
long as we have a seamless upgrade and those VIPs keep operating, we are under 
no obligation as a project to keep using that amp code or maintaining it. Our 
obligation is to the operators and users.

The amp “agent” code has already gone through two iterations (direct ssh, now a 
silly rest agent on the amp). We’ve already discussed that the current ubuntu 
based amp is too heavy-weight and needs to change. Tomorrow it could be based 
on a microlinux. And the day after that, cirros plus a static nginx. And the 
day after that, a docker swarm with a proxy running on a simulated minecraft 
redstone machine (well, we’d have to find an open-source clone of minecraft, 
first.)

The point being, as a project contributor, I have zero interest in signing up 
for long-term maintenance of something that 1) is not user visible, and 2) is 
likely to change; all for the sake of any particular vendor's sensibilities. The 
current octavia will run just fine on ubuntu or redhat, and the current amp 
image will launch just fine on a nova run by either, too.

That said, every part of octavia is pluggable with drivers, and while I will 
personally resist adding multiple reference drivers in-tree, it doesn’t mean 
everyone will, nor does it preclude using shims and external repos.

That’s just my opinion, but I’d hate to see us tying our own hands by adding 
support and maintenance burden at this early stage, beyond delivering VIPs to 
users. I’d be more inclined to see the amp image itself cease to exist inside 
an openstack project, before I want to spend the time supporting lots of them, 
for non-technical reasons.

Thanks,
doug



> 
> But if we can make the agent itself vendor agnostic, so that the only 
> differentiation would happen around components stuffed into the image 
> (kernel, haproxy version, security tweaks, …), then it will obviously be a 
> better path than trying to template the agent for multiple vendors.

> 
> A silly question: why does the agent even need to configure the network using 
> distribution mechanisms and not just call ip link and friends?
> 
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread Dan Smith
> For those deprecated options, does it mean it won’t be supported in 
> Mitaka and future release at all? Or is there a grace period so that 
> we, as a user, can gradually transition from Liberty to Mitaka?

The deprecation period is usually one cycle for something like this.
That means people get a chance to clean up their configs before we
remove it which would cause an error if it's still in there.

> What is the rationale of some deprecated options, for example, 
> “[DEFAULT]network_api_class”. It seems to me that it provides end 
> users with flexibility in configuring backends.

The rationale is that this is not a plugin interface. It's not stable in
any way and not something we want to be randomly plug-able by people.
The reason it's in there now is historical. We have removed almost all
of the other plug points that work like this, and had neglected to
remove this one because it is used internally by our
nova-network/neutron switching. However, it need not be exposed for
that, and the need for it even internally will be going away soon anyway.

> So what is the rationale of deprecating this option and is there
> an equivalent method in Mitaka to replace this option?

There is not, and there are no plans to add one since we want to remove
the plug-ability entirely.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread Markus Zoeller
On 30.06.2016 18:35, HU, BIN wrote:
> Hello Nova team,
> 
> I am a newbie of Nova. I read your documentation 
> http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/nova.html
>  regarding new, updated and deprecated options in Mitaka for Compute. I have 
> a few questions:
> 
> 
> -  For those deprecated options, does it mean it won't be supported 
> in Mitaka and future releases at all? Or is there a grace period so that we, 
> as a user, can gradually transition from Liberty to Mitaka?
> 

No, they are still supported in Mitaka but will be removed during Newton or
get a new name. See [1] for more information.


> -  What is the rationale of some deprecated options, for example, 
> "[DEFAULT] network_api_class". It seems to me that it provides end users with 
> flexibility in configuring backends. So what is the rationale of deprecating 
> this option and is there an equivalent method in Mitaka to replace this option?
> 

The rationales are available at [1]. The option you asked for is
explained there.


> Thank you very much
> Bin
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

References:
[1] http://docs.openstack.org/releasenotes/nova/unreleased.html

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DIB][TripleO] Refreshing the DIB specs process

2016-06-30 Thread Gregory Haynes
Hello everyone,

I believe our DIB specs process is in need of a refresh. Currently, we
seem to avoid specs altogether. I think this has worked while we have
mostly maintained our status-quo of fixing bugs which pop up and adding
fairly straightforward elements. Recently, however, we seem to be making
a push toward some larger changes which require more careful thought and
discussion. I think this is great and I really want this type of
development to continue and so I would like to steer us towards using
specs for these larger changes in order to keep our development process
sustainable.

The biggest barrier I see to us using specs is that historically our
specs have lived in the tripleo-specs repo. When we had a significant
overlap between tripleo-core and dib-core this worked well, but lately
many of the dib reviewers are not tripleo-core. This means that if we
were to use tripleo-specs we would not be able to approve our own specs
(which, obviously, doesn't make a lot of sense). As a result, I'd like
to propose the creation of a specs directory inside of the
diskimage-builder repo[1] which we use for our specs process.

Additionally, one of the goals I have for our specs process is to not
stifle the ability for developers to quickly fix bugs. Relative to other
projects we seem to have a high rate of trivial bugfixes which come in
(I believe due to the nature of the problem we are solving) and we need
to not place unnecessary roadblocks on getting those merged. As in
other projects, I have documented a trivial specs clause in our specs
process so we can hopefully facilitate this.

Cheers,
Greg

1: https://review.openstack.org/#/c/336109/

-- 
  Gregory Haynes
  g...@greghaynes.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Ci] New 'fuel-nailgun' gate job

2016-06-30 Thread Dmitry Kaiharodsev
Hi to all,

please be informed that we've enabled voting on the mentioned Jenkins jobs.

For any additional questions please use our #fuel-infra IRC channel

On Tue, Jun 7, 2016 at 3:59 PM, Dmitry Kaiharodsev <
dkaiharod...@mirantis.com> wrote:

> Hi to all,
>
> please be informed that starting from today we're launching gate job [1]
> for 'fuel-nailgun' package [2] in non-voting mode.
>
> Mentioned job will be triggered on each commit and will perform steps:
> - build a package from the commit
> - run system tests scenario [3] with using created package
> - show system test result in current patchset without voting
>
> We're going to enable voting mode when it will be approved from 'fuel-qa'
> team side.
> Additional notification regarding voting mode will be sent in this thread.
>
> For any additional questions please use our #fuel-infra IRC channel
>
> [1] https://bugs.launchpad.net/fuel/+bug/1557524
> [2] https://github.com/openstack/fuel-nailgun-agent
> [3]
> https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_nailgun_agent.py#L38-45
>
> --
> Kind Regards,
> Dmitry Kaigarodtsev
> IRC: dkaigarodtsev
>



-- 
Kind Regards,
Dmitry Kaigarodtsev
IRC: dkaigarodtsev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO deep dive hour?

2016-06-30 Thread James Slagle
On Wed, Jun 29, 2016 at 1:01 PM, Jason Rist  wrote:
> On 06/28/2016 04:00 PM, James Slagle wrote:
>> We've got some new contributors around TripleO recently, and I'd like
>> to offer up a "TripleO deep dive hour".
>>
>> The idea is to spend 1 hour a week in a high bandwidth environment
>> (Google Hangouts / Bluejeans / ???) to deep dive on a TripleO related
>> topic. The topic could be anything TripleO related, such as general
>> onboarding, CI, networking, new features, etc.
>>
>> I'm by no means an expert on all those things, but I'd like to
>> facilitate the conversation and I'm happy to lead the first few
>> "dives" and share what I know. If it proves to be a popular format,
>> hopefully I can convince some other folks to lead discussions on
>> various topics.
>>
>> I think it'd be appropriate to record these sessions so that what is
>> discussed is available to all. However, I don't intend these to be a
>> presentation format, and instead more of a Q&A discussion. If I don't
>> get any ideas for topics though, I may choose to prepare something to
>> present :).
>>
>> Our current meeting time of day at 1400 UTC seems to suit a lot of
>> folks, so how about 1400 UTC on Thursdays? If folks think this is
>> something that would be valuable and want to do it, we could start
>> next Thursday, July 7th.
>>
>>
>
> This sounds very useful. Will you be sending out some sort of invite or
> notification once you've decided you're going to hold it?

It sounds like the time will be 1400 UTC on Thursdays.

I've added a poll to the etherpad to see what tool works best for the
most people:
https://etherpad.openstack.org/p/tripleo-deep-dive-topics

I've also added a Proposed Topics for July 7th if anyone has any ideas
for something burning they want to discuss. I'll put a few details
there as well.

Based on responses, I will add the details to that etherpad by the end
of day Tuesday July 5th so that everyone can see how to connect on
Thursday.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2016-06-30 Thread Everett Toews
Greetings OpenStack community,

A few interesting developments in the API WG this week.

The API WG reviewed the new Glance Artifact Repository (aka Glare) API [4]. The 
team was already adhering to most of the API WG guidelines [3] and after some 
reviews they were able to get excellent coverage of the guidelines for their 
API. Kudos to the team!

Based on some new information, a couple of guidelines have been abandoned. The 
"Add version discovery guideline" [5] was abandoned when we realized we have a 
very high level conflict here with the microversion version discovery 
guideline. The "Add guideline for Experimental APIs" [6] was abandoned when the 
author decided to discuss it further and explore the alternate direction 
pointed in the reviews.

# Recently merged guidelines

Nothing new in the last two weeks.
  
# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.

None this week

# Guidelines currently under review

These are guidelines that the working group are debating and working on for 
consistency and language. We encourage any interested parties to join in the 
conversation.

* Get rid of the DRAFT warning at the top of guidelines
  https://review.openstack.org/#/c/330687/
* Remove "Conveying error/fault information" section
  https://review.openstack.org/#/c/330876/
* Add the beginning of a set of guidelines for URIs
  https://review.openstack.org/#/c/322194/
* Add description of pagination parameters
  https://review.openstack.org/190743

Note that some of these guidelines were introduced quite a long time ago and 
need to either be refreshed by their original authors, or adopted by new 
interested parties. If you're the author of one of these older reviews, please 
come back to it or we'll have to mark it abandoned.

# API Impact reviews currently open

Reviews marked as APIImpact [1] are meant to help inform the working group 
about changes which would benefit from wider inspection by group members and 
liaisons. While the working group will attempt to address these reviews 
whenever possible, it is highly recommended that interested parties attend the 
API-WG meetings [2] to promote communication surrounding their reviews.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [3].

Thanks for reading and see you next week!

[1] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z
[2] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
[3] http://specs.openstack.org/openstack/api-wg/
[4] https://review.openstack.org/#/c/283136/
[5] https://review.openstack.org/#/c/254895/
[6] https://review.openstack.org/#/c/273158/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-06-30 Thread HU, BIN
Hello Nova team,

I am a newbie of Nova. I read your documentation 
http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/nova.html 
regarding new, updated and deprecated options in Mitaka for Compute. I have a 
few questions:


-  For those deprecated options, does it mean it won't be supported in 
Mitaka and future releases at all? Or is there a grace period so that we, as a 
user, can gradually transition from Liberty to Mitaka?

-  What is the rationale of some deprecated options, for example, 
"[DEFAULT] network_api_class". It seems to me that it provides end users with 
flexibility in configuring backends. So what is the rationale of deprecating 
this option and is there an equivalent method in Mitaka to replace this option?

Thank you very much
Bin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] inclusion of charm-helpers (LGPL licensed)

2016-06-30 Thread Billy Olsen
I suspect that the reactive charming model wouldn't have this issue
due to the ability to essentially statically link the libraries via
wheels/pip packages. If that's the case, it's likely possible to
follow along the same lines as the base-layer charm and bootstrap the
environment using pip/wheel libraries included at build time. As I see
it, this would require:

* Updates to the process/tooling for pushing to the charm store
* Update the install/upgrade-charm hook to bootstrap the environment
with the requirements files
* If using virtualenv (not a requirement in my mind), then each of the
hooks needs to be bootstrapped to ensure that they are running within
the virtualenv.

To make life easier in development mode, the charms can download from
pypi if the linked wheel/pip package isn't available - it saves a
build step before deployment, though certainly for the published
versions the statically linked libraries should be included (which,
from my understanding, I believe the licensing allows and why the
reactive charming/layered model wouldn't have this issue).
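As a rough sketch of the bootstrap step this would imply, the hook could build
a pip invocation that resolves only from the wheels bundled into the charm.
The paths, file names, and directory layout below are my assumptions for
illustration, not anything the charm store mandates:

```python
import sys
from pathlib import Path

def bootstrap_cmd(charm_dir):
    """Build the pip invocation a hook could run to install the charm's
    statically bundled dependencies without ever contacting pypi."""
    charm = Path(charm_dir)
    return [
        sys.executable, "-m", "pip", "install",
        "--no-index",                            # never fall back to pypi
        "--find-links", str(charm / "wheels"),   # resolve from bundled wheels
        "-r", str(charm / "requirements.txt"),
    ]

# In a real install/upgrade-charm hook this list would be passed to
# subprocess.check_call(); here we just show the command.
cmd = bootstrap_cmd("/var/lib/juju/charm")
print(" ".join(cmd))
```

For the development mode described above, dropping `--no-index` would let
missing wheels fall back to pypi, while published charms ship with the
statically linked wheels included.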


On Tue, Jun 28, 2016 at 6:29 AM, James Page  wrote:
> Hi All
>
> Whilst working on the re-licensing of the OpenStack charms to Apache 2.0,
> its apparent that syncing and inclusion of the charm-helpers python module
> directly into the charm is not going to work from a license compatibility
> perspective. charm-helpers is LGPL v3 (which is OK for a runtime dependency
> of an OpenStack project - see [0]).
>
> We already have a plan in place to remove the inclusion of charm-helpers for
> execution of functional tests, but we need to come up with a solution to the
> runtime requirement for charm-helpers, preferably one that does not involve
> direct installation at deploy time from either pypi or from
> lp:charm-helpers.
>
> Thoughts? ideas?
>
> Cheers
>
> James
>
> [0] http://governance.openstack.org/reference/licensing.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-06-30 Thread Chris Friesen

On 06/29/2016 09:13 PM, Edward Leafe wrote:

On Jun 29, 2016, at 10:05 PM, Matt Riedemann 
wrote:



2. The updated_at field is also empty, should we sync the updated_at
time to the created_at time when we create the action and also update
it whenever the action status changed, e.g finished.


When a finish_time is recorded that should definitely also update
updated_at. I would be in favor of having updated_at set when the
instance action is created. I've never fully understood why Nova doesn't
do that generally.


As discussed in the API meeting this morning, I thought it would be odd to
set updated_at = created_at when the record is created.


It's really very common. Think of 'updated_at' as meaning 'the last time this
record was modified'. For a new record, the initial creation is also the last
time it was modified.


For what it's worth, this is how the timestamps work for POSIX filesystems. 
When you create a file it sets the access/modify/change timestamps to the file 
creation time.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Clint Byrum
Excerpts from Mike Perez's message of 2016-06-30 07:50:42 -0700:
> On 11:31 Jun 20, Clint Byrum wrote:
> > Excerpts from Joshua Harlow's message of 2016-06-17 15:33:25 -0700:
> > > Thanks for getting this started Clint,
> > > 
> > > I'm happy and excited to be involved in helping try to guide the whole 
> > > ecosystem together (it's also why I like being in oslo) to a 
> > > architecture that is more cohesive (and is more of something that we can 
> > > say to our current or future children that we were all involved and 
> > > proud to be involved in creating/maturing...).
> > > 
> > > At a start, for said first meeting, any kind of agenda come to mind, or 
> > > will it be more a informal gathering to start (either is fine with me)?
> > > 
> > 
> > I've been hesitant to fill this in too much as I'm still forming the
> > idea, but here are the items I think are most compelling to begin with:
> > 
> > * DLM's across OpenStack -- This is already under way[1], but it seems to
> >   have fizzled out. IMO that is because there's no working group who
> >   owns it. We need to actually write some plans.
> 
> Not meaning to nitpick, but I don't think this is a compelling reason for the
> architecture working group. We need a group that wants to spend time on
> reviewing the drivers being proposed. This is like saying we need the
> architecture working group because no working group is actively reshaping 
> quotas
> cross-project. 
> 

That sounds like a reasoned deep argument, not a nitpick, so thank you
for making it.

However, I don't think lack of drivers is standing in the way of a DLM
effort. It is a lack of coordination. There was a race to the finish line
to make Consul and etcd drivers, but then, like the fish in Finding Nemo,
the drivers are in bags floating in the bay... now what?

Nobody owns this effort. Everybody gets busy. Nothing gets done. We
continue to bring it up in the hallway and wish we had time.

This is just a place to have a meeting and some people who get together
and say "hey is that done yet? Do you need help? is that still a
priority?". Could we do this as part of Oslo? Yes! But, I want this to
be about going one step higher, and actually taking the implementations
into the respective projects.

> With that said, I can see the architecture working group providing information
> on to a group actually reviewing/writing drivers for DLM and saying "Doing
> mutexes with the mysql driver is crazy, I tried it in an environment and have
> such information to support that it is not reliable". THAT is useful and I
> don't feel like people do enough of.
> 

Ugh, no, I don't want it to be a group of information providers. I'm
not talking about an Architecture Review Board.

It's a group for doers. People who design together, and build with
others. The DLM spec process was actually one of the reasons I wanted
to create this group. We did such a great job on the design side, but
we didn't really stick together on it and push it all the way through.
This group is my idea of how we stick together and complete work like
that.

> My point is call your working group whatever you want (The Purple Parrots), 
> and
> just go spearhead DLM, but don't make it about one of the most compelling
> reasons for the existence of this group.
> 

The Purple Parrots is already taken -- it's my new band. We're playing
at the house of blues on August 3rd.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] status update and call for action

2016-06-30 Thread Ihar Hrachyshka
Hi all,

I want to update the community about what’s going on in Neutron stable 
branches. I also want to share some ideas on how to improve the process going 
forward, and get some feedback.

First, some basic info.
- the project currently maintains two stable branches (stable/liberty and 
stable/mitaka).
- we have a document that captures general OpenStack policy: 
http://docs.openstack.org/project-team-guide/stable-branches.html
- for neutron, we tend to allow all types of applicable bug fixes into the 
latest branch, while the older (liberty) branch gets High+ priority bug fixes 
only.
- neutron project runs its own stable program, supervised by its own 
neutron-stable-maint team.

Since Liberty release, we implemented a so called ‘proactive’ approach towards 
backports, where all applicable bug fixes were proactively backported into 
stable branches without waiting for bugs to be reported against stable branches 
by affected users.

Lately, I implemented a bunch of tools to automate parts of the process. I also 
documented the work flow in: 
http://docs.openstack.org/project-team-guide/stable-branches.html#proactive-backports

(I encourage everyone interested in the stable program to read the section 
through.)

Some stats:
- in liberty branch, so far we merged 287 patches (in 8 months), with 6 minor 
releases.
- in mitaka branch, we landed 111 patches so far (in 3 months), with 4 releases.

For comparison, in kilo, we landed 210 patches in 13 months of life of the 
branch, with 4 releases.

Now that we have the process set to detect candidates for backports, I’d like 
to get more people involved in both backporting relevant patches to stable 
branches as well as reviewing them. I thought that we could distribute the work 
by interested parties. I would love it if that job were managed by the respective 
subteams where possible, with help from the neutron-stable-maint team.

The basic idea of triage is captured at: 
http://docs.openstack.org/project-team-guide/stable-branches.html#candidate-triage

I wonder whether this is something people interested in particular topics are 
willing to cover.

For the start, I produced a bunch of topic specific LP dashboards, specifically:

- ipv6: https://goo.gl/dyu1d1
- dns: https://goo.gl/9H2BlK
- l3-ipam-dhcp: https://goo.gl/v4XWE4
- l3-dvr-backlog: https://goo.gl/sx0KL5
- l3-ha: https://goo.gl/QIIRa1
- api: https://goo.gl/d66XtB
- db: https://goo.gl/8NNtym
- loadimpact: https://goo.gl/xQuKRc
- ovs: https://goo.gl/Zr70co
- linuxbridge: https://goo.gl/CrcCzU
- sg-fw: https://goo.gl/K9lkdA
- qos: https://goo.gl/9kRCJv

(There are more tags to consider, but let’s start with those.)

Is there will to help with the process?

==

While at it, I highly encourage current stable maintainers to check the stable 
queue more often. To produce a dedicated gerrit dashboard, you can use the 
following template for gerrit-dash-creator: 
https://github.com/openstack/gerrit-dash-creator/blob/master/dashboards/neutron-subprojects-stable.dash
 I try to keep it in sync with governance changes.

An example of the current dashboard can be found at: https://goo.gl/uiltP9

==

Thanks a lot for everyone who helps with the load, and keep up the good job!
Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-30 Thread Kenny Johnston
On Tue, Jun 28, 2016 at 6:49 PM, Steve Martinelli 
wrote:

> I think we want something a bit more organized.
>
> Morgan tossed the idea of a keystone-docs repo, which could have:
>
> - The FAQ Adam is asking about
> - Install guides (moved over from openstack-manuals)
> - A spot for all those neat and unofficial blog posts we do
> - How-to guides
> - etc...
>
> I think it's a neat idea and warrants some discussion. Of course, we don't
> want to be the odd project out.
>

Between the two proposals, I have to imagine that Operators would rather
see a central place for troubleshooting rather than disparate project
specific ones.


> On Tue, Jun 28, 2016 at 6:00 PM, Ian Cordasco 
> wrote:
>
>> -Original Message-
>> From: Adam Young 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: June 28, 2016 at 16:47:26
>> To: OpenStack Development Mailing List > >
>> Subject:  [openstack-dev] Troubleshooting and ask.openstack.org
>>
>> > Recently, the Keystone team started brainstorming a troubleshooting
>> > document. While we could, eventually put this into the Keystone repo,
>> > it makes sense to also be gathering troubleshooting ideas from the
>> > community at large. How do we do this?
>> >
>> > I think we've had a long enough run with the ask.openstack.org website
>> > to determine if it is really useful, and if it needs an update.
>> >
>> >
>> > I know we getting nuked on the Wiki. What I would like to be able to
>> > generate is Frequently Asked Questions (FAQ) page, but as a living
>> > document.
>> >
>> > I think that ask.openstack.org is the right forum for this, but we need
>> > some more help:
>> >
>> > It seems to me that keystone Core should be able to moderate Keystone
>> > questions on the site. That means that they should be able to remove
>> > old dead ones, remove things tagged as Keystone that do not apply and so
>> > on. I would assume the same is true for Nova, Glance, Trove, Mistral
>> > and all the rest.
>> >
>> > We need some better top level interface than just the tags, though.
>> > Ideally we would have a page where someone lands when troubleshooting
>> > keystone with a series of questions and links to the discussion pages
>> > for that question. Like:
>> >
>> >
>> > I get an error that says "cannot authenticate" what do I do?
>> >
>> > What is the Engine behind "ask.openstack.org?" does it have other tools
>> > we could use?
>>
>> The engine is linked in the footer: https://askbot.com/
>>
>> I'm not sure how much of it is reusable but it claims to be able to do
>> some of the things I think you're asking for except it doesn't
>> explicitly mention deleting comments/questions/etc.
>>
>> --
>> Ian Cordasco
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kenny Johnston | irc:kencjohnston | @kencjohnston
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][project-config] how project config is updated

2016-06-30 Thread Asselin, Ramy
Puppet would update the repo and trigger changes off of it.

Documented here: 
http://docs.openstack.org/infra/openstackci/third_party_ci.html#updating-your-masterless-puppet-hosts

Ramy

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: Thursday, June 30, 2016 9:42 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [openstack-infra][project-config] how project config 
is updated

Hi all,

After openstack-infra/project-config is updated, for example when layout.yaml for 
zuul is changed, how is the change applied to the CI system? Is there a script 
to trigger this change? I can't find any script for this in the project-config 
pipelines in layout.yaml. Does anyone know how it works?

Best Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][upgrades] upgrade loadbalancer to new amphora image

2016-06-30 Thread Michael Johnson
Hi Ihar,

I think the biggest issue I see with the FIP and new amphora approach
is that the persistence tables would be lost.  This is not an issue in
the Active/Standby scenario, but would be in the failover scenario.

Michael

On Wed, Jun 29, 2016 at 9:14 AM, Ihar Hrachyshka  wrote:
> Hi all,
>
> I was looking lately at upgrades for octavia images. This includes using new 
> images for new loadbalancers, as well as for existing balancers.
>
> For the first problem, the amp_image_tag option that I added in Mitaka seems 
> to do the job: all new balancers are created with the latest image that is 
> tagged properly.
>
> As for balancers that already exist, the only way to get them use a new image 
> is to trigger an instance failure, that should rebuild failed nova instance, 
> using the new image. AFAIU the failover process is not currently automated, 
> requiring from the user to set the corresponding port to DOWN and waiting for 
> failover to be detected. I’ve heard there are plans to introduce a specific 
> command to trigger a quick-failover, that would streamline the process and 
> reduce the time needed for the process because the failover would be 
> immediately detected and processed instead of waiting for keepalived failure 
> mode to occur. Is it on the horizon? Patches to review?
>
> While the approach seems rather promising and may be applicable for some 
> environments, I have several concerns about the failover approach that we may 
> want to address.
>
> 1. HA assumption. The approach assumes there is another node running 
> available to serve requests while instance is rebuilding. For non-HA 
> amphoras, it’s not the case, meaning the image upgrade process has a 
> significant downtime.
>
> 2. Even if we have HA, for the time of instance rebuilding, the balancer 
> cluster is degraded to a single node.
>
> 3. (minor) during the upgrade phase, instances that belong to the same HA 
> amphora may run different versions of the image.
>
> What’s the alternative?
>
> One idea I was running with for some time is moving the upgrade complexity 
> one level up. Instead of making Octavia aware of upgrade intricacies, allow 
> it to do its job (load balance), while use neutron floating IP resource to 
> flip a switch from an old image to a new one. Let me elaborate.
>
> Let’s say we have a load balancer LB1 that is running Image1. In this 
> scenario, we assume that access to LB1 VIP is proxied through a floating ip 
> FIP that points to LB1 VIP. Now, the operator uploaded a new Image2 to glance 
> registry and tagged it for octavia usage. The user now wants to migrate the 
> load balancer function to using the new image. To achieve this, the user 
> follows the steps:
>
> 1. create an independent clone of LB1 (let’s call it LB2) that has exact same 
> attributes (members) as LB1.
> 2. once LB2 is up and ready to process requests incoming to its VIP, redirect 
> FIP to the LB2 VIP.
> 3. now all new flows are immediately redirected to LB2 VIP, no downtime (for 
> new flows) due to atomic nature of FIP update on the backend (we use 
> iptables-save/iptables-restore to update FIP rules on the router).
> 4. since LB1 is no longer handling any flows, we can deprovision it. LB2 is 
> now the only balancer handling members.
>
> With that approach, 1) we provide for consistent downtime expectations 
> irrelevant to amphora architecture chosen (HA or not); 2) we flip the switch 
> when the clone is up and ready, so no degraded state for the balancer 
> function; 3) all instances in an HA amphora run the same image.
>
> Of course, it won’t provide no downtime for existing flows that may already 
> be handled by the balancer function. That’s a limitation that I believe is 
> shared by all approaches currently at the table.
>
> As a side note, the approach would work for other lbaas drivers, like 
> namespaces, f.e. in case we want to update haproxy.
>
> Several questions in regards to the topic:
>
> 1. are there any drawbacks with the approach? can we consider it an 
> alternative way of doing image upgrades that could find its way into official 
> documentation?
>
> 2. if the answer is yes, then how can I contribute the piece? should I sync 
> with some other doc related work that I know is currently ongoing in the team?
>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Network verification failed.

2016-06-30 Thread Yuki Nishiwaki
Hello Samer

Did you read this reference (https://community.mellanox.com/docs/DOC-2036) ?
If you didn’t read it yet, this may be helpful to you.

By the way, the attached “Figure 1” is a very nice chart.
What tool did you use to draw it?

Yuki Nishiwaki

> 2016/06/30 21:01、Samer Machara  のメール:
> 
> Hello!
>   I'm having problems configuring the network settings (see Figure 3) in Fuel 
> 8.0. Figure 1 summarizes my network topology.
>  
>   When I check the network, I get the following error: "verification 
> failed: Expected VLAN (not received)". See Figure 2.
> 
>   All of these node interfaces belong to the "storage network"; they are 
> InfiniBand interfaces connected to an unmanaged "Mellanox IS5025" switch, 
> which means it is plug and play, so I cannot configure PKeys (VLANs).
>  
>   Beyond that, I do not know how to determine what is happening, because I do 
> not see any more detail about the error than what is shown in Figure 2.
> 
> 
> Figure 1
> 
> 
> Figure 2.
> 
> 
> 
> Figure 3
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Weekly meeting 6/30 Cancelled

2016-06-30 Thread Andrew Woodward
Nothing is on the agenda this week, so I'm calling to cancel the meeting.
If you have anything to discuss, please come chat in #fuel or add it to the
agenda to discuss next week.

https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hangzhou Bug Smash will be held from July 6 to 8

2016-06-30 Thread Liyongle (Fred)
Hi OpenStackers,

The 4th China OpenStack Bug Smash, hosted by CESI, Huawei, and Intel, will be 
held in Hangzhou, China from July 6 to 8 (Beijing time), i.e. from 01:00 July 6 
to 06:00 July 8 UTC. The target is to get bugs fixed before the newton-2 
milestone [1].

Around 50 stackers will fix bugs in nova, cinder, neutron, magnum, 
ceilometer, heat, ironic, smaug, freezer, oslo, murano and kolla. Any support 
you can provide is appreciated, as is working remotely with the team.

Please find the bug smash home page at [2] and the bug list at [3] (under 
preparation).

[1] http://releases.openstack.org/newton/schedule.html
[2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Newton-Hangzhou
[3] https://etherpad.openstack.org/p/hackathon4_all_list

Best Regards

Fred (李永乐)

China OpenStack Bug Smash Team
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-06-30 Thread Csaba Henk
Hi,

just to provide some context for Alexey's statement

> the second one [creating multiple exports (one per client) for an exported
> resource, used in current manila's ganesha helper] ... can easily lead to
> confusion.

Here is how it's been dealt with in the case of Manila's Ganesha helper:

https://review.openstack.org/286346/

I.e. include a literal "" string in the export location provided for
the share. That's a hack, but at least it makes clear how things are.

My idea for fixing this was to introduce per-access-rule export locations
(either by storing an export location template for the share, which would be
filled with actual values on the fly when the access rule is queried through
the API, or by storing the export location in the db as part of the access
rule record).

So far I haven't gotten around to bringing it up, but maybe now is the time.
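The template variant could be as small as the sketch below; the template format, host name, and field names are purely illustrative assumptions, not Manila's actual data model.

```python
# Illustrative only: a stored export-location template is filled with the
# access rule's client on the fly when the rule is queried via the API.
# The template string and its fields are invented for this sketch.

EXPORT_TEMPLATE = "ganesha-host:/share_{share_id}/{access_to}"

def export_location_for_rule(share_id, access_rule):
    """Render the per-access-rule export location from the stored template."""
    return EXPORT_TEMPLATE.format(share_id=share_id,
                                  access_to=access_rule["access_to"])
```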

Csaba




On Thu, Jun 30, 2016 at 2:37 PM, Alexey Ovchinnikov <
aovchinni...@mirantis.com> wrote:

> Hello everyone,
>
> here I will briefly summarize an export update problem one will encounter
> when using nfs-ganesha.
>
> While working on a driver that relies on nfs-ganesha I have discovered
> that it
> is apparently impossible to provide interruption-free export updates. As
> of version
> 2.3 which I am working with it is possible to add an export or to remove an
> export without restarting the daemon, but it is not possible to modify an
> existing
> export. So in other words if you create an export you should define all
> clients
> before you actually export and use it, otherwise it will be impossible to
> change
> rules on the fly. One can come up with at least two ways to work around
> this issue: either by removing, updating and re-adding an export, or by
> creating multiple
> exports (one per client) for an exported resource. Both ways have
> associated
> problems: the first one interrupts clients already working with an export,
> which might be a big problem if a client is doing heavy I/O, the second one
> creates multiple exports associated with a single resource, which can
> easily lead
> to confusion. The second approach is used in current manila's ganesha
> helper[1].
> This issue seems to be raised now and then with nfs-ganesha team, most
> recently in
> [2], but apparently it will not  be addressed in the nearest future.
>
> With kind regards,
> Alexey.
>
> [1]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
> [2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-30 Thread Mike Perez
On 11:31 Jun 20, Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2016-06-17 15:33:25 -0700:
> > Thanks for getting this started Clint,
> > 
> > I'm happy and excited to be involved in helping try to guide the whole 
> > ecosystem together (it's also why I like being in oslo) to a 
> > architecture that is more cohesive (and is more of something that we can 
> > say to our current or future children that we were all involved and 
> > proud to be involved in creating/maturing...).
> > 
> > At a start, for said first meeting, any kind of agenda come to mind, or 
> > will it be more a informal gathering to start (either is fine with me)?
> > 
> 
> I've been hesitant to fill this in too much as I'm still forming the
> idea, but here are the items I think are most compelling to begin with:
> 
> * DLM's across OpenStack -- This is already under way[1], but it seems to
>   have fizzled out. IMO that is because there's no working group who
>   owns it. We need to actually write some plans.

Not meaning to nitpick, but I don't think this is a compelling reason for the
architecture working group. We need a group that wants to spend time
reviewing the drivers being proposed. This is like saying we need the
architecture working group because no working group is actively reshaping
quotas cross-project.

With that said, I can see the architecture working group providing information
to a group actually reviewing/writing drivers for DLM, saying "Doing
mutexes with the mysql driver is crazy; I brought it up in an environment and
have information to support that it is not reliable". THAT is useful, and I
don't feel like people do enough of it.

My point is: call your working group whatever you want (The Purple Parrots)
and just go spearhead DLM, but don't make it one of the most compelling
reasons for the existence of this group.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-06-30 Thread Kosnik, Lubosz
Currently the agent is in the next phase of its life; there is some work in 
progress to change that code.
Because of that, it's the right time to discuss this and find a proper 
way to work with it.
The biggest issue with Octavia is that there is almost no documentation about 
everything needed to be able to use this project.
There is a laconic doc about creating new images, so everyone is always able 
to build their own image. We're not blocking that behavior.

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

> On Jun 30, 2016, at 8:01 AM, Ihar Hrachyshka  wrote:
> 
> 
>> On 30 Jun 2016, at 06:03, Kosnik, Lubosz  wrote:
>> 
>> Like Doug said, Amphora is supposed to be a black box. It is supposed to get 
>> some data - like the info in /etc/defaults - and do everything inside on its 
>> own.
>> Everyone will be able to prepare their own implementation of this image 
>> without mixing things between each other.
> 
> That would be correct if the image were not maintained by the project 
> itself. Then indeed every vendor would prepare their own image, and maybe 
> collaborate on common code for it. Since this code is currently in octavia, 
> we kind of need to plug into it for other vendors. Otherwise you pick one and 
> give it preference.
> 
> But if we can make the agent itself vendor agnostic, so that the only 
> differentiation would happen around components stuffed into the image 
> (kernel, haproxy version, security tweaks, …), then it will be obviously a 
> better path than trying to template the agent for multiple vendors.
> 
> A silly question: why does the agent even need to configure the network using 
> distribution mechanisms and not just calling to ip link and friends?
> 
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-06-30 Thread Steven Dake (stdake)
Hey folks,

I am keen on tagging Kolla in the governance repository with every tag that is 
applicable to our project.  One of these is release:managed.  I've been working 
as PTL the last 3 cycles to get Kolla processes to the point we could apply for 
release:managed.  Looks like Doug and the release team in general have beaten 
me to the punch :)

The requirements of the tag are met by force because of how the release process 
is now executed.  I'm wondering if this tag has any meaning any longer given 
the fact that the release team has nearly automated themselves out of a job :)

If it does have some special meaning or requirements beyond "we will freeze 
on the freeze deadline", could someone enumerate them?

FWIW I feel a lot more comfortable with the current release process.  The 
release team has done a fantastic job.  I always felt nervous pushing a signed 
tag and I've been doing this for ~5 years :)

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][project-config] how project config is updated

2016-06-30 Thread Jeremy Stanley
On 2016-06-30 21:42:20 +0800 (+0800), 王华 wrote:
> After openstack-infra/project-config is updated - for example, layout.yaml
> for zuul is changed - how is the change applied to the CI system? Is there a
> script to trigger this change? I don't find any scripts in the pipelines of
> project-config in layout.yaml to do this work. Does anyone know how it
> works?

We have ansible roles in the openstack-infra/system-config repo
which are run from a wrapper script triggered by a cron job every 15
minutes to update modules on all our servers and do a `puppet apply`
on them. In the case of zuul layout.yaml, the updated config is put
in place by the Puppet module in the openstack-infra/puppet-zuul
module.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-06-30 Thread Daniel P. Berrange
A bunch of people in the Nova and upstream QEMU teams are trying to investigate
a long-standing bug in live migration [1]. Unfortunately the bug is rather
non-deterministic - e.g. on the multinode-live-migration tempest job it has
hit 4 times in 7 days, while on the multinode-full tempest job it has hit
~70 times in 7 days.

I have a test patch which hacks nova to download & install a special QEMU
build with extra debugging output[2]. Because of the non-determinism I need
to then run the multinode-live-migration & multinode-full tempest jobs
many times to try and catch the bug.  Doing this by just entering 'recheck'
is rather tedious because you have to wait for the 1+ hour turnaround time
between each recheck.

To get around this limitation I created a chain of 10 commits [3] which just
toggled some whitespace and uploaded them all, so I can get 10 CI runs
going in parallel. This worked remarkably well - at least enough to
reproduce the more common failure of multinode-full, but not enough for
the much rarer multinode-live-migration job.

I could expand this hack and upload 100 dummy changes to get more jobs
running to increase chances of hitting the multinode-live-migration
failure. Out of the 16 jobs run on every Nova change, I only care about
running 2 of them. So to get 100 runs of the 2 live migration jobs I want,
I'd be creating 1600 CI jobs in total which is not too nice for our CI
resource pool :-(

I'd really love it if there was

 1. the ability to request checking of just specific jobs eg

  "recheck gate-tempest-dsvm-multinode-full"

 2. the ability to request this recheck to run multiple
times in parallel. eg if i just repeat the 'recheck'
command many times on the same patchset # without
waiting for results

Has anyone got any other tips for debugging highly non-deterministic
bugs like this, which only hit perhaps 1 time in 100, without wasting
huge amounts of CI resources as I'm doing right now?

No one has ever been able to reproduce these failures outside of
the gate CI infra; indeed certain CI hosting providers seem worse
affected by the bug than others, so running tempest locally is not
an option.

Regards,
Daniel

[1] https://bugs.launchpad.net/nova/+bug/1524898
[2] https://review.openstack.org/#/c/335549/5
[3] https://review.openstack.org/#/q/topic:mig-debug
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra][project-config] how project config is updated

2016-06-30 Thread 王华
Hi all,

After openstack-infra/project-config is updated - for example, layout.yaml
for zuul is changed - how is the change applied to the CI system? Is there a
script to trigger this change? I don't find any scripts in the pipelines of
project-config in layout.yaml to do this work. Does anyone know how it
works?

Best Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][osc-lib][openstackclient] is it too early for orc-lib?

2016-06-30 Thread Hardik

Hi,

Regarding osc-lib we have mainly two changes.

1) We use "utils", which was moved from openstackclient.common.utils to 
osc_lib.utils.

2) We use "command", which osc_lib wraps from cliff.

So I think there is no harm in keeping osc_lib.

Also, I guess we do not need openstackclient to be installed with 
mistralclient, as there is no need for openstackclient if mistral is 
used in standalone mode.

Thoughts?

Thanks and Regards,
Hardik Parekh


On Thursday 30 June 2016 05:25 PM, Renat Akhmerov wrote:

Hi,

We already let osc-lib into Mistral, but I found out that this 
transition was blocked in TripleO [1].
I'd like to ask the team to read into it and discuss whether we need 
to revert the corresponding patches in Mistral or not.


[1] https://review.openstack.org/#/c/11/

Renat Akhmerov
@Nokia



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][nodepool] Can the nodes created by nodepool be reused by jenkins jobs?

2016-06-30 Thread Jeremy Stanley
On 2016-06-30 12:13:01 +0800 (+0800), 王华 wrote:
> There is a period between when a job finishes in Jenkins and when the
> node is deleted by nodepool. Before the node is deleted, it can still be
> seen in Jenkins. How can Jenkins know not to use a node which has
> already run a job? Is there a mechanism to ensure this?

The aforementioned OFFLINE_NODE_WHEN_COMPLETE parameter is
interpreted by the jenkins-gearman plug-in and offlines the
corresponding Jenkins slave atomically with job completion so that
there should be no race where it's possible for the master to assign
it another job. That said, if you're running a particularly recent
Jenkins release (1.651.2 or later), there is a new security feature
which prevents parameters from being passed outside of the job
configuration and you'll need to solve that issue in one of several
ways:

http://lists.openstack.org/pipermail/openstack-infra/2016-May/004284.html

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][upgrades] upgrade loadbalancer to new amphora image

2016-06-30 Thread Ihar Hrachyshka

> On 30 Jun 2016, at 01:16, Brandon Logan  wrote:
> 
> Hi Ihar, thanks for starting this discussion.  Comments in-line.
> 
> After writing my comments in line, I might now realize that you're just
> talking about documenting  a way for a user to do this, and not have
> Octavia handle it at all.  If that's the case I apologize for my reading
> comprehension, but I'll keep my comments in case I'm wrong.  My brain is
> not working well today, sorry :(

Right. All the mechanisms needed to apply the approach are already in place in 
both Octavia and Neutron as of Mitaka. The question is mostly about whether the 
team behind the project may endorse the alternative approach, in addition to 
whatever the implementation does in regards to failovers, by giving it space in 
the official docs. I don’t suggest that the approach be the sole one 
documented, or that the octavia team needs to implement anything. [That said, 
it may be wise to look at providing some smart scripts on top of the 
neutron/octavia API that would realize the approach without putting the burden 
of multiple API calls onto users.]

> 
> Thanks,
> Brandon
> 
> On Wed, 2016-06-29 at 18:14 +0200, Ihar Hrachyshka wrote:
>> Hi all,
>> 
>> I was looking lately at upgrades for octavia images. This includes using new 
>> images for new loadbalancers, as well as for existing balancers.
>> 
>> For the first problem, the amp_image_tag option that I added in Mitaka seems 
>> to do the job: all new balancers are created with the latest image that is 
>> tagged properly.
>> 
>> As for balancers that already exist, the only way to get them use a new 
>> image is to trigger an instance failure, that should rebuild failed nova 
>> instance, using the new image. AFAIU the failover process is not currently 
>> automated, requiring from the user to set the corresponding port to DOWN and 
>> waiting for failover to be detected. I’ve heard there are plans to introduce 
>> a specific command to trigger a quick-failover, that would streamline the 
>> process and reduce the time needed for the process because the failover 
>> would be immediately detected and processed instead of waiting for 
>> keepalived failure mode to occur. Is it on the horizon? Patches to review?
> 
> Not that I know of and with all the work slated for Newton, I'm 99% sure
> it won't be done in Newton.  Perhaps Ocata.

I see. Do we maybe want to provide a smart script that would help trigger a 
failover with the neutron API? [detect the port id, set it to DOWN, …]
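Such a script could look roughly like the sketch below. `client` stands for any object exposing neutron-style `list_ports()`/`update_port()` methods; the way the amphora port is looked up by its VRRP address, and the filter syntax, are assumptions that would need checking against the real port layout.

```python
# Hypothetical sketch: trigger an amphora failover by finding the port
# bound to its VRRP address and setting it administratively DOWN.
# `client` is assumed to expose neutron-style list_ports()/update_port().

def trigger_failover(client, amphora_vrrp_ip):
    """Set the amphora's port DOWN so the failover is detected."""
    ports = client.list_ports(fixed_ips="ip_address=" + amphora_vrrp_ip)["ports"]
    if not ports:
        raise LookupError("no port found for %s" % amphora_vrrp_ip)
    port_id = ports[0]["id"]
    # admin_state_up=False cuts the port, so keepalived health checking
    # fails and the failover described in the thread kicks in
    client.update_port(port_id, {"port": {"admin_state_up": False}})
    return port_id
```

With python-neutronclient, `client` would be a `neutronclient.v2_0.client.Client` instance; the lookup filter above is illustrative only.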

>> 
>> While the approach seems rather promising and may be applicable for some 
>> environments, I have several concerns about the failover approach that we 
>> may want to address.
>> 
>> 1. HA assumption. The approach assumes there is another node running 
>> available to serve requests while instance is rebuilding. For non-HA 
>> amphoras, it’s not the case, meaning the image upgrade process has a 
>> significant downtime.
>> 
>> 2. Even if we have HA, for the time of instance rebuilding, the balancer 
>> cluster is degraded to a single node.
>> 
>> 3. (minor) during the upgrade phase, instances that belong to the same HA 
>> amphora may run different versions of the image.
>> 
>> What’s the alternative?
>> 
>> One idea I was running with for some time is moving the upgrade complexity 
>> one level up. Instead of making Octavia aware of upgrade intricacies, allow 
>> it to do its job (load balance), while use neutron floating IP resource to 
>> flip a switch from an old image to a new one. Let me elaborate.
> I'm not sure I like the idea of tying this to floating IP as there are
> deployers who do not use floating IPs.  Then again, we are currently
> depending on allowed address pairs which is also an extension, but I
> suspect its probably deployed in more places.  I have no proof of this
> though.

I guess you already deduced that, but just for the sake of completeness: no, I 
don’t suggest that octavia tie its backend to FIPs. I merely suggest 
documenting the proposed approach as ‘yet another way of doing it’, at least 
until we tackle the first two concerns raised.

>> 
>> Let’s say we have a load balancer LB1 that is running Image1. In this 
>> scenario, we assume that access to LB1 VIP is proxied through a floating ip 
>> FIP that points to LB1 VIP. Now, the operator uploaded a new Image2 to 
>> glance registry and tagged it for octavia usage. The user now wants to 
>> migrate the load balancer function to using the new image. To achieve this, 
>> the user follows the steps:
>> 
>> 1. create an independent clone of LB1 (let’s call it LB2) that has exact 
>> same attributes (members) as LB1.
>> 2. once LB2 is up and ready to process requests incoming to its VIP, 
>> redirect FIP to the LB2 VIP.
>> 3. now all new flows are immediately redirected to LB2 VIP, no downtime (for 
>> new flows) due to atomic nature of FIP update on the backend (we use 
>> iptables-save/iptables-restore to update FIP rules 

Re: [openstack-dev] [octavia][upgrades] upgrade loadbalancer to new amphora image

2016-06-30 Thread Ihar Hrachyshka

> On 29 Jun 2016, at 18:33, Kosnik, Lubosz  wrote:
> 
> Can you specify what exact use case you have that requires uploading 
> incompatible images?
> In my opinion we should prepare a flow which is, like you said, building a 
> new instance, configuring everything, adding that amphora into the load 
> balancer and removing the old one. That way we will be able to limit the 
> retry to the specific amphorae, not all load balancers.
> Everything depends on whether we are able to do something with the Amphora 
> image such that it will no longer work in a cluster with older versions.

You picked just the third, minor concern I had, and the one that is indeed 
quite theoretical. I just feel safer when my cluster runs a single version of 
the image, but yeah, generally it won’t be an issue.

I think we should primarily stick to two other concerns I raised: 1. HA 
assumption 2. cluster degradation for a non negligible time.

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-06-30 Thread Ihar Hrachyshka

> On 30 Jun 2016, at 06:03, Kosnik, Lubosz  wrote:
> 
> Like Doug said, Amphora is supposed to be a black box. It is supposed to get 
> some data - like the info in /etc/defaults - and do everything inside on its 
> own.
> Everyone will be able to prepare their own implementation of this image 
> without mixing things between each other.

That would be correct if the image were not maintained by the project 
itself. Then indeed every vendor would prepare their own image, and maybe 
collaborate on common code for it. Since this code is currently in octavia, 
we kind of need to plug into it for other vendors. Otherwise you pick one and 
give it preference.

But if we can make the agent itself vendor agnostic, so that the only 
differentiation would happen around components stuffed into the image (kernel, 
haproxy version, security tweaks, …), then it will be obviously a better path 
than trying to template the agent for multiple vendors.

A silly question: why does the agent even need to configure the network using 
distribution mechanisms and not just calling to ip link and friends?
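For illustration, plumbing an address with plain iproute2 calls needs no distribution mechanism at all; the interface name and CIDR below are example values, not anything the agent actually uses.

```python
# Sketch of a distro-agnostic way to plumb a VIP: build plain iproute2
# command lines instead of writing distribution network config files.

def plumb_vip_commands(iface, cidr):
    """Return the `ip` invocations that bring up iface with the given CIDR."""
    return [
        ["ip", "link", "set", iface, "up"],
        ["ip", "addr", "add", cidr, "dev", iface],
    ]

# Executing them (requires root) would be e.g.:
#   import subprocess
#   for cmd in plumb_vip_commands("eth1", "10.0.0.5/24"):
#       subprocess.check_call(cmd)
```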

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] [vitrage] branch marking policy

2016-06-30 Thread Rosensweig, Elisha (Nokia - IL)
Thanks! From a brief look, this seems exactly what we need.

Elisha

From: Joshua Hesketh [mailto:joshua.hesk...@gmail.com]
Sent: Thursday, June 30, 2016 3:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [vitrage] branch marking policy

Hey Elisha,

Have you looked at http://docs.openstack.org/infra/manual/drivers.html ?

Cheers,
Josh

On Thu, Jun 30, 2016 at 9:16 PM, Rosensweig, Elisha (Nokia - IL) wrote:
Hi,

We've prepared a (local) branch with Vitrage that is *Liberty-compatible*, and 
would like to mark (tag?) the branch.

What is the standard way to do this?

Thanks,

Elisha Rosensweig, Ph.D.
R&D Director
CloudBand, Nokia
T: +972 9793 3159


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral] Library for JWT (JSON Web Token)

2016-06-30 Thread Mehdi Abaakouk



Le 2016-06-30 13:07, Renat Akhmerov a écrit :

Reason: we need it to provide support for OpenID Connect
authentication in Mistral.


Can't [1] do the job ? (sorry if I'm off-beat)

[1] http://docs.openstack.org/developer/keystone/federation/openidc.html

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] [vitrage] branch marking policy

2016-06-30 Thread Joshua Hesketh
Hey Elisha,

Have you looked at http://docs.openstack.org/infra/manual/drivers.html ?

Cheers,
Josh

On Thu, Jun 30, 2016 at 9:16 PM, Rosensweig, Elisha (Nokia - IL) <
elisha.rosensw...@nokia.com> wrote:

> Hi,
>
> We've prepared a (local) branch with Vitrage that is *Liberty-compatible*,
> and would like to mark (tag?) the branch.
>
> What is the standard way to do this?
>
> Thanks,
>
> Elisha Rosensweig, Ph.D.
> R&D Director
> CloudBand, Nokia
> T: +972 9793 3159
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] nfs-ganesha export modification issue

2016-06-30 Thread Alexey Ovchinnikov
Hello everyone,

here I will briefly summarize an export update problem one will encounter
when using nfs-ganesha.

While working on a driver that relies on nfs-ganesha, I have discovered that
it is apparently impossible to provide interruption-free export updates. As of
version 2.3, which I am working with, it is possible to add an export or to
remove an export without restarting the daemon, but it is not possible to
modify an existing export. In other words, if you create an export you should
define all clients before you actually export and use it; otherwise it will be
impossible to change the rules on the fly. One can come up with at least two
ways to work around this issue: either by removing, updating and re-adding an
export, or by creating multiple exports (one per client) for an exported
resource. Both ways have associated problems: the first one interrupts clients
already working with an export, which might be a big problem if a client is
doing heavy I/O; the second one creates multiple exports associated with a
single resource, which can easily lead to confusion. The second approach is
used in current manila's ganesha helper [1]. This issue seems to be raised now
and then with the nfs-ganesha team, most recently in [2], but apparently it
will not be addressed in the near future.
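The trade-off between the two workarounds can be shown with a toy model; it does not talk to nfs-ganesha at all, it only mimics a daemon where exports can be added or removed at runtime but never modified.

```python
# Toy model of the two workarounds; purely illustrative, not ganesha code.

class ExportManager:
    """Exports can only be added or removed at runtime, never modified."""

    def __init__(self):
        self.exports = {}       # export id -> set of allowed clients
        self.interruptions = 0  # times active clients lost an export

    def add(self, export_id, clients):
        self.exports[export_id] = set(clients)

    def remove(self, export_id):
        if self.exports.pop(export_id, None):
            # any I/O in flight against this export is cut off
            self.interruptions += 1

    def grant_by_readd(self, export_id, new_client):
        # workaround 1: remove, update, re-add -- interrupts existing I/O
        clients = self.exports[export_id] | {new_client}
        self.remove(export_id)
        self.add(export_id, clients)

    def grant_by_clone(self, export_id, new_client):
        # workaround 2: one extra export per client -- no interruption,
        # but several export ids now map to the same backing resource
        self.add("%s-%s" % (export_id, new_client), {new_client})
```

`grant_by_readd` bumps the interruption counter on every rule change, while `grant_by_clone` never does but multiplies the export entries -- exactly the confusion described above.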

With kind regards,
Alexey.

[1]:
https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
[2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [stable/mitaka] Error: Discovering versions from the identity service failed when creating the password plugin....

2016-06-30 Thread mohammad shahid
Hi Tony,

Thanks for your reply, it's working now :-)


Regards,
Mohammad Shahid

On Fri, Jun 24, 2016 at 8:44 AM, Tony Breeds 
wrote:

> On Tue, Jun 21, 2016 at 10:27:28AM +0530, mohammad shahid wrote:
> > Hi,
> >
> > I am getting below error while starting openstack devstack with
> > stable/mitaka release. can someone look at this problem ?
>
> so I *think* you're trying to deploy stable/mitaka with the master version
> of
> devstack.
>
> Please make sure that you're running the stable/mitaka version of devstack.
>
> git clone https://git.openstack.org/openstack-dev/devstack
> cd devstack
> git checkout -b stable/mitaka -t origin/stable/mitaka
>
> If that doesn't work please include the devstack SHA you are using along
> with
> the trace and config.
>
> Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-06-30 Thread Andrew Laski


On Wed, Jun 29, 2016, at 11:11 PM, Matt Riedemann wrote:
> On 6/29/2016 10:10 PM, Matt Riedemann wrote:
> > On 6/29/2016 6:40 AM, Andrew Laski wrote:
> >>
> >>
> >>
> >> On Tue, Jun 28, 2016, at 09:27 PM, Zhenyu Zheng wrote:
> >>> How about I sync updated_at and created_at in my patch, and leave the
> >>> finish to the other BP, by this way, I can use updated_at for the
> >>> timestamp filter I added and it don't need to change again once the
> >>> finish BP is complete.
> >>
> >> Sounds good to me.
> >>
> >
> > It's been a long day so my memory might be fried, but the options we
> > talked about in the API meeting were:
> >
> > 1. Setting updated_at = created_at when the instance action record is
> > created. Laski likes this, I'm not crazy about it, especially since we
> > don't do that for anything else.

I would actually like for us to do this generally. I have the same
thinking as Ed does elsewhere in this thread, the creation of a record
is an update of that record. So take my comments as applying to Nova
overall and not just this issue.

> >
> > 2. Update the instance action's updated_at when instance action events
> > are created. I like this since the instance action is like a parent
> > resource and the event is the child, so when we create/modify an event
> > we can consider it an update to the parent. Laski thought this might be
> > weird UX given we don't expose instance action events in the REST API
> > unless you're an admin. This is also probably not something we'd do for
> > other related resources like server groups and server group members (but
> > we don't page on those either right now).

Right. My concern is just that the ordering of actions can change based
on events happening which are not visible to the user. However thinking
about it further we don't really allow multiple actions at once, except
for a few special cases like delete, so this may not end up affecting
any ordering as actions are mostly serial. I think this is a fine
solution for the issue at hand. I just think #1 is a more general
solution.

> >
> > 3. Order the results by updated_at,created_at so that if updated_at
> > isn't set for older records, created_at will be used. I think we all
> > agreed in the meeting to do this regardless of #1 or #2 above.
> >

+1
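
To make option 3 concrete, here is a plain-Python sketch (not Nova code;
the field names merely mirror the instance_actions columns) of ordering by
updated_at with a created_at fallback, i.e. ORDER BY COALESCE(updated_at,
created_at):

```python
from datetime import datetime
from types import SimpleNamespace

def sort_actions(actions):
    # Older instance_actions rows may have updated_at = NULL (None here),
    # so fall back to created_at when building the sort key; newest first.
    return sorted(actions, key=lambda a: a.updated_at or a.created_at,
                  reverse=True)

old = SimpleNamespace(name="create",
                      created_at=datetime(2016, 1, 1), updated_at=None)
new = SimpleNamespace(name="reboot",
                      created_at=datetime(2016, 2, 1),
                      updated_at=datetime(2016, 3, 1))

print([a.name for a in sort_actions([old, new])])  # ['reboot', 'create']
```

With option 1 applied as well (updated_at set on creation), the fallback
becomes a no-op, which is why the two proposals compose cleanly.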

> 
> Oh and
> 
> #4. Sean Dague needs to come back from leadership training camp in 
> Michigan and make these kind of API decisions for us.

+2

> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Continued support of Fedora as a base platform

2016-06-30 Thread Haïkel
2016-06-30 14:07 GMT+02:00 Steven Dake (stdake) :
> What really cratered our implementation of Fedora was the introduction of
> DNF.  Prior to that, we led with Fedora.  I switched my focus to something
> slower moving (CentOS) so I could focus on a properly working RDO rather
> than working around the latest and greatest changes.
>
> That said, if someone wants to fix Kolla to run against DNF, that would be
> fantastic, as it will need to be done for CentOS 8 and RHEL 8.
>
> Regards
> -steve
>

That's something that we fixed for Fedora Cloud image. I'll give it a shot.

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Continued support of Fedora as a base platform

2016-06-30 Thread Steven Dake (stdake)
What really cratered our implementation of Fedora was the introduction of
DNF.  Prior to that, we led with Fedora.  I switched my focus to something
slower moving (CentOS) so I could focus on a properly working RDO rather
than working around the latest and greatest changes.

That said, if someone wants to fix Kolla to run against DNF, that would be
fantastic, as it will need to be done for CentOS 8 and RHEL 8.

Regards
-steve

On 6/29/16, 6:07 PM, "Gerard Braad"  wrote:

>Hi,
>
>
>Kolla has supported Fedora AFAIK since the project started, and offers
>several other valid options:
>
>  # Valid options are [ centos, fedora, oraclelinux, ubuntu ]
>  #kolla_base_distro: "centos"
>
>but in recent time, it came to my attention that the support of Fedora
>is lacking. There could be several reasons for this;
>
>  1. interest
>  2. lack of resources
>  3. life cycle
>
>Firstly, might this be related to the fact that deploying on Fedora is
>not of interest to most? The majority of OpenStack deployments
>happen on either Ubuntu or RHEL/CentOS. However, supporting Fedora
>early can help the delivery of future versions of RHEL/CentOS
>(although there can be years in between before this happens). It is
>therefore still of importance.
>
>Second, and this is probably more likely the case: the Kolla project
>lacks the resources to maintain releasing on Fedora. Especially since
>Fedora carries newer versions of software, there is a tendency toward
>breakage. Automated testing is therefore of high importance.
>
>Third, since Fedora does not have a concept of Long-term releases, the
>release is only supported for a period of approximately 13 months.
>This is detailed in the Release Life Cycle [1] and EOL status page
>[2]. This means that after a release, like currently F24, the previous
>version like F22 will be phased out.
>
>A recent bug report [3] about the image availability got resolved by
>implementing F22 (which would have been phased out just a month or two
>from now). The suggestion was to use CentOS for this. Maybe in this
>case it was... but should we?
>
>The question is not "Do we want to support Fedora?", but "Can we
>support Fedora?". If my time allows, I will certainly work on making
>this happen. But before, it might be needed to collect some of the
>feedback what has been done, what needs to be done... and what is
>currently the impediment of making it happen, like issues with
>versions of the dependencies.
>
>Would like to hear your thoughts...
>
>regards,
>
>
>Gerard
>
>[1]  https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle
>[2]  https://fedoraproject.org/wiki/End_of_life
>[3]  https://bugs.launchpad.net/kolla/+bug/1589770
>
>-- 
>
>   Gerard Braad | http://gbraad.nl
>   [ Doing Open Source Matters ]
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral][osc-lib][openstackclient] is it too early for osc-lib?

2016-06-30 Thread Renat Akhmerov
Hi,

We already let osc-lib into Mistral, but I found out that such a transition was 
blocked in TripleO [1].
I’d like to ask the team to read into it and discuss whether we need to revert 
the corresponding patches in Mistral or not.

[1] https://review.openstack.org/#/c/11/ 


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra] [vitrage] branch marking policy

2016-06-30 Thread Rosensweig, Elisha (Nokia - IL)
Hi,

We've prepared a (local) branch with Vitrage that is *Liberty-compatible*, and 
would like to mark (tag?) the branch.

What is the standard way to do this?

Thanks,

Elisha Rosensweig, Ph.D.
R&D Director
CloudBand, Nokia 
T: +972 9793 3159


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][mistral] Library for JWT (JSON Web Token)

2016-06-30 Thread Renat Akhmerov
Hi,

Does any of the existing OpenStack requirements provide support for JWT? If 
not, I’d like to propose a new lib into global-requirements.txt, something from:

https://pypi.python.org/pypi/PyJWT/1.4.0 (seems to be the best)
https://pypi.python.org/pypi/python-jose/1.0.0 (this one also has some other stuff)
https://pypi.python.org/pypi/jwcrypto/0.2.1

Could you please check if some of those are compliant with our requirements 
(I’m not sure I know all of them)?

Reason: we need it to provide support for OpenID Connect authentication in 
Mistral.
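
For anyone unfamiliar with JWT: a token is just two base64url-encoded JSON
segments plus an HMAC (for HS256). Here is a minimal stdlib-only sketch of
what such a library does — illustration only; Mistral would of course use one
of the vetted libraries above, not hand-rolled crypto:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data):
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def jwt_encode(payload, secret):
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + _b64url(sig)

def jwt_verify(token, secret):
    signing_input, _, sig = token.rpartition(b".")
    expected = _b64url(hmac.new(secret, signing_input,
                                hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, expected)

token = jwt_encode({"sub": "mistral"}, b"secret")
print(jwt_verify(token, b"secret"))  # True
print(jwt_verify(token, b"wrong"))   # False
```

The libraries additionally handle claim validation (exp, aud, iss) and
asymmetric algorithms such as RS256, which OpenID Connect typically uses.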

Thanks

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Continued support of Fedora as a base platform

2016-06-30 Thread Haïkel
My opinion as one of the RDO release wranglers is not to support Fedora
for anything other than trunk.
It has proven really hard to keep all dependencies in a good state,
and even when we managed to do that,
an update could break things at any time (like the python-pymongo update
that was removed because of the Pulp developers).

RDO actually ensures that spec files are buildable on Fedora, but you'd
have to maintain dependencies separately and rely on
tools like the yum priorities plugin to override base packages.

The Fedora lifecycle is also not synced with OpenStack's: OpenStack is
released around 2 months before the next Fedora stable.
So in practice, if you use stable N-1, you have 9 months of support
from Fedora, and updating to stable N requires some amount of work.

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecation of fuel-mirror tool

2016-06-30 Thread Vladimir Kozhukalov
Please review this patch [1]. It removes the code from the master branch.
We still need test jobs for stable branches.

[1] https://review.openstack.org/#/c/335868/

Vladimir Kozhukalov

On Mon, Jun 27, 2016 at 1:14 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> The fuel-mirror python module itself will be removed from the fuel-mirror
> repository, but perestroika will stay there until Packetary is able to
> substitute it totally (work is now in progress).
>
> Just a reminder: According to our plan Packetary is to cover the whole
> rpm/deb domain including building deb/rpm packages and repositories.
> Syncing these repos over multiple locations as well as tracking repository
> snapshots will be a matter of the Trsync project (Fuel infra team project).
>
> Vladimir Kozhukalov
>
> On Mon, Jun 27, 2016 at 11:38 AM, Igor Kalnitsky 
> wrote:
>
>> Vladimir,
>>
>> Thanks for driving this! What about fuel-mirror itself? Does it mean it's
>> deprecated? If so, what will happen to perestroika scripts inside it [1]?
>> It seems strange that fuel-mirror contains them.
>>
>> Thanks,
>> Igor
>>
>> [1] https://github.com/openstack/fuel-mirror/tree/master/perestroika
>>
>>
>> > On Jun 23, 2016, at 13:31, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>> >
>> > Dear colleagues.
>> >
>> > I'd like to announce that the fuel-mirror tool is not going to be a part of
>> Fuel any more. Its functionality is to build/clone rpm/deb repos and modify
>> Fuel releases repository lists (metadata).
>> >
>> > Since Fuel 10.0 it is recommended to use other available tools for
>> managing local deb/rpm repositories.
>> >
>> > Packetary is a good example [0]. Packetary is ideal if one needs to
>> create a partial mirror of a deb/rpm repository, i.e. mirror that contains
>> not all available packages but only a subset of packages. To create a full
>> mirror it is better to use debmirror, rsync, or any other tools that are
>> available.
>> >
>> > To modify releases repository lists one can use commands which are
>> available by default on the Fuel admin node since Newton.
>> >
>> > # list of available releases
>> > fuel2 release list
>> > # list of repositories for a release
>> > fuel2 release repos list 
>> > # save list of repositories for a release in yaml format
>> > fuel2 release repos list  -f yaml | tee repos.yaml
>> > # modify list of repositories
>> > vim repos.yaml
>> > # update list of repositories for a release from yaml file
>> > fuel2 release repos update  -f repos.yaml
>> >
>> > They are provided by python-fuelclient [1] package and were introduced
>> by this [2] patch.
>> >
>> >
>> > [0] https://wiki.openstack.org/wiki/Packetary
>> > [1] https://github.com/openstack/python-fuelclient
>> > [2] https://review.openstack.org/#/c/326435/
>> >
>> >
>> > Vladimir Kozhukalov
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] The bug list - how to use it, and how to triage

2016-06-30 Thread Rob Cresswell
Hello!

I just wanted to talk through Horizon's bug list ( 
https://bugs.launchpad.net/horizon/ ), and how to use it to find issues you can 
help solve or review, as well as how to help triage bugs if you have time to 
help out.

Using the bug list:

- The "Tags" section on the right hand side is your friend. We have a whole 
bunch of tags related to language (like "angularjs"), the bug content 
("integration-tests" or "ux") or the type of service knowledge that may be 
useful in solving the bug ("nova", "neutron" etc). If you're just starting out, 
check out the "low-hanging-fruit" tag, which is used to indicate 
straightforward bugs for your first couple of contributions.

- If you're looking for code to review, try using the Advanced Search to filter 
for Critical/High priority bugs that are In Progress. This means they are 
important to us, and have a patch up on Gerrit. Alternatively, scroll down and 
select the next milestone ("newton-2" in this case) from the 
"Milestone-targeted bugs" on the right hand side. These are bugs that have been 
triaged and we'd like to have complete for this milestone.

- Don't be intimidated by bugs marked High/Critical. Priority is often not 
linked to complexity, so it's worth looking into.

- If you assign yourself to a bug, but are unable to complete it, remember to 
remove yourself as an assignee and set the status back to "Confirmed" or "New"; 
this makes it much easier for us to track which bugs are being actively worked 
on.

Triaging the bug list:

- https://wiki.openstack.org/wiki/BugTriage This is a great step by step piece 
of documentation on triage, and definitely worth reading through to understand 
the prioritisation system.

- Target bugs to the "Next" milestone by default. This makes it easy to see 
whether bugs have been triaged or not. If a bug is important for this 
milestone, or looks close to completion, just target it to the next milestone 
right away.

- Remember to use tags, but be careful how you use them. Generally, we use the 
service name tags, like "nova", "swift" etc. to indicate that specific 
knowledge of the service may be useful for this bug. Just because a bug is on 
the Instances panel does not mean it should immediately be tagged with "nova"; 
consider whether it is actually service-specific, or is really a UI or other 
code issue.

- You don't need to be on the bug team to triage; if there's something you're 
unable to do, just ping a member of the bug team: 
https://launchpad.net/~horizon-bugs

Hope this helps! If anyone has any other questions, reply here or ping me on 
IRC (robcresswell)

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] metering interval, global/local

2016-06-30 Thread László Hegedüs
Hi,

as far as I can tell, Monasca does not support different metering intervals for 
different meters. There is the check_freq value in agent.yaml, which is global.
Am I right about this?

Did you consider the option of specifying check_freq for specific meters?
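
(For reference, the global knob being discussed lives in agent.yaml; the
value below is made up:)

```yaml
# monasca-agent agent.yaml (excerpt): a single collection interval,
# in seconds, applied to every check/meter the agent runs.
check_freq: 30
```

A per-meter interval would presumably require a new per-check setting on top
of this global one, which is what the question above is proposing.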

BR,
Laszlo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecate fuel-upgrade repository

2016-06-30 Thread Vladimir Kozhukalov
Yet another patch [1] that makes fuel-upgrade read only.

[1] https://review.openstack.org/#/c/335841/

Vladimir Kozhukalov

On Wed, Jun 29, 2016 at 6:42 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> This is yet another related patch [1] that removes the code from the master
> branch and adds a retirement warning. Review is welcome.
>
> [1] https://review.openstack.org/#/c/334949/
>
>
> Vladimir Kozhukalov
>
> On Wed, Jun 29, 2016 at 4:00 PM, Ilya Kharin  wrote:
>
>> Hi Vladimir,
>>
>> This change is reasonable because the fuel-upgrade repository has not been
>> supported since the 8.0 release, due to the fact that upgrade activities
>> were consolidated in the fuel-octane repository. Also, as far as I know,
>> upgrade tarballs are no longer supported for the old releases (7.0 or earlier).
>>
>> I see only one concern here: it can prevent creating fixes for
>> upgrade tarballs for old releases. Otherwise I have no objections to
>> sending this repository into retirement.
>>
>> Best regards,
>> Ilya Kharin.
>>
>> On Tue, Jun 28, 2016 at 6:18 PM Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> This patch [1] is a part of project retirement procedure [2]. Review is
>>> welcome.
>>>
>>> [1] https://review.openstack.org/#/c/335085/
>>> [2]
>>> http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Tue, Jun 28, 2016 at 1:41 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 Please be informed that fuel-upgrade [1] repository is going to be
 deprecated. We used to develop Fuel admin node upgrade scenarios in this
 repo, but now all upgrade related stuff is in fuel-octane [2] repo. So,
 fuel-upgrade is to be removed from the list of official Fuel repos [3].
 Fuel-upgrade will stay available for a while, perhaps for about a year or so,
 in case some Fuel users want to build an upgrade tarball.


 [1] https://github.com/openstack/fuel-upgrade
 [2] https://github.com/openstack/fuel-octane
 [3] https://review.openstack.org/#/c/334903/

 Vladimir Kozhukalov

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Keep / Remove 800 UTC Meeting time

2016-06-30 Thread Rob Cresswell
Hi everyone,

I've mentioned in the past few meetings that attendance for the 800 UTC meeting 
time has been dwindling. As it stands, there are really only 3 regular 
attendees (myself included). I think we should consider scrapping it, and just 
use the 2000 UTC slot each week as a combined Horizon / Drivers meeting.

Does anyone have any strong objections to this? I'm more than happy to run the 
meeting if people would like to attend, but it seems wasteful to drag people 
into it each week if it's empty.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday June 30th at 9:00 UTC

2016-06-30 Thread Ghanshyam Mann
Hello everyone,



This is a reminder that the weekly OpenStack QA team IRC meeting will be Thursday, 
June 30th at 9:00 UTC in the #openstack-meeting channel.



The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_June_30th_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.



Sorry for delay in mail.



To help people figure out what time 9:00 UTC is in other timezones the next 
meeting will be at:



04:00 EST

18:00 JST

18:30 ACST

11:00 CEST

04:00 CDT

02:00 PDT


Thanks & Regards,
Ghanshyam Mann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev