Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Dustin Schoenbrun
+1 He's been a great resource to the community and the nomination is 
very well deserved!


On 02/02/2016 12:58 PM, Knight, Clinton wrote:

+1  Great addition.  Welcome, Rodrigo!

Clinton


On 2/2/16, 12:30 PM, "Ben Swartzlander"  wrote:


Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
release and has been working on share migration (an important core
feature) for the last 2 releases. Since Tokyo he has dedicated himself
to reviews and community participation. I would like to nominate him to
join the Manila core reviewer team.

-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Dustin Schoenbrun
OpenStack Quality Engineer
Red Hat, Inc.
dscho...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Valeriy Ponomaryov
Rodrigo is the first person in the queue to take on such responsibility.
+2

On Tue, Feb 2, 2016 at 8:14 PM, Dustin Schoenbrun 
wrote:

> +1 He's been a great resource to the community and the nomination is very
> well deserved!
>
>
> On 02/02/2016 12:58 PM, Knight, Clinton wrote:
>
>> +1  Great addition.  Welcome, Rodrigo!
>>
>> Clinton
>>
>>
>> On 2/2/16, 12:30 PM, "Ben Swartzlander"  wrote:
>>
>> Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
>>> release and has been working on share migration (an important core
>>> feature) for the last 2 releases. Since Tokyo he has dedicated himself
>>> to reviews and community participation. I would like to nominate him to
>>> join the Manila core reviewer team.
>>>
>>> -Ben Swartzlander
>>> Manila PTL
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> --
> Dustin Schoenbrun
> OpenStack Quality Engineer
> Red Hat, Inc.
> dscho...@redhat.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][oslo][all] oslo.context 2.0.0 release (mitaka)

2016-02-02 Thread Victor Stinner

(I added [all] to the topic)

Hi,

On 02/02/2016 17:20, dava...@gmail.com wrote:

We are thrilled to announce the release of:
oslo.context 2.0.0: Oslo Context library
(...)
4a8a1df Fix request_id type on Python 3: use text (Unicode)


This is a deliberate, incompatible change to the API, but only on 
Python 3. If you notice any recent failure related to oslo.context (2.0) 
on Python 3, please bug me! I will investigate the issue.
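
For code that still passes request IDs around as bytes somewhere, a minimal 
defensive shim looks like this (the helper name is mine, not part of 
oslo.context):

    import six


    def to_text_request_id(request_id):
        # Hypothetical helper, not part of oslo.context: 2.0.0 returns
        # text (unicode) request IDs on Python 3, so decode any bytes value
        # left over from older code paths before comparing or logging.
        if isinstance(request_id, six.binary_type):
            return request_id.decode('utf-8')
        return request_id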


I analyzed the code of all OpenStack applications and prepared the code 
of some applications in master and liberty to avoid any gate failure. 
But shit happens ;-)


Hopefully, this change fixes oslo.log on Python 3.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Nominating Vijendar Komalla for Solum core

2016-02-02 Thread Devdatta Kulkarni
Hi team,

I would like to propose Vijendar Komalla for our core team. Vijendar has been 
actively
contributing to Solum for several months now submitting patches, providing 
great reviews,
and participating actively in our IRC meetings and on Solum IRC channel.
You can find Vijendar's contributions at [1][2].

Please respond with your votes.

Regards,
Devdatta

[1] http://stackalytics.com/?module=solum_id=vijendar-komalla
[2] http://stackalytics.com/?module=python-solumclient_id=vijendar-komalla
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a simple new tool: git-restack

2016-02-02 Thread Paul Michali
Sounds interesting... the link
https://docs.openstack.org/infra/git-restack/ referenced
as the home page on PyPI is broken.

Regards,

PCM


On Tue, Feb 2, 2016 at 12:54 PM James E. Blair  wrote:

> Hi,
>
> I'm pleased to announce a new and very simple tool to help with managing
> large patch series with our Gerrit workflow.
>
> In our workflow we often find it necessary to create a series of
> dependent changes in order to make a larger change in manageable chunks,
> or because we have a series of related changes.  Because these are part
> of larger efforts, it often seems like they are even more likely to have
> to go through many revisions before they are finally merged.  Each step
> along the way reviewers look at the patches in Gerrit and leave
> comments.  As a reviewer, I rely heavily on looking at the difference
> between patchsets to see how the series evolves over time.
>
> Occasionally we also find it necessary to re-order the patch series, or
> to include or exclude a particular patch from the series.  Of course the
> interactive git rebase command makes this easy -- but in order to use
> it, you need to supply a base upon which to "rebase".  A simple choice
> would be to rebase the series on master, however, that creates
> difficulties for reviewers if master has moved on since the series was
> begun.  It is very difficult to see any actual intended changes between
> different patch sets when they have different bases which include
> unrelated changes.
>
> The best thing to do to make it easy for reviewers (and yourself as you
> try to follow your own changes) is to keep the same "base" for the
> entire patch series even as you "rebase" it.  If you know how long your
> patch series is, you can simply run "git rebase -i HEAD~N" where N is
> the patch series depth.  But if you're like me and have trouble with
> numbers other than 0 and 1, then you'll like this new command.
>
> The git-restack command is very simple -- it looks for the most recent
> commit that is both in your current branch history and in the branch it
> was based on.  It uses that as the base for an interactive rebase
> command.  This means that any time you are editing a patch series, you
> can simply run:
>
>   git restack
>
> and you will be placed in an interactive rebase session with all of the
> commits in that patch series staged.  Git-restack is somewhat
> branch-aware as well -- it will read a .gitreview file to find the
> remote branch to compare against.  If your stack was based on a
> different branch, simply run:
>
>   git restack <branch>
>
> and it will use that branch for comparison instead.
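
For the curious, the base-finding step Jim describes is essentially git's 
merge-base computation. A rough Python sketch of the idea (not git-restack's 
actual code):

    import subprocess


    def find_restack_base(branch='origin/master'):
        # Newest commit reachable from both HEAD and the target branch;
        # the real tool also reads .gitreview to choose the branch.
        return subprocess.check_output(
            ['git', 'merge-base', 'HEAD', branch]).decode().strip()


    def restack(branch='origin/master'):
        # Equivalent of "git rebase -i <base>" with the computed base.
        subprocess.check_call(
            ['git', 'rebase', '-i', find_restack_base(branch)])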
>
> Git-restack is on pypi so you can install it with:
>
>   pip install git-restack
>
> The source code is based heavily on git-review and is in Gerrit under
> openstack-infra/git-restack.
>
> https://pypi.python.org/pypi/git-restack/1.0.0
> https://git.openstack.org/cgit/openstack-infra/git-restack
>
> I hope you find this useful,
>
> Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [taskflow] Begging for reviews

2016-02-02 Thread Joshua Harlow

Including oslo,

If any new oslo cores (or others in general) want to learn more about 
taskflow please feel free to bug me or others. More than willing to show 
others the power of what it offers and explain it... and get more 
reviewers and such as a result of that ;)


https://docs.google.com/presentation/d/14r92RXQLXYcJwygWM4OsfeUyvNHtZAYl572G9DFwigo/ 
(a useful slideset for interested folks).


-Josh

Greg Hill wrote:

Normally I reserve the begging for IRC, but since the other cores aren't
always on, I'm taking the shotgun approach. If you aren't a core on
taskflow, then you can safely skip the rest of this email.

We have a number of open reviews with a single +2 that need another core
reviewer to sign off. Included in that is a blocker for the next release:

https://review.openstack.org/#/c/272748/

That fixes a bug, introduced since the last release, in how exception
arguments from tasks are trapped; it affects anyone running the worker-based
engine. The case I ran into was in requests, where it did something akin to:

raise RetryException(ConnectionError())

That inner exception could not be converted to JSON so it threw an
exception and aborted the job.
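
The failure is easy to reproduce outside taskflow, since exception instances
are not JSON-serializable; a standalone illustration (with ValueError standing
in for RetryException):

    import json

    try:
        raise ValueError(ConnectionError('boom'))  # stand-in for RetryException
    except ValueError as exc:
        try:
            json.dumps(exc.args)  # TypeError: ConnectionError isn't serializable
        except TypeError:
            # One possible workaround: stringify non-serializable args
            # before shipping them to workers.
            print(json.dumps([repr(a) for a in exc.args]))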

Also, these are important changes (IMO):

Make the worker stop processing queued tasks when it stops:
https://review.openstack.org/272862

Similarly for the conductor:
https://review.openstack.org/270319

Allow revert methods to have different method signatures from execute
and just work:
https://review.openstack.org/270853

(depends on https://review.openstack.org/267131)

Don't revert tasks that were never executed (I'm simplifying):
https://review.openstack.org/273731 

And more. I'm just highlighting the ones that most affect me personally,
but in general, the review queue is really backlogged. Any help is
appreciated. Bring it!

https://review.openstack.org/#/q/status:open+project:openstack/taskflow+branch:master


Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Knight, Clinton
+1  Great addition.  Welcome, Rodrigo!

Clinton


On 2/2/16, 12:30 PM, "Ben Swartzlander"  wrote:

>Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
>release and has been working on share migration (an important core
>feature) for the last 2 releases. Since Tokyo he has dedicated himself
>to reviews and community participation. I would like to nominate him to
>join the Manila core reviewer team.
>
>-Ben Swartzlander
>Manila PTL
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS bootstrap image retirement

2016-02-02 Thread Dmitry Klenov
Hi Sergey,

I fully support this idea. It was our plan as well when we were developing
the Ubuntu Bootstrap feature. So let's proceed with the CentOS bootstrap removal.

BR,
Dmitry.

On Tue, Feb 2, 2016 at 2:55 PM, Sergey Kulanov 
wrote:

> Hi Folks,
>
> I think it's time to declare the CentOS bootstrap image retired.
> Since Fuel 8.0 we've switched to using the Ubuntu bootstrap image [1, 2],
> and the CentOS one became deprecated, so in Fuel 9.0 we can freely remove
> it [2].
> For now we build the CentOS bootstrap image together with the ISO and then
> package it into an rpm [3], so by removing fuel-bootstrap-image [3] we:
>
> * simplify patching/update story, since we don't need to rebuild/deliver
> this
>   package on changes in dependent packages [4].
>
> * speed up the ISO build process, since building the CentOS bootstrap image
>   takes ~20% of the build-iso time.
>
> We've prepared related blueprint for this change [5] and spec [6]. We also
> have some draft patchsets [7]
> which passed BVT tests.
>
> So the next steps are:
> * get feedback by reviewing the spec/patches;
> * remove related code from the rest of the Fuel projects (fuel-menu,
> fuel-devops, fuel-qa).
>
>
> Thank you
>
>
> [1]
> https://specs.openstack.org/openstack/fuel-specs/specs/7.0/fuel-bootstrap-on-ubuntu.html
> [2]
> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/dynamically-build-bootstrap.html
> [3]
> https://github.com/openstack/fuel-main/blob/master/packages/rpm/specs/fuel-bootstrap-image.spec
> [4]
> https://github.com/openstack/fuel-main/blob/master/bootstrap/module.mk#L12-L50
> [5]
> https://blueprints.launchpad.net/fuel/+spec/remove-centos-bootstrap-from-fuel
> [6] https://review.openstack.org/#/c/273159/
> [7]
> https://review.openstack.org/#/q/topic:bp/remove-centos-bootstrap-from-fuel
>
>
> --
> Sergey
> DevOps Engineer
> IRC: SergK
> Skype: Sergey_kul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Erlon Cruz
+1
Great addition. I'm not active in Manila, but we work on the same team in
our company and I see he is doing great work! Well deserved!

On Tue, Feb 2, 2016 at 4:43 PM, Valeriy Ponomaryov  wrote:

> Rodrigo is the first person in the queue to take on such responsibility.
> +2
>
> On Tue, Feb 2, 2016 at 8:14 PM, Dustin Schoenbrun 
> wrote:
>
>> +1 He's been a great resource to the community and the nomination is very
>> well deserved!
>>
>>
>> On 02/02/2016 12:58 PM, Knight, Clinton wrote:
>>
>>> +1  Great addition.  Welcome, Rodrigo!
>>>
>>> Clinton
>>>
>>>
>>> On 2/2/16, 12:30 PM, "Ben Swartzlander"  wrote:
>>>
>>> Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
 release and has been working on share migration (an important core
 feature) for the last 2 releases. Since Tokyo he has dedicated himself
 to reviews and community participation. I would like to nominate him to
 join the Manila core reviewer team.

 -Ben Swartzlander
 Manila PTL


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> --
>> Dustin Schoenbrun
>> OpenStack Quality Engineer
>> Red Hat, Inc.
>> dscho...@redhat.com
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Midcycle Sprint Summary

2016-02-02 Thread Matt Fischer
Perhaps we should cover and assign each module in the meeting after the
release?

Actually removing the code and tests in many cases would be a good
assignment for people trying to get more commits and experience.
On Feb 1, 2016 2:22 PM, "Cody Herriges"  wrote:

> Emilien Macchi wrote:
> > Last week, we had our midcycle sprint.
> > Our group did a great job and here is a summary of what we worked on:
> >
>
> My attention at the office was stolen quite a few times by finishing up
> work for our production cloud deployment, but I worked on the
> puppet-cinder Mitaka deprecations when I could.  The first round, removing
> old code that had previously been deprecated, is done, and I have started
> on a second pass covering the new deprecations being introduced in Mitaka
> by upstream cinder.
>
> This is the first time I've sat down to actually just hunt down and
> implement deprecations, and the number one thing I learned is that it is really
> time consuming.  We'll need several people working on this if we want
> them complete for every module by release time.
>
>
> --
> Cody
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Nominating Vijendar Komalla for Solum core

2016-02-02 Thread Ed Cranford
+1
On Feb 2, 2016 4:07 PM, "James Y. Li"  wrote:

> +1
>
>
> On Tue, Feb 2, 2016 at 1:33 PM, Devdatta Kulkarni <
> devdatta.kulka...@rackspace.com> wrote:
>
>> Hi team,
>>
>> I would like to propose Vijendar Komalla for our core team. Vijendar has
>> been actively
>> contributing to Solum for several months now submitting patches,
>> providing great reviews,
>> and participating actively in our IRC meetings and on Solum IRC channel.
>> You can find Vijendar's contributions at [1][2].
>>
>> Please respond with your votes.
>>
>> Regards,
>> Devdatta
>>
>> [1] http://stackalytics.com/?module=solum_id=vijendar-komalla
>> [2]
>> http://stackalytics.com/?module=python-solumclient_id=vijendar-komalla
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat-translator][tacker] Openstack Artifact Repository

2016-02-02 Thread Mikhail Fedosin
Hello Bruce! So glad to hear it!

Just want to inform you that artifacts will soon be a standalone service
under the openstack/glance repo. The service will be called Glare (GLance
Artifact REpository).

We really want to see you involved! There is a weekly meeting held on
Mondays at 5PM UTC in #openstack-meeting-alt:
https://etherpad.openstack.org/p/glance-artifacts-sub-team-meeting-agenda
There we can discuss everything you need.

Thanks for looking at Glare!

Best regards,
Mikhail Fedosin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Ravi, Goutham
+1 ganso's been extremely diligent and helpful with his reviews! He's a great 
addition!

From: Erlon Cruz
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, February 2, 2016 at 2:39 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

+1
Great addition. I'm not active in Manila, but we work on the same team in our
company and I see he is doing great work! Well deserved!

On Tue, Feb 2, 2016 at 4:43 PM, Valeriy Ponomaryov wrote:
Rodrigo is the first person in the queue to take on such responsibility.
+2

On Tue, Feb 2, 2016 at 8:14 PM, Dustin Schoenbrun wrote:
+1 He's been a great resource to the community and the nomination is very well 
deserved!


On 02/02/2016 12:58 PM, Knight, Clinton wrote:
+1  Great addition.  Welcome, Rodrigo!

Clinton


On 2/2/16, 12:30 PM, "Ben Swartzlander" wrote:

Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
release and has been working on share migration (an important core
feature) for the last 2 releases. Since Tokyo he has dedicated himself
to reviews and community participation. I would like to nominate him to
join the Manila core reviewer team.

-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Dustin Schoenbrun
OpenStack Quality Engineer
Red Hat, Inc.
dscho...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cinder Mitaka Midcycle Summary

2016-02-02 Thread Jordan Pittier
Thanks a lot for this summary. I enjoyed the reading.

Jordan

On Tue, Feb 2, 2016 at 10:14 PM, Sean McGinnis 
wrote:

> War and Peace
> or
> Notes from the Cinder Mitaka Midcycle Sprint
> January 26-29
>
> Etherpads from discussions:
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-1
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-2
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-4
>
> *Topics Covered*
> 
> In no particular order...
>
> Disable Old Volume Types
> 
> There was a request from an end user to have a mechanism to disable
> a volume type as part of a workflow for progressing from a beta to
> production state.
>
> From what was known of the request, there was some confusion as to
> whether the desired use case could already be met with the existing
> functionality. It was decided nothing would be done for this until
> more input is received explaining what is needed and why it cannot
> be done as it is today.
>
> User Provided Encryption Keys for Volume Encryption
> ===
> The question was raised as to whether we want to allow user specified
> keys. Google has something today where this key can be passed in
> headers.
>
> Some concern with doing this, both from a security and amount of work
> perspective. It was ultimately agreed this was a better fit for a
> cross project discussion.
>
> Adding a Common cinder.conf Setting for Suppressing SSL Warnings
> 
> Log files get a TON of warnings when using a driver that uses the
> requests library internally for communication and you do not have
> a valid signed certificate. Some drivers have gotten around this
> by implementing their own settings for disabling these warnings.
>
> The question was raised whether, although not all drivers use requests
> and are therefore not affected by this, we should still have a common
> config setting to disable these warnings for those drivers that do use
> it.
>
> Different approaches to disabling this will be explored. As long as
> it is clear what the option does, we were not opposed to this.
>
> Nested Quotas
> =
> The current nested quota enforcement is badly broken. There are many
> scenarios that just do not work as expected. There is also some
> confusion around how nested quotas should work. Things like setting
> -1 for a child quota do not work as expected, and things are not
> properly enforced during volume creation.
>
> Glance has also started to look at implementing nested quota support
> based on Cinder's implementation, so we don't want Cinder's broken
> implementation to be propagated to other projects.
>
> Ryan McNair is working with folks on other projects to find
> a better solution and to work through our current issues. This will
> be an ongoing effort for now.
>
> The Future of CLI for python-cinderclient
> =
> A cross project spec has been approved to work toward removing
> individual project CLIs to center on the one common osc CLI. We
> discussed the feasibility of deprecating the cinder CLI in favor
> of focusing all CLI work on osc.
>
> There is also concern about delays getting new functionality
> deployed. First we need to make server side API changes, then get
> them added to the client library, then get them added to osc.
>
> There is not feature parity between the cinder and osc CLIs at the
> moment for cinder functionality. This needs to be addressed first
> before we can consider removing or deprecating anything in the cinder
> client CLI. Once we have the same level of functionality with both,
> we can then decide at what point to only add new CLI commands to osc
> and start deprecating the cinder CLI.
>
> Ivan and Ryan will look into how to implement osc plugins.
>
> We will also look into using cliff and other osc patterns to see if
> we can bring the existing cinder client implementation closer to the
> osc implementation to make the switchover smoother.
>
> API Microversions
> =
> Scott gave an update on the microversion work.
>
> Cinder patch: https://review.openstack.org/#/c/224910
> cinderclient patch: https://review.openstack.org/#/c/248163/
> spec: https://review.openstack.org/#/c/223803/
> Test cases: https://github.com/scottdangelo/TestCinderAPImicroversions
>
> Ben brought up the need to have a new unique URL endpoint for
> this to get around some backward compatibility problems. This new URL
> would be made v3 even though it will initially be the same as v2.
>
> We would like to get this in soon so it has some runtime. There were
> a lot of work items identified though that should get done before we
> land. Scott is going to continue working through these issues.
>
> Async User Reporting
> 

Re: [openstack-dev] [heat] Changing the default Server/SoftwareDeployment transports?

2016-02-02 Thread Steve Baker

On 22/01/16 22:36, Steven Hardy wrote:

Hi all,

Wanted to start some discussion re $subject, context is:

https://bugs.launchpad.net/heat/+bug/1507568

Here, we're hitting issues because by default OS::Nova::Server uses the
POLL_SERVER_CFN transport.  This made sense back when the
SoftwareDeployment stuff was first implemented, but now we have other
options, and there are some negative consequenses of choosing this default:

1. All heat deployments rely on the heat-api-cfn service by default, when
this should really be a CFN compatibility layer.

2. Related to (1) we then require the keystone ec2tokens extension to be
enabled

3. The cfn API action DescribeStackResource is used to retrieve server
metadata.  Because that API has no action to only show the metadata, we get
*all* data for that resource - and recent changes to show all attributes by
default have made this *much* higher overhead than it once was.

4. As mentioned in the bug above, trying to resolve all the attributes
doesn't work, because we're using stack domain user credentials to poll the
CFN API, which don't have access to the related nova API for the server
resource.  This can probably be fixed, but an alternative is just don't use
this API.

So, my question is, now that we have other (better) alternatives, can we
consider switching the Server transport e.g to POLL_SERVER_HEAT by default,
and relatedly the SoftwareDeployment signal_transport to HEAT_SIGNAL?

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport

The advantage of this is it only requires the native heat-api service, when
all other options require some other service/API to exist.

Long term, we should probably consider deprecating the CFN transport for
these (native) resources, but switching the default would be the first step
- what do people think?



I'm OK with switching to POLL_SERVER_HEAT in theory; however, I have a
couple of practical considerations:
1. POLL_SERVER_HEAT doesn't work for me at the moment; I haven't
investigated why:
   WARNING os_collect_config.heat [-] Invalid username or password
(Disable debug mode to suppress these details.) (HTTP 401)
2. We *must* ensure that existing stacks that were launched with default 
POLL_SERVER_CFN continue to work when the default changes to 
POLL_SERVER_HEAT


What I think would be more useful than changing the default in our
release is making puppet-heat set the default to POLL_TEMP_URL if
Swift or Ceph is configured, falling back to POLL_SERVER_HEAT otherwise.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Nominating Vijendar Komalla for Solum core

2016-02-02 Thread James Y. Li
+1


On Tue, Feb 2, 2016 at 1:33 PM, Devdatta Kulkarni <
devdatta.kulka...@rackspace.com> wrote:

> Hi team,
>
> I would like to propose Vijendar Komalla for our core team. Vijendar has
> been actively
> contributing to Solum for several months now submitting patches, providing
> great reviews,
> and participating actively in our IRC meetings and on Solum IRC channel.
> You can find Vijendar's contributions at [1][2].
>
> Please respond with your votes.
>
> Regards,
> Devdatta
>
> [1] http://stackalytics.com/?module=solum_id=vijendar-komalla
> [2]
> http://stackalytics.com/?module=python-solumclient_id=vijendar-komalla
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Cinder Mitaka Midcycle Summary

2016-02-02 Thread Sean McGinnis
War and Peace
or
Notes from the Cinder Mitaka Midcycle Sprint
January 26-29

Etherpads from discussions:
* https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-1
* https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-2
* https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
* https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-4

*Topics Covered*

In no particular order...

Disable Old Volume Types

There was a request from an end user to have a mechanism to disable
a volume type as part of a workflow for progressing from a beta to
production state.

From what was known of the request, there was some confusion as to
whether the desired use case could already be met with the existing
functionality. It was decided nothing would be done for this until
more input is received explaining what is needed and why it cannot
be done as it is today.

User Provided Encryption Keys for Volume Encryption
===
The question was raised as to whether we want to allow user specified
keys. Google has something today where this key can be passed in
headers.

Some concern with doing this, both from a security and amount of work
perspective. It was ultimately agreed this was a better fit for a
cross project discussion.

Adding a Common cinder.conf Setting for Suppressing SSL Warnings

Log files get a TON of warnings when using a driver that uses the
requests library internally for communication and you do not have
a valid signed certificate. Some drivers have gotten around this
by implementing their own settings for disabling these warnings.

The question was raised whether, although not all drivers use requests
and are therefore not affected by this, we should still have a common
config setting to disable these warnings for those drivers that do use
it.

Different approaches to disabling this will be explored. As long as
it is clear what the option does, we were not opposed to this.
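
For reference, the mechanics most drivers rely on look roughly like this; the
option name below is hypothetical, since no common cinder.conf setting has
been agreed on yet:

    import requests
    import urllib3

    # Hypothetical option name -- no common cinder.conf setting exists yet.
    suppress_requests_ssl_warnings = True

    if suppress_requests_ssl_warnings:
        urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    # Without the suppression above, every unverified call like this one
    # emits an InsecureRequestWarning into the logs.
    requests.get('https://backend.example.com/api', verify=False)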

Nested Quotas
=
The current nested quota enforcement is badly broken. There are many
scenarios that just do not work as expected. There is also some
confusion around how nested quotas should work. Things like setting
-1 for a child quota do not work as expected, and things are not
properly enforced during volume creation.

Glance has also started to look at implementing nested quota support
based on Cinder's implementation, so we don't want Cinder's broken
implementation to be propagated to other projects.

Ryan McNair is working with folks on other projects to find
a better solution and to work through our current issues. This will
be an ongoing effort for now.
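
To make the -1 discussion concrete: the intent under discussion is roughly
that a child's -1 ("unlimited") quota should still be bounded by its
ancestors' limits. A toy sketch of such a check (my illustration, not
Cinder's code):

    class Project(object):
        def __init__(self, limit, used=0, parent=None):
            self.limit = limit    # -1 means "no limit at this level"
            self.used = used      # must include usage rolled up from children
            self.parent = parent


    def quota_allows(project, requested):
        # Walk up the hierarchy; every ancestor with a real limit needs room.
        node = project
        while node is not None:
            if node.limit != -1 and node.used + requested > node.limit:
                return False
            node = node.parent
        return True


    # Example: child is "unlimited" (-1) but the parent cap still applies.
    parent = Project(limit=10, used=8)
    child = Project(limit=-1, used=8, parent=parent)
    assert quota_allows(child, 2)
    assert not quota_allows(child, 3)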

The Future of CLI for python-cinderclient
=
A cross project spec has been approved to work toward removing
individual project CLIs to center on the one common osc CLI. We
discussed the feasibility of deprecating the cinder CLI in favor
of focusing all CLI work on osc.

There is also concern about delays getting new functionality
deployed. First we need to make server side API changes, then get
them added to the client library, then get them added to osc.

There is not feature parity between the cinder and osc CLIs at the
moment for cinder functionality. This needs to be addressed first
before we can consider removing or deprecating anything in the cinder
client CLI. Once we have the same level of functionality with both,
we can then decide at what point to only add new CLI commands to osc
and start deprecating the cinder CLI.

Ivan and Ryan will look into how to implement osc plugins.

We will also look into using cliff and other osc patterns to see if
we can bring the existing cinder client implementation closer to the
osc implementation to make the switchover smoother.

API Microversions
=
Scott gave an update on the microversion work.

Cinder patch: https://review.openstack.org/#/c/224910
cinderclient patch: https://review.openstack.org/#/c/248163/
spec: https://review.openstack.org/#/c/223803/
Test cases: https://github.com/scottdangelo/TestCinderAPImicroversions

Ben brought up the need to have a new unique URL endpoint for
this to get around some backward compatibility problems. This new URL
would be made v3 even though it will initially be the same as v2.

We would like to get this in soon so it has some runtime. There were
a lot of work items identified though that should get done before we
land. Scott is going to continue working through these issues.
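
For context, the proposed scheme boils down to the client sending a version
header against the new /v3 endpoint, along these lines (a sketch only: the
header name shown is the one Cinder ultimately adopted, the in-flight patch
may differ, and the endpoint, project ID, and token are placeholders):

    import requests

    # Placeholder endpoint, project ID, and token; illustrative only.
    resp = requests.get(
        'http://cinder.example.com/v3/my-project-id/volumes',
        headers={'X-Auth-Token': 'my-token',
                 'OpenStack-API-Version': 'volume 3.0'})
    print(resp.status_code)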

Async User Reporting

Alex Meade and Sheel have been working on ways to report back better
information for async operations.

https://etherpad.openstack.org/p/mitaka-cinder-midcycle-user-notifications

We will store data in the database rather than the originally
investigated Zaqar approach. There was general agreement that this
work should move forward and 

Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Preston L. Bannister
On a side note, of the folk with interest in this thread, how many are
going to the Austin OpenStack conference? Would you be interested in
presenting as a panel?

I submitted a proposal for a presentation on "State of the Art for in-Cloud
backup of high-value Applications". The notion is to give context for the
folk who need this sort of backup: something about where we have been, where
we are, and what might become possible. I think it would be great to pull in
folk from Freezer and Ekko. Jay Pipes seems to like to weigh in on the topic,
and could represent Nova? I will gladly add interested folk as speakers! (Of
course, the odds of winning a slot are pretty low, but worth a shot.)

Any folk who expect to be in Austin, and are interested?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Partition images in ironic

2016-02-02 Thread Arun SAG
Hi All,

I am looking for operators who have deployed an OS using partition images
in ironic. I have the following questions:

1. The partition image deployment by default seems to install grub2. How
did that work for distros like RHEL 6, which ship a legacy version of grub?

2. Has anyone created and supported LVM/soft RAID with partition
images? How did you do that? How hard was it?


-- 
Arun S A G
http://zer0c00l.in/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Sean McGinnis
On Tue, Feb 02, 2016 at 02:04:44PM -0800, Preston L. Bannister wrote:
> On a side note, of the folk with interest in this thread, how many are
> going to the Austin OpenStack conference? Would you be interested in
> presenting as a panel?
> 
> I submitted a proposal for a presentation on "State of the Art for in-Cloud
> backup of high-value Applications". The notion is to give context for the
> folk who need this sort of backup: something about where we have been, where
> we are, and what might become possible. I think it would be great to pull in
> folk from Freezer and Ekko. Jay Pipes seems to like to weigh in on the topic,
> and could represent Nova? I will gladly add interested folk as speakers! (Of
> course, the odds of winning a slot are pretty low, but worth a shot.)

I'd love to see a presentation on Ekko, Freezer, Smaug, as well as the
built in capabilities such as Cinder backup.

It would be interesting to see an overview of all the existing and planned
capabilities for backup. It might also be a good way to gain interest in
one or more of these projects, and possibly some operator feedback on
what would be interesting and desired in a final solution.

> 
> Any folk who expect to be in Austin, and are interested?

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases for multiple L3 backends

2016-02-02 Thread Eichberger, German
Not that you could call it scheduling. The intent was that the user could pick 
the best flavor for his task (e.g. a gold router as opposed to a silver one). 
The system then would “schedule” the driver configured for gold or silver. 
Rescheduling wasn’t really a consideration…

German

From: Doug Wiegley
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, February 1, 2016 at 8:17 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends

Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
intention was that any driver you put in a single flavor had equivalent 
capabilities/plumbed to the same networks/etc.

doug


On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:


Hi all,

I've been working on an implementation of the multiple L3 backends RFE[1] using 
the flavor framework and I've run into some snags with the use-cases.[2]

The first use cases are relatively straightforward where the user requests a 
specific flavor and that request gets dispatched to a driver associated with 
that flavor via a service profile. However, several of the use-cases are based 
around the idea that there is a single flavor with multiple drivers and a 
specific driver will need to be used depending on the placement of the router 
interfaces. i.e. a router cannot be bound to a driver until an interface is 
attached.

This creates some painful coordination problems amongst drivers. For example, 
say the first two networks that a user attaches a router to can be reached by 
all drivers because they use overlays so the first driver chosen by the 
framework works  fine. Then the user connects to an external network which is 
only reachable by a different driver. Do we immediately reschedule the entire 
router at that point to the other driver and interrupt the traffic between the 
first two networks?

Even if we are fine with a traffic interruption for rescheduling, what should 
we do when a failure occurs half way through switching over because the new 
driver fails to attach to one of the networks (or the old driver fails to 
detach from one)? It would seem the correct API experience would be switch 
everything back and then return a failure to the caller trying to add an 
interface. This is where things get messy.

If there is a failure during the switch back, we now have a single router's 
resources smeared across two drivers. We can drop the router into the ERROR 
state and re-attempt the switch in a periodic task, or maybe just leave it 
broken.

How should we handle this much orchestration? Should we pull in something like 
taskflow, or maybe defer that use case for now?

What I want to avoid is what happened with ML2 where error handling is still a 
TODO in several cases. (e.g. Any post-commit update or delete failures in 
mechanism drivers will not trigger a revert in state.)

1. https://bugs.launchpad.net/neutron/+bug/1461133
2. 
https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases

--
Kevin Benton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases for multiple L3 backends

2016-02-02 Thread Kevin Benton
Choosing from multiple drivers for the same flavor is scheduling. I didn't
mean automatically selecting other flavors.
On Feb 2, 2016 17:53, "Eichberger, German" 
wrote:

> Not that you could call it scheduling. The intent was that the user could
> pick the best flavor for his task (e.g. a gold router as opposed to a
> silver one). The system then would “schedule” the driver configured for
> gold or silver. Rescheduling wasn’t really a consideration…
>
> German
>
> From: Doug Wiegley
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Monday, February 1, 2016 at 8:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use
> cases for multiple L3 backends
>
> Yes, scheduling was a big gnarly wart that was punted for the first pass.
> The intention was that any driver you put in a single flavor had equivalent
> capabilities/plumbed to the same networks/etc.
>
> doug
>
>
On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:
>
>
> Hi all,
>
> I've been working on an implementation of the multiple L3 backends RFE[1]
> using the flavor framework and I've run into some snags with the
> use-cases.[2]
>
> The first use cases are relatively straightforward where the user requests
> a specific flavor and that request gets dispatched to a driver associated
> with that flavor via a service profile. However, several of the use-cases
> are based around the idea that there is a single flavor with multiple
> drivers and a specific driver will need to be used depending on the
> placement of the router interfaces. i.e. a router cannot be bound to a
> driver until an interface is attached.
>
> This creates some painful coordination problems amongst drivers. For
> example, say the first two networks that a user attaches a router to can be
> reached by all drivers because they use overlays so the first driver chosen
> by the framework works  fine. Then the user connects to an external network
> which is only reachable by a different driver. Do we immediately reschedule
> the entire router at that point to the other driver and interrupt the
> traffic between the first two networks?
>
> Even if we are fine with a traffic interruption for rescheduling, what
> should we do when a failure occurs half way through switching over because
> the new driver fails to attach to one of the networks (or the old driver
> fails to detach from one)? It would seem the correct API experience would
> be switch everything back and then return a failure to the caller trying to
> add an interface. This is where things get messy.
>
> If there is a failure during the switch back, we now have a single
> router's resources smeared across two drivers. We can drop the router into
> the ERROR state and re-attempt the switch in a periodic task, or maybe just
> leave it broken.
>
> How should we handle this much orchestration? Should we pull in something
> like taskflow, or maybe defer that use case for now?
>
> What I want to avoid is what happened with ML2 where error handling is
> still a TODO in several cases. (e.g. Any post-commit update or delete
> failures in mechanism drivers will not trigger a revert in state.)
>
> 1. https://bugs.launchpad.net/neutron/+bug/1461133
> 2. https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Neutron] IPv6 related intermittent test failures

2016-02-02 Thread Armando M.
Folks,

We have some IPv6-related bugs [1,2,3] that have been lingering for some
time now. They have been hurting the gate (e.g., [4] is the most recent
offending failure), and since they have had neither owners nor a plan of
action for some time, I made the hard decision to skip them [5] ahead of
the busy times to come.

Now one might argue that skipping them is counterproductive because it may
allow other regressions to sneak in, but I am hoping that this
controversial action will indeed smoke out the right folks.

Comments welcome.

Regards,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1477192
[2] https://bugs.launchpad.net/neutron/+bug/1509004
[3] https://bugs.launchpad.net/openstack-gate/+bug/1540983
[4]
http://logs.openstack.org/37/264937/5/gate/gate-tempest-dsvm-neutron-full/afeaabd//logs/testr_results.html.gz
[5] https://review.openstack.org/#/c/275457/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Opinion/help for the eavesdrop home page

2016-02-02 Thread Tony Breeds
On Tue, Feb 02, 2016 at 10:43:52AM -0500, Doug Hellmann wrote:

> The columns are definitely more visually appealing, but I agree the
> ordering from left to right is unusual. How hard would it be to change
> that?

yaml2ical:
https://review.openstack.org/#/c/275459/

irc-meetings:
https://review.openstack.org/#/c/275461

As it's a little fiddly to run the combination in the gate, I have saved the
generated HTML here:
http://bakeyournoodle.com/~tony/OpenStack_Meetings-index.html

Thoughts?

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Sam Yaple
On Tue, Feb 2, 2016 at 11:31 PM, Sean McGinnis 
wrote:

> On Tue, Feb 02, 2016 at 02:04:44PM -0800, Preston L. Bannister wrote:
> > I submitted a proposal for a presentation on "State of the Art for
> > in-Cloud backup of high-value Applications". The notion is to give context
> > for the folk who need this sort of backup: something about where we have
> > been, where we are, and what might become possible. I think it would be
> > great to pull in folk from Freezer and Ekko. Jay Pipes seems to like to
> > weigh in on the topic, and could represent Nova? I will gladly add
> > interested folk as speakers! (Of course, the odds of winning a slot are
> > pretty low, but worth a shot.)
>
> I'd love to see a presentation on Ekko, Freezer, Smaug, as well as the
> built in capabilities such as Cinder backup.
>

I submitted an Ekko talk "The 'B' Word -- Backups in OpenStack". Title
seems all inclusive, but in reality I am just talking about the block-based
side of backups. I am co-presenting with another Ekko dev and we do have a
brief slot in our outline for explaining Ekko's place in the OpenStack
ecosystem and its difference from nova-snapshot or cinder-backup.

Sam Yaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a simple new tool: git-restack

2016-02-02 Thread James E. Blair
Paul Michali  writes:

> Sounds interesting... the link
> https://docs.openstack.org/infra/git-restack/ referenced
> as the home page in PyPI is a broken link.

I'm clearly getting ahead of things.  The correct link is:

  http://docs.openstack.org/infra/git-restack/

Thanks,

Jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-02 Thread Kevin Benton
So flavors are for routers with different behaviors that you want the user
to be able to choose from (e.g. High performance, slow but free, packet
logged, etc). Multiple drivers are for when you have multiple backends
providing the same flavor (e.g. The high performance flavor has several
drivers for various bare metal routers).
On Feb 2, 2016 18:22, "rzang"  wrote:

> What advantage can we get from putting multiple drivers into one flavor
> over strictly limiting one flavor to one driver (or whatever it is called)?
>
> Thanks,
> Rui
>
> -- Original Message --
> From: "Kevin Benton"
> Send time: Wednesday, Feb 3, 2016 8:55 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with
> usecases for multiple L3 backends
>
> Choosing from multiple drivers for the same flavor is scheduling. I didn't
> mean automatically selecting other flavors.
> On Feb 2, 2016 17:53, "Eichberger, German" 
> wrote:
>
>> Not that you could call it scheduling. The intent was that the user could
>> pick the best flavor for his task (e.g. a gold router as opposed to a
>> silver one). The system then would “schedule” the driver configured for
>> gold or silver. Rescheduling wasn’t really a consideration…
>>
>> German
>>
>> From: Doug Wiegley <doug...@parksidesoftware.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Date: Monday, February 1, 2016 at 8:17 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use
>> cases for multiple L3 backends
>>
>> Yes, scheduling was a big gnarly wart that was punted for the first pass.
>> The intention was that any driver you put in a single flavor had equivalent
>> capabilities/plumbed to the same networks/etc.
>>
>> doug
>>
>>
>> On Feb 1, 2016, at 7:08 AM, Kevin Benton <blak...@gmail.com> wrote:
>>
>>
>> Hi all,
>>
>> I've been working on an implementation of the multiple L3 backends RFE[1]
>> using the flavor framework and I've run into some snags with the
>> use-cases.[2]
>>
>> The first use cases are relatively straightforward where the user
>> requests a specific flavor and that request gets dispatched to a driver
>> associated with that flavor via a service profile. However, several of the
>> use-cases are based around the idea that there is a single flavor with
>> multiple drivers and a specific driver will need to be used depending on
>> the placement of the router interfaces. i.e. a router cannot be bound to a
>> driver until an interface is attached.
>>
>> This creates some painful coordination problems amongst drivers. For
>> example, say the first two networks that a user attaches a router to can be
>> reached by all drivers because they use overlays so the first driver chosen
>> by the framework works  fine. Then the user connects to an external network
>> which is only reachable by a different driver. Do we immediately reschedule
>> the entire router at that point to the other driver and interrupt the
>> traffic between the first two networks?
>>
>> Even if we are fine with a traffic interruption for rescheduling, what
>> should we do when a failure occurs half way through switching over because
>> the new driver fails to attach to one of the networks (or the old driver
>> fails to detach from one)? It would seem the correct API experience would
>> be switch everything back and then return a failure to the caller trying to
>> add an interface. This is where things get messy.
>>
>> If there is a failure during the switch back, we now have a single
>> router's resources smeared across two drivers. We can drop the router into
>> the ERROR state and re-attempt the switch in a periodic task, or maybe just
>> leave it broken.
>>
>> How should we handle this much orchestration? Should we pull in something
>> like taskflow, or maybe defer that use case for now?
>>
>> What I want to avoid is what happened with ML2 where error handling is
>> still a TODO in several cases. (e.g. Any post-commit update or delete
>> failures in mechanism drivers will not trigger a revert in state.)
>>
>> 1. https://bugs.launchpad.net/neutron/+bug/1461133
>> 2. https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [glance] Glance Core team additions/removals

2016-02-02 Thread Kairat Kushaev
Thank You! I will try to do my best in contributing to Glance.

Best regards,
Kairat Kushaev

On Tue, Feb 2, 2016 at 8:11 PM, Flavio Percoco  wrote:

> On 26/01/16 10:11 -0430, Flavio Percoco wrote:
>
>>
>> Greetings,
>>
>> I'd like us to have one more core cleanup for this cycle:
>>
>> Additions:
>>
>> - Kairat Kushaev
>> - Brian Rosmaita
>>
>> Both have done amazing reviews either on specs or code and I think they
>> both
>> would be an awesome addition to the Glance team.
>>
>> Removals:
>>
>> - Alexander Tivelkov
>> - Fei Long Wang
>>
>> Fei Long and Alexander are both part of the OpenStack community. However,
>> their
>> focus and time has shifted from Glance and, as it stands right now, it
>> would
>> make sense to have them both removed from the core team. This is not
>> related to
>> their reviews per-se but just prioritization. I'd like to thank both,
>> Alexander
>> and Fei Long, for their amazing contributions to the team. If you guys
>> want to
>> come back to Glance, please, do ask. I'm sure the team will be happy to
>> have you
>> on board again.
>>
>> To all other members of the community. Please, provide your feedback.
>> Unless
>> someone objects, the above will be effective next Tuesday.
>>
>
>
> The following steps were taken:
>
> - Kairat and Brian have been added. Welcome and thanks for joining
> - Fei Long was kept as core. Thanks a lot for weighing in and catching up
> with Glance.
> - Alexander has been removed. I can't stress how sad it is to see Alex go. I
> hope Alex will be able to rejoin in the not-so-distant future.
>
>
> Cheers,
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-02 Thread Jay Lau
Welcome Ton and Egor!!

On Wed, Feb 3, 2016 at 12:04 AM, Adrian Otto 
wrote:

> Thanks everyone for your votes. Welcome Ton and Egor to the core team!
>
> Regards,
>
> Adrian
>
> > On Feb 1, 2016, at 7:58 AM, Adrian Otto 
> wrote:
> >
> > Magnum Core Team,
> >
> > I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core
> Reviewers. Please respond with your votes.
> >
> > Thanks,
> >
> > Adrian Otto
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][Monasca] Launching Grafana UI from Monitoring tab in Monasca plugin for Horizon

2016-02-02 Thread Pradip Mukhopadhyay
Hello,


We followed https://github.com/openstack/monasca-ui/ to integrate the
plugin for Horizon. The Monitoring tab shows up in the left side navigation
perfectly and works as expected. The only problem is: when we try to
launch the Grafana UI, it fails with the following trace:


Page not found (404)
Request Method: GET
Request URL: http://10.74.150.152:8000/grafana/index.html

Using the URLconf defined in openstack_dashboard.urls, Django tried these
URL patterns, in this order:

   1. ^$ [name='splash']
   2. ^api/
   3. ^home/$ [name='user_home']
   4. ^i18n/js/(?P<packages>\S+?)/$ [name='jsi18n']
   5. ^i18n/setlang/$ [name='set_language']
   6. ^i18n/
   7. ^jasmine-legacy/$ [name='jasmine_tests']
   8. ^jasmine/.*?$
   9. ^identity/
   10. ^admin/
   11. ^developer/
   12. ^project/
   13. ^settings/
   14. ^monitoring/
   15. ^auth/
   16. ^static\/(?P<path>.*)$
   17. ^media\/(?P<path>.*)$
   18. ^500/$

The current URL, grafana/index.html, didn't match any of these.
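My guess (unverified) is that Django simply has no route for grafana/, so
the Grafana app has to be exposed by the front-end webserver itself rather
than by openstack_dashboard. A hypothetical Apache fragment (port 3000 is
Grafana's default; the path and backend address are only assumptions):

    <Location "/grafana">
        ProxyPass http://127.0.0.1:3000/
        ProxyPassReverse http://127.0.0.1:3000/
    </Location>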



Any pointer of how to solve it, if any, would be highly appreciated.




Thanks in advance,
Pradip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] URLs are not reported in the endpoint listing

2016-02-02 Thread Pradip Mukhopadhyay
Hello,


I did a stacking recently, noticing a behavior:

keystone --os-username admin --os-password secretadmin --os-tenant-name
admin --os-auth-url http://host:5000/v2.0 endpoint-list

Returns null URLs for public/internal/admin.


+----------------------------------+-----------+-----------+-------------+----------+----------------------------------+
| id                               | region    | publicurl | internalurl | adminurl | service_id                       |
+----------------------------------+-----------+-----------+-------------+----------+----------------------------------+
| 169f7f5090ea442c8ae534d6cd38c484 | RegionOne |           |             |          | 8d30999ba36943359b4e7c4ae4f0a15c |
| 255f7316074d4aecb34b69e3f28309c1 | RegionOne |           |             |          | f26931e1fa43438da4c32fe530f33796 |
+----------------------------------+-----------+-----------+-------------+----------+----------------------------------+


Some of the keystone CLIs are not working, e.g. user-list is working but
not the others, say service-list/role-list. It is returning: The resource
could not be found. (HTTP 404) (Request-ID:
req-b52eace0-b65a-4ba3-afb9-616689a0833e)


Not sure what I have messed up.


Any help would be appreciated.




--pradip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread Julien Danjou
On Mon, Feb 01 2016, Sean Dague wrote:

> The revert is here - https://review.openstack.org/#/c/274703/ - and will
> move it's way through the gate once the tests complete.

So… yeah, but FTR this revert broke again at least Gnocchi's gate. The
telemetry projects had already adapted to the v3 switch last week.

We did not complain about the first break; we fixed our gate silently in
just a few minutes, were done with it, and resumed our daily routines. We
were somewhat happy with moving to a newer API, TBH, even if it was maybe
a bit brutal in the first place.

But then… reverting… Well. I don't think making sluggish, technically
indebted, or dead projects first-class citizens is a good choice. If they
can't adapt, fast enough or at all, then… they should read this as a
signal and figure out what their problem is and solve it?

Food for thought.

But well, don't worry – we already re-fixed our gate. :-)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cross-project] #openstack-meeting-cp

2016-02-02 Thread Thierry Carrez

Tony Breeds wrote:

 I'm not certain who needs to decide this but I think the time has come to
get explicit about which project teams can use the #openstack-meeting-cp room.

The room was created in November after:
  * http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-11-17-20.01.log.html
 (skim from 20:50:29 on)
  * 
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-11-17.log.html#t2015-11-17T21:01:50
  * https://review.openstack.org/#/c/246628/

At that time the discussion said that clear guidelines need to be laid out, and it
was suggested that the TC could "bless" any use of that meeting room.

I request that the TC/Cross-Project team set out those guidelines and document
them.

In [2] Thierry said:
---
The current idea around the meeting-cp channel is that it's limited to
cross-project discussion (so that 1/ there is always a slot available,
facilitating scheduling and 2/ there is only one cross-project
discussion at a time). So it should not be used for more vertical team
meetings.
---

This comes up because of https://review.openstack.org/#/c/271361/1 where it can
be argued that docs is both a cross-project effort and a vertical team.


The original intent was to replace the weekly cross-project meeting with 
a set of ad-hoc meetings in a dedicated channel -- not to have all 
horizontal team meetings there. If we follow that intent, the docs team 
meeting is not a cross-team meeting and doesn't belong there. The docs 
team is a horizontal project team, which means it intersects with 
multiple vertical teams. This is different from a cross-project effort, 
which involves multiple vertical and horizontal teams temporarily 
collaborating on a specific endeavor.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project] Meeting SKIPPED, Tue February 2nd, 21:00 UTC

2016-02-02 Thread Mike Perez
Hi all!

We will not be having a cross-project meeting tomorrow, however, there are
specs that will be discussed at the next meeting [1] on February 9th at 21:00
UTC:

* Common policy for projects [2]
* Query config for UI [3]

All cross-project spec liaisons [4] should be ready to discuss these on
February 9th at 21:00 UTC in #openstack-meeting-cp with the authors of the
specs. Thanks!

[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda
[2] - https://review.openstack.org/#/c/245629/5
[3] - https://review.openstack.org/#/c/242852/2
[4] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Anastasia Kuznetsova to core reviewers

2016-02-02 Thread Anastasia Kuznetsova
Thanks !

I will do my best for the further development of the project and will try to
bring more quality to Mistral.

On Tue, Feb 2, 2016 at 8:13 AM, Renat Akhmerov 
wrote:

> Alright, I think we all agree on that. Anastasia, welcome to the core team!
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 01 Feb 2016, at 14:50, Hardik Parekh 
> wrote:
>
> I'm not a core reviewer, but if I was, I'd definitely vote with +1.
>
> Great job, Anastasia!
>
> On Mon, Feb 1, 2016 at 4:45 PM, Nikolay Makhotkin  > wrote:
>
>> Hi, great work, Anastasia!
>>
>> +1 for you!
>>
>> On Fri, Jan 29, 2016 at 11:27 AM, Lingxian Kong 
>> wrote:
>>
>>> +1 from me, welcome Anastasia!
>>>
>>> On Fri, Jan 29, 2016 at 9:00 PM, Igor Marnat 
>>> wrote:
>>>
 Folks,
 I'm not a core reviewer, but if I was, I'd definitely vote with +1.

 Good job, Anastasia! Keep going!

 Regards,
 Igor Marnat

 On Fri, Jan 29, 2016 at 10:23 AM, Elisha, Moshe (Nokia - IL) <
 moshe.eli...@nokia.com> wrote:

> A very big +1
> 
> From: Renat Akhmerov [rakhme...@mirantis.com]
> Sent: Friday, January 29, 2016 8:13 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [mistral] Promoting Anastasia Kuznetsova to
> core   reviewers
>
> Team,
>
> I’d like to promote Anastasia Kuznetsova to the core team. What she’s
> been doing for 2 years is hard to overestimate. She’s done a huge amount 
> of
> work reviewing code, bugfixing and participating in public discussions
> including our IRC channel #openstack-mistral. She is very reliable,
> diligent and consistent about her work.
>
> Please vote.
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> *Regards!*
>>> *---*
>>> *Lingxian Kong*
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards,
>> Nikolay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Opinion/help for the eavesdrop home page

2016-02-02 Thread Thierry Carrez

Tony Breeds wrote:

 eavesdrop.openstack.org collates all of our meeting information.  The front
page has a list of all the meetings.  The list is long and mostly a free-form
paragraph.  Some time ago we received a review [1] to change the layout to be
more tabular.

Compare [2] with [3]

It'd be great to get some UX comments in that review or for super bonus points
an implementation that makes the eavesdrop home page better.


I find them both equally unreadable :) In [2] it is too busy, and in [3] 
it's difficult to understand the ordering and therefore find a meeting 
in that list (which is the goal after all). A proper column layout would 
be preferable...


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] do not account compute resource of instances in state SHELVED_OFFLOADED

2016-02-02 Thread Sascha Vogt
Hi,

Am 31.01.2016 um 18:57 schrieb John Garbutt:
> We need to make sure we don't have configuration values that change
> the semantic of our API.
> Such things, at a minimum, need to be discoverable, but are best avoided.
I totally agree on that.

>> I think an off-loaded / shelved resource should still count against the
>> quota being used (instance, allocated floating IPs, disk space etc) just
>> not the resources which are no longer consumed (CPU and RAM)
> 
> OK, but that does mean unshelve can fail due to qutoa. Maybe thats OK?
For me that would be ok, just like a boot could fail. Even now I think
an unshelve can fail, because a new scheduling run is triggered and
depending on various things you could get a "no valid host" (e.g. we
have properties on Windows instances to only run them on a host with a
datacenter license. If that host is full (we only have one at the
moment), unshelve shouldn't work, should it?).

> The quota really should live with the project that owns the resource.
> i.e. nova has the "ephemeral" disk quota, but glance should have the
> glance quota.
Oh sure, I didn't mean to have that quota in Nova just to have them in
general "somewhere". When I first started playing around with OpenStack,
I was surprised that there are no quotas for images and ephemeral disks.

What is the general feeling about this? Should I ask on "operators" if
there is someone else who would like to have this added?

Greetings
-Sascha-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread gordon chung


On 02/02/2016 8:45 AM, Jordan Pittier wrote:
On Tue, Feb 2, 2016 at 2:09 PM, gordon chung 
> wrote:
yeah... the revert broke us across all telemetry projects since we fixed 
plugins to adapt to v3. i'm very much for adapting to v3 since it's lingered 
around for years. i think given the time lapsed, if it breaks them, tough.

 This is not how our community works.

sure. currently, i would say we work as: it depends who is broken, then 
possibly tough.

again, as i previously stated, this change should have been publicised (if it 
was, apologies, i missed it). i had to fix many separate projects to adapt to 
this so i've never said it went smoothly or that this was done correctly. i'm 
just mentioning that we as a community have been dragging our feet for years 
(not months). so this is not entirely a keystone fault. i'd much rather we see 
stuff break and light a fire under people because it's clearly evident no one 
was making an effort on this (myself included) so we should all take some blame.

cheers,

--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] URLs are not reported in the endpoint listing

2016-02-02 Thread Pradip Mukhopadhyay
Thanks. openstack commands are working fine.
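For reference, the form that works for me is roughly the following (host
and credentials are illustrative; the domain flags may be needed on a
fresh devstack):

    export OS_IDENTITY_API_VERSION=3
    openstack --os-auth-url http://host:5000/v3 --os-username admin \
        --os-password secretadmin --os-project-name admin \
        --os-user-domain-name Default --os-project-domain-name Default \
        endpoint list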

I am trying with Monasca which is using keystone v3 for auth.


--pradip





On Tue, Feb 2, 2016 at 6:13 PM, Matt Fischer  wrote:

> I've seen similar odd behavior when using the Keystone client to try to
> list endpoints created using the v3 API (via puppet). Try using the
> openstack client  and the v3 endpoint. Be sure to set --os-api-version 3.
> On Feb 2, 2016 3:06 AM, "Pradip Mukhopadhyay" 
> wrote:
>
>> Hello,
>>
>>
>> I did a stacking recently, noticing a behavior:
>>
>> keystone --os-username admin --os-password secretadmin --os-tenant-name
>> admin --os-auth-url http://host:5000/v2.0 endpoint-list
>>
>> Returns null URLs for public/internal/admin.
>>
>>
>>
>> +----------------------------------+-----------+-----------+-------------+----------+----------------------------------+
>> | id                               | region    | publicurl | internalurl | adminurl | service_id                       |
>> +----------------------------------+-----------+-----------+-------------+----------+----------------------------------+
>> | 169f7f5090ea442c8ae534d6cd38c484 | RegionOne |           |             |          | 8d30999ba36943359b4e7c4ae4f0a15c |
>> | 255f7316074d4aecb34b69e3f28309c1 | RegionOne |           |             |          | f26931e1fa43438da4c32fe530f33796 |
>> +----------------------------------+-----------+-----------+-------------+----------+----------------------------------+
>>
>>
>> Some of the keystone CLIs are not working, e.g. user-list is working but
>> not the others, say service-list/role-list. It is returning: The resource
>> could not be found. (HTTP 404) (Request-ID:
>> req-b52eace0-b65a-4ba3-afb9-616689a0833e)
>>
>>
>> Not sure what I have messed up.
>>
>>
>> Any help would be appreciated.
>>
>>
>>
>>
>> --pradip
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] New API Guidelines Ready for Cross Project Review

2016-02-02 Thread michael mccune

hi all,

The following API guideline is ready for cross project review. It will 
be merged on Feb. 9 if there is no further feedback.


1. Must not return server-side tracebacks
https://review.openstack.org/#/c/183599

regards,
mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] PWG - Rolling Upgrades & Updates User Story

2016-02-02 Thread Kenny Johnston
The Product Working Group[1] is seeking additional reviews on a
"Rolling Updates & Upgrades" User Story[2]. The goal of user stories
is to create a cohesive understanding of the intent and impact of
features which may span projects and releases. If you are working on
or interested in improving upgrades please provide comments.

Thanks!

Kenny Johnston
[1] https://wiki.openstack.org/wiki/ProductTeam
[2] https://review.openstack.org/#/c/274969/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][virt] rebuild action not in support-matrix

2016-02-02 Thread Daniel P. Berrange
On Mon, Feb 01, 2016 at 05:04:37PM -0700, Matt Riedemann wrote:
> 
> 
> On 2/1/2016 12:39 PM, Chen CH Ji wrote:
> >Hi
> >   We have been trying to enablement of our CI work for our nova
> >virt layer code ,so we need to configure the tempest cases based on our
> >nova driver capability
> >   I found that rebuild action is not listed in [1] (only talk
> >about rebuild in evacuate), but code [2] seems support virt layer
> >abstraction
> >   can someone point the rebuild action in [1] or it's missing
> >on purpose ? Thanks
> >
> >[1]http://docs.openstack.org/developer/nova/support-matrix.html
> >[2]https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2920
> >
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Only the Ironic driver overrides the rebuild method, otherwise the compute
> manager has a default impl, so it's technically implemented for all virt
> drivers. There is also confusion around rebuild vs evacuate since both
> operations go through the rebuild_instance method in the compute manager,
> they are just separated by the 'recreate' parameter.
> 
> danpb might have reasons for not listing rebuild in the hypervisor support
> matrix - it might have just never been on the original wiki matrix. It'd be
> worth asking him.

The hypervisor matrix just copied the data from the original wiki. It is
certainly not a complete list of all features that are relevant. You could
make the matrix x10 bigger and it still wouldn't cover all interesting facts
across virt drivers. If anyone has things they want shown they should submit
patches

> But at the same time, since there is a default implementation, I'm also not
> sure if it's worth listing separately in the support matrix (but is also
> confusing I suppose to not list it at all).

That there is a default impl is really just an impl detail - if it is an
interesting feature from the user POV it is worth listing IMHO

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding Dmitry Ukhlov to Oslo-Messaging-Core

2016-02-02 Thread Victor Stinner

+1 for me

Le 28/01/2016 16:25, Davanum Srinivas a écrit :

Team,

Dmitry has been focused solely on the Pika Driver this cycle:
http://stackalytics.com/?user_id=dukhlov&metric=commits

Now that we have Pika driver in master, i'd like to invite Dmitry to
continue his work on all of Oslo.Messaging in addition to Pika.
Clearly over time he will expand to other Oslo stuff (hopefully!).
Let's please add him to the Oslo-Messaging-Core in the meantime.

Thanks,
Dims



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] Error while using Rally with Python

2016-02-02 Thread varun bhatnagar
Hi,

I am trying out Rally with Python and I have written a very small snippet
of code to register the deployment

from rally.api import Deployment
deploymentObject = Deployment()
deploymentObject.create({
"type": "ExistingCloud",
"auth_url": "http://192.168.136.145:5000/v2.0;,
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "admin",
"tenant_name": "admin"
},
"https_insecure": "false",
"https_cacert": ""
}, "MyCloud")


but it is throwing an error.

  File
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line
450, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table:
deployments [SQL: u'INSERT INTO deployments (created_at, updated_at, uuid,
parent_uuid, name, started_at, completed_at, config, admin, users,
enum_deployments_status) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)']
[parameters: ('2016-02-02 10:39:46.549138', '2016-02-02 10:39:46.549149',
'7f56f359-4001-42ee-a60f-fbe9023f7dc7', None, 'ExistingCloud', None, None,
'{"endpoint_type": "public", "auth_url": "http://192.168.136.145:5000/v2.0;,
"region_name": "RegionOne", "https_insecure": "false", "admin":
{"username": "admin", "tenant_name": "admin", "password": "admin"}, "type":
"ExistingCloud", "https_cacert": ""}', None, , 'deploy->init')]

Can anyone please help.

BR,
Varun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] shelve/unshelve cluster

2016-02-02 Thread Trevor McKay
And of course, don't forget the sahara specs repo:

https://github.com/openstack/sahara-specs

This is always a good way to propose a new idea.
Create a high-level blueprint, then submit an associated spec
through gerrit in the openstack/sahara-specs project.

Trev


On Mon, 2016-02-01 at 18:59 +0300, Vitaly Gridnev wrote:
> Hi,
> 
> 
> Can you explain more precisely what it means exactly to
> "shelve/unshelve" a cluster?
> 
> On Mon, Feb 1, 2016 at 6:39 PM, Yacine SAÏBI
>  wrote:
> Hello,
> 
> I need to add features "shelve/unshelve" a cluster (sahara
> shelve ).
> 
> Is there anyone who had already worked on this issue ?
> 
> Any suggestions will be welcome.
> 
> Best regards,
> 
> Yacine Saïbi
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Best Regards,
> 
> Vitaly Gridnev
> Mirantis, Inc
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Opinion/help for the eavesdrop home page

2016-02-02 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-02-02 09:39:22 +0100:
> Tony Breeds wrote:
> >  eavesdrop.openstack.org collates all of our meeting information.  The 
> > front
> > page has a list of all the meetings.  The list is long and mostly a 
> > free-form
> > paragraph.  Some time ago we received a review [1] to change the layout to be
> > more tabular.
> >
> > Compare [2] with [3]
> >
> > It'd be great to get some UX comments in that review or for super bonus 
> > points
> > an implementation that makes the eavesdrop home page better.
> 
> I find them both equally unreadable :) In [2] it is too busy, and in [3] 
> it's difficult to understand the ordering and therefore find a meeting 
> in that list (which is the goal after all). A proper column layout would 
> be preferable...
> 

The columns are definitely more visually appealing, but I agree the
ordering from left to right is unusual. How hard would it be to change
that?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-02 Thread Foley, Emma L
Hi Simon,

So collectd acts as a statsd server, and the metrics are aggregated and 
dispatched to the collectd daemon.
Collectd’s write plugins then output the stats to wherever we want them to go.
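For reference, that server mode is enabled with a collectd.conf fragment
along these lines (host/port values are only an illustration):

    LoadPlugin statsd
    <Plugin statsd>
      Host "127.0.0.1"
      Port "8125"
    </Plugin>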

In order to interact with gnocchi using statsd, we require collectd to act as a 
statsd client and dispatch the metrics to gnocchi-statsd service.

Regards,
Emma


From: Simon Pasquier [mailto:spasqu...@mirantis.com]
Sent: Monday, February 1, 2016 9:02 AM
To: OpenStack Development Mailing List (not for usage questions) 
; Foley, Emma L 
Subject: Re: [openstack-dev] [telemetry][ceilometer] New project: 
collectd-ceilometer-plugin



On Fri, Jan 29, 2016 at 6:30 PM, Julien Danjou 
> wrote:
On Fri, Jan 29 2016, Foley, Emma L wrote:

> Supporting statsd would require some more investigation, as collectd's
> statsd plugin supports reading stats from the system, but not writing
> them.

I'm not sure what that means?
https://collectd.org/wiki/index.php/Plugin:StatsD seems to indicate it
can send metrics to a statsd daemon.

Nope that is the opposite: collectd can act as a statsd server. The man page 
[1] is clearer than the collectd Wiki.
Simon

[1] 
https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_statsd

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-02 Thread Alexey Shtokolov
Hi Fuelers!

As you may be aware, since [0] Fuel has implemented a new orchestration
engine [1]
We switched the deployment paradigm from role-based (aka granular) to
task-based and now Fuel can deploy all nodes simultaneously using
cross-node dependencies between deployment tasks.

This feature is experimental in Fuel 8.0 and will be enabled by default for
Fuel 9.0

Allow me to show you the results. We made some benchmarks on our bare metal
lab [2]

Case #1. 3 controllers + 7 computes w/ ceph.
Task-based deployment takes *~38* minutes vs *~1h15m* for granular (*~2*
times faster)
Here and below the deployment time is average time for 10 runs

Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
Task-based deployment takes *~41* minutes vs *~1h32m* for granular (*~2.24*
times faster)



Also we took measurements for Fuel CI test cases. Standard BVT (Master node
+ 3 controllers + 3 computes w/ ceph. All are in qemu VMs on one host)

Fuel CI slaves with *4* cores: *~1.1* times faster
In the case of 4 cores for 7 VMs, the VMs are fighting for CPU resources,
which diminishes the gain of task-based deployment

Fuel CI slaves with *6* cores: *~1.6* times faster

Fuel CI slaves with *12* cores: *~1.7* times faster

You can see additional information and charts in the presentation [3].

[0] -
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082093.html
[1] -
https://specs.openstack.org/openstack/fuel-specs/specs/8.0/task-based-deployment-mvp.html
[2] -  3 x HP ProLiant DL360p Gen8 (XeonE5 6 cores/64GB/SSD)  + 7 x HP
ProLiant DL320p Gen8 (XeonE3 4 cores/8-16GB/HDD)
[3] -
https://docs.google.com/presentation/d/1jZCFZlXHs_VhjtVYS2VuWgdxge5Q6sOMLz4bRLuw7YE

---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [taskflow] Begging for reviews

2016-02-02 Thread Greg Hill
Normally I reserve the begging for IRC, but since the other cores aren't always 
on, I'm taking the shotgun approach.  If you aren't a core on taskflow, then 
you can safely skip the rest of this email.

We have a number of open reviews with a single +2 that need another core 
reviewer to sign off.  Included in that is a blocker for the next release:

https://review.openstack.org/#/c/272748/

That was a bug introduced since the last release with trapping exception 
arguments from tasks that will affect anyone running the worker-based engine.  
The case I ran into was in requests where it did something akin to:

raise RetryException(ConnectionError())

That inner exception could not be converted to JSON so it threw an exception 
and aborted the job.
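To make the failure mode concrete, here is a tiny standalone sketch (the
names are made up; this is not taskflow's actual serialization code):

    import json

    class RetryException(Exception):
        pass

    # the inner exception instance ends up in err.args, and exception
    # objects are not JSON serializable:
    err = RetryException(ValueError("connection reset"))
    json.dumps(err.args)  # raises TypeError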

Also, these are important changes (IMO):

Make the worker stop processing queued tasks when it stops:
https://review.openstack.org/272862

Similarly for the conductor:
https://review.openstack.org/270319

Allow revert methods to have different method signatures from execute and just 
work:
https://review.openstack.org/270853

(depends on https://review.openstack.org/267131 )

Don't revert tasks that were never executed (I'm simplifying):
https://review.openstack.org/273731

And more.  I'm just highlighting the ones that most affect me personally, but 
in general, the review queue is really backlogged.  Any help is appreciated.  
Bring it!

https://review.openstack.org/#/q/status:open+project:openstack/taskflow+branch:master

Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] Allowing 'admin' user and 'admin' project full access to Monasca

2016-02-02 Thread Pradip Mukhopadhyay
Hello,


Following [1], I set up
Monasca (with all other services). I get the 'mini-mon' project; 'monasca'
service; 'monasca-agent' and 'monasca-user' roles; and 'mini-mon' and
'monasca-agent' users.

I can do all expected operations (alarm defn, alarm lising, metric push,
notifications) using mini-mon user and 'mini-mon' project as follows:

#monasca --os-username mini-mon --os-password password --os-project-name
mini-mon  notification-create pradipm_email_try EMAIL 



However, I am getting an error when trying to do the same for 'admin' user
and 'admin' project:

$ monasca --os-username admin --os-password secretadmin --os-project-name
admin  notification-create pradipm_email EMAIL <...>
{
  "description": "Tenant ID is missing a required role to access this
service",
  "title": "Forbidden"
}




How can I make it work for 'admin' user also?



There are the following three files. Do I need to add the 'admin' role in
one of these (a sketch follows the list)?

1. /opt/stack/monasca-api/devstack/files/monasca-api/api-config.yml  //
admin is there (defaultAuthorizedRoles)

2. /opt/stack/monasca-api/java/src/main/resources/api-config.yml  //
admin not there

3. /etc/monasca/api-config.conf  // admin not there in
default_authorized_roles clause
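For example, I guess (untested; section and key names may differ per
install) the change would look like:

    # 3. /etc/monasca/api-config.conf
    [security]
    default_authorized_roles = monasca-user, admin

    # 2. api-config.yml (Java API)
    defaultAuthorizedRoles: [monasca-user, admin]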



Any help will be appreciated.


Thanks,
Pradip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] notification subteam meeting

2016-02-02 Thread Balázs Gibizer
> From: Balázs Gibizer
> Sent: February 01, 2016 13:05
> 
> > -Original Message-
> > From: Balázs Gibizer [mailto:balazs.gibi...@ericsson.com]
> > Sent: February 01, 2016 12:57
> > To: 'OpenStack Development Mailing List (not for usage questions)'
> > Subject: Re: [openstack-dev] [Nova] notification subteam meeting
> >
> > Hi,
> >
> > The next meeting of the nova notification subteam will happen
> > 2016-02-02 Tuesday _17:00_ UTC [1] on #openstack-meeting-alt on
> > freenode
> >
> > Agenda:
> > - Status of the outstanding specs and code reviews
> > - AOB
> >
> > See you there.
> >
> > Cheers,
> > Gibi
> >
> > [1]
> >
> > https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160119T20
> 
> The correct time link is [1]
> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160202T17

And the correct place is #openstack-meeting

Sorry for the mixup.
Cheers,
Gibi

> 
> Sorry,
> Gibi
> 
> > [2] https://wiki.openstack.org/wiki/Meetings/NovaNotification
> >
> >
> >
> >
> __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][ceilometer][all] stable/kilo 2015.1.3 delayed

2016-02-02 Thread Jeremy Stanley
On 2016-01-30 12:54:50 + (+), Dave Walker wrote:
> Unless anyone else objects, I'd be really happy if you are willing to
> scp a handrolled tarball.

Since there have been no objections, I've generated the tarball and
wheel for ceilometer 2015.1.3 on a representative of the same host
types we use for the CI job which normally does this, and replicated
the steps tox would perform with the exception of downgrading to
pip<8 in the virtualenv first.

> I'm happy to help validate it's pristine-state locally here.

Please do! They're in the usual location at
https://tarballs.openstack.org/ceilometer/ and the checksums for
them are as follows...

md5sum:

a705892697b3ca97eaf4ccc39a013257 
/srv/static/tarballs/ceilometer/ceilometer-2015.1.3-py2-none-any.whl
2f6b10ad557dc524d494e7fa0b140e96 
/srv/static/tarballs/ceilometer/ceilometer-2015.1.3.tar.gz

sha256sum:

a84fd2b18f922be4b2aca7c89baf2153f9656ebe6791a9de37f56283f866645c 
/srv/static/tarballs/ceilometer/ceilometer-2015.1.3-py2-none-any.whl
465f8605639b36bbb86c3198a58ef3282e54546bc587c436db34cb613e1f2404 
/srv/static/tarballs/ceilometer/ceilometer-2015.1.3.tar.gz
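For example, to validate locally after downloading both artifacts into the
current directory:

    sha256sum ceilometer-2015.1.3.tar.gz \
        ceilometer-2015.1.3-py2-none-any.whl

and compare the output against the sums above.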

-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Preston L. Bannister
To be clear, I work for EMC, and we are building a backup product for
OpenStack (which at this point is very far along). The primary gap is a
good means of efficiently extracting changed-block information from OpenStack.
About a year ago I worked through the entire Nova/Cinder/libvirt/QEMU
stack, to see what was possible. The changes to QEMU (which have been
in-flight since 2011) looked most promising, but when they would land was
unclear. They are starting to land. This is big news. :)

That is not the end of the problem. Unless the QEMU folk are perfect, there
are likely bugs to be found when the code is put into production. (With
more exercise, the sooner any problems can be identified and addressed.)
OpenStack uses libvirt to talk to QEMU, and libvirt is a fairly thick
abstraction. Likely there will want to be adjustments to libvirt. Bypassing
Nova and chatting with libvirt directly is a bit suspect (but may be
needed). There might be adjustments needed in Nova.

To offer suggestions...

Ekko is an *opinionated* approach to backup. This is not the only way to
solve the problem. I happen to very much like the approach, but as a
*specific* approach, it probably does not belong in Cinder or Nova. (I believe it was
Jay who offered a similar argument about backup more generally.)

(Keep in mind QEMU is not the only hypervisor supported by Nova, even if it
is the majority in use. Would you want to attempt a design that works for all
hypervisors? I would not!  ...at least at this point. Also, last I checked
the Cinder folk were a bit hung up on replication, as finding common
abstractions across storage was not easy. This problem looks similar.)

While wary of bypassing Nova/Cinder, my suggestion would be to be rude in the
beginning, with every intent of becoming civil in the end.

Start by talking to libvirt directly. (There was a bypass mechanism in
libvirt that looked like it might be sufficient.) Break QEMU early, and get
it fixed. :)

When QEMU usage is working, talk to the libvirt folk about *proven* needs,
and what is needed to become civil.

When libvirt is updated (or not), talk to Nova folk about *proven* needs,
and what is needed to become civil. (Perhaps simply awareness, or a small
set of primitives.)

It might take quite a while for the latest QEMU and libvirt to ripple
through into OpenStack distributions. Getting any fixes into QEMU early (or
addressing discovered gaps in needed function) seems like a good thing.

All the above is a sufficiently ambitious project, just by itself. To my
mind, that justifies Ekko as a unique, focused project.






On Mon, Feb 1, 2016 at 4:28 PM, Sam Yaple  wrote:

> On Mon, Feb 1, 2016 at 10:32 PM, Fausto Marzi 
> wrote:
>
>> Hi Preston,
>> Thank you. You saw Fabrizio in Vancouver, I'm Fausto, but it's allright,
>> : P
>>
>> The challenge is interesting. If we want to build a dedicated backup API
>> service (which is always what we wanted to do), probably we need to:
>>
>>
>>- Place out of Nova and Cinder the backup features, as it wouldn't
>>make much sense to me to have a Backup service and also have backups
>>managed independently by Nova and Cinder.
>>
>>
>> That said, I'm not a big fan of the following:
>>
>>- Interacting with the hypervisors and the volumes directly without
>>passing through the Nova and Cinder API.
>>
>> Passing through the api will be a huge issue for extracting data due to
> the sheer volume of data needed (TB through the api is going to kill
> everything!)
>
>>
>>- Adding any additional workload on the compute nodes or block
>>storage nodes.
>>- Computing incremental, compression, encryption is expensive. Have
>>many simultaneous process doing that may lead  to bad behaviours on core
>>services.
>>
>> These are valid concerns, but the alternative is still shipping the raw
> data elsewhere to do this work, and that has its own issue in terms of
> bandwidth.
>
>>
>> My (flexible) thoughts are:
>>
>>- The feature is needed and is brilliant.
>>- We should probably implement the newest feature provided by the
>>hypervisor in Nova and export them from the Nova API.
>>- Create a plugin that is integrated with Freezer to leverage that
>>new features.
>>- Same apply for Cinder.
>>- The VMs and Volumes backup feature is already available by Nova,
>>Cinder and Freezer. It needs to be improved for sure a lot, but do we need
>>to create a new project for a feature that needs to be improved, rather
>>than work with the existing Teams?
>>
>> I disagree with this statement strongly as I have stated before. Nova has
> snapshots. Cinder has snapshots (though they do say cinder-backup). Freezer
> wraps Nova and Cinder. Snapshots are not backups. They are certainly not
> _incremental_ backups. They can have neither compression nor encryption.
> With this in mind, Freezer does not have this "feature" at all. It's not
> that it needs improvement, it simply 

[openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread John Garbutt
Hi,

For all the details see this etherpad:
https://etherpad.openstack.org/p/mitaka-nova-midcycle

Here I am attempting a brief summary, picking out some highlights.
Feel free to reply and add your own details / corrections.

**Process**

Non-priority FFE deadline is this Friday (5th Feb).
Now open for Newton specs.
Please move any proposed Mitaka specs to Newton.

**Priorities**

Cells v2:
It is moving forward, see alaski's great summary:
http://lists.openstack.org/pipermail/openstack-dev/2016-January/084545.html
Mitaka aim is around the new create instance flow.
This will make the cell zero and the API database required.
Need to define a list of instance info that is "valid" in the API
before the instance has been built.

v2.1 API:
API docs updates moving forward, as is the removal of project-ids and
related work to support the scheduler. Discussed policy discovery for
newton, in relation to the live-resize blueprint, alaski to follow up
with keystone folks.

Live-Migrate:
Lots of code to review (see usual etherpad for priority order). Some
details around storage pools need agreeing, but the general approach
seems to have reached consensus. CI is making good progress, as its
finding bugs. Folks signed up for manual testing.
Spoke about the need to look into the token expiry fix discussed at the summit.

Scheduler:
Discussed jay's blueprints. For mitaka we agreed to focus on:
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-classes.html,
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html,
and possibly https://review.openstack.org/253187. The latter is likely
to require a new Scheduler API endpoint within Nova.
Overall there seemed a general agreement on the approach jaypipes
proposed, and happiness that its almost all in written down now in
spec changes.
Discussed the new scheduler plan in relation to IP scheduling for
neutron's routed networks with armax and carl_baldwin. Made a lot of
progress towards better understanding each others requirements
(https://review.openstack.org/#/c/263898/)

priv-sep:
We must have priv-sep in os-brick for mitaka to avoid more upgrade problems.
Go back and do a better job after we fix the burning upgrade issue

os-vif:
Work continues.
Decided it doesn't have to wait for priv-sep.
Agreed base os-vif lib to include ovs, ovs hybrid, and linux-bridge

**Testing**

* Got folks to help get a bleeding edge libvirt test working
* Agreement on the need to improve ironic driver testing
* Agreed on the intent to move forward with Feature Classification
* Reminder about the new CI related review guideline

**Cross Project**

Neutron:
We had armax and carl_baldwin in the room.
Discussed routed networks and the above scheduler impacts.
Spoke about API changes so we have less downtime during live-migrate
when using DVR (or similar tech).
The "get me a network" Neutron API needs to be idempotent. Still need help
with the patch on the Nova side; jaypipes to find someone. Agreed
overall direction.

Cinder:
Joined Cinder meetup via hangout.
Got a heads up around the issues they are having with nested quotas.
A patch broke backwards compatibility with older cinders, so the
patch has been reverted.
Spoke about priv-sep and os-brick, agreed above plan for the brute
force conversion.
Agreed multi-attach should wait till Newton. We have merged the DB
fixes that we need to avoid data corruption. Spoke about using service
version reporting to stop the API allowing multi-attach until the
upgrade has completed. To make remove_export not race, spoke about the
need for every volume attachment having its own separate host
attachment, rather than trying to share connections. Still questions
around upgrade.

**Other**

Spoke about the need for policy discovery via the API, before we add
something like the live-resize blueprint.

Spoke about the architectural aim to not have computes communicate
with each other, and instead have the conductor send messages between
computes. This was in relation to tdurakov's proposal to refactor the
live-migrate workflow.

**Thank You**

Many thanks to Paul Murray and others at HP for hosting us during our
time in Bristol, UK.

Also many thanks to all who made the long trip to Bristol to help
discuss all these upcoming efforts, and start to build consensus
ahead of the Newton design summit in Austin.

Thanks for reading,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin] Cancelling this and the next weekly meeting

2016-02-02 Thread Qiming Teng

Since most of the developers are on vacation during the Chinese New Year
festival. We are skipping the weekly IRC meetings this week (Feb 02
UTC 1300) and next week (Feb 09 UTC 1300).

Best wishes to you all and your family.

Regards,
  Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][virt] rebuild action not insupport-matrix

2016-02-02 Thread Chen CH Ji
ok, thanks, guess even if all virt layers support this 'rebuild', we can
still say it's supported by all hypervisors so others can take it as a
reference, will submit a new patch for it, thanks

-"Daniel P. Berrange" wrote: -
To: "OpenStack Development Mailing List (not for usage questions)"
From: "Daniel P. Berrange"
Date: 02/02/2016 10:21AM
Subject: Re: [openstack-dev] [nova][virt] rebuild action not in support-matrix

[quoted text is identical to Daniel's message earlier in this thread]


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Error while using Rally with Python

2016-02-02 Thread Aleksandr Maretskiy
Hi

try this:

  # It is important to have these lines first!!!
  from rally.common import db
  db.api.db_options.set_defaults(
      db.api.CONF, connection='sqlite:///path/to/your/rally.sqlite')

  # now run your code
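If the table is still missing after pointing at the right database, I assume
(untested) the schema was simply never created; creating it first should
help:

  rally-manage db recreate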


On Tue, Feb 2, 2016 at 12:47 PM, varun bhatnagar 
wrote:

> Hi,
>
> I am trying out Rally with Python and I have written a very small snippet
> of code to register the deployment
>
> from rally.api import Deployment
> deploymentObject = Deployment()
> deploymentObject.create({
> "type": "ExistingCloud",
> "auth_url": "http://192.168.136.145:5000/v2.0;,
> "region_name": "RegionOne",
> "endpoint_type": "public",
> "admin": {
> "username": "admin",
> "password": "admin",
> "tenant_name": "admin"
> },
> "https_insecure": "false",
> "https_cacert": ""
> }, "MyCloud")
>
>
> but it is throwing an error.
>
>   File
> "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line
> 450, in do_execute
> cursor.execute(statement, parameters)
> sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table:
> deployments [SQL: u'INSERT INTO deployments (created_at, updated_at, uuid,
> parent_uuid, name, started_at, completed_at, config, admin, users,
> enum_deployments_status) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)']
> [parameters: ('2016-02-02 10:39:46.549138', '2016-02-02 10:39:46.549149',
> '7f56f359-4001-42ee-a60f-fbe9023f7dc7', None, 'ExistingCloud', None, None,
> '{"endpoint_type": "public", "auth_url": "http://192.168.136.145:5000/v2.0;,
> "region_name": "RegionOne", "https_insecure": "false", "admin":
> {"username": "admin", "tenant_name": "admin", "password": "admin"}, "type":
> "ExistingCloud", "https_cacert": ""}', None,  0x7f8fd25b9ae0, size -1, offset 0 at 0x7f8fd20ac2b0>, 'deploy->init')]
>
> Can anyone please help.
>
> BR,
> Varun
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] New BP for live migration with direct pci passthru

2016-02-02 Thread Xie, Xianshan
Hi, all,
I'd better explain my thoughts in more detail I think.

>1)Bond the direct pci passthru NIC with a virtual NIC.
> This will keep the network connectivity during the live migration.
>2)Unenslave the direct pci passthru NIC
The step 1-2 could be implemented by any way of the following two approaches:
A: embed scripts into the image in advance (depending on DIB),
  which will be run automatically when the server is spawned to bond the vNIC with 
the direct pci passthru NIC.
B: bond the two NICs manually after the server is up (see the sketch below).
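As an illustration of A/B, the in-guest bonding could be something like this
(interface names are hypothetical; eth0 is the virtual NIC, eth1 the
passthru VF):

    ip link add bond0 type bond mode active-backup
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up && ip link set eth0 up && ip link set eth1 up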

>3)Hot-unplug the direct pci passthru NIC
We should add logic for the "live-migration-with-pci-passthru-nic" (temporary
command name) to indirectly invoke detach_interface() to unplug the pci NIC.

>4)Live-migrate guest with the virtual NIC
There are mainly two pieces of logic we should implement, I think:
A: enhance the conductor and scheduler to vote for the target host based on
passthrough-able NIC matching.
B: add logic to invoke the general live migration operation to migrate the
server (at this point the server has only vNICs).

>5)Hot-plug the direct pci passthru NIC on the target host
Add logic to plug the pci NIC with attach_interface().

>6)Enslave the direct pci passthru NIC
A script will be run by udev rules, which are triggered by the kernel whenever
the pci NIC device is attached.
Of course, the script should also be prepared before the migration operation,
by the administrator or by DIB.
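
For illustration only, steps 3-5 could be driven via python-novaclient roughly
as follows (a sketch, not the proposed implementation; the credentials, server
name, port UUID and target host below are all hypothetical):

  from novaclient import client

  # Placeholder credentials/endpoint.
  nova = client.Client('2', 'admin', 'secret', 'admin',
                       'http://controller:5000/v2.0')
  server = nova.servers.find(name='vm-with-pci-nic')

  # Step 3: hot-unplug the direct pci passthru NIC by detaching its port.
  nova.servers.interface_detach(server, 'PCI_PORT_UUID')

  # Step 4: live-migrate the guest, which now has only the virtual NIC.
  nova.servers.live_migrate(server, 'target-host',
                            block_migration=False, disk_over_commit=False)

  # Step 5: hot-plug the direct pci passthru NIC on the target host.
  nova.servers.interface_attach(server, 'PCI_PORT_UUID', None, None)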

Does that make sense?
Any reply will be appreciated, thanks in advance.

Best regards,
xiexs



-Original Message-
From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com] 
Sent: Monday, February 01, 2016 6:25 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][neutron] New BP for live migration with direct 
pci passthru

Hi, all,
  I have registered a new BP about the live migration with a direct pci 
passthru device.
  Could you please help me to review it? Thanks in advance.

The following is the details:
--
SR-IOV has been supported for a long while, in the community's point of view, 
the pci passthru with Macvtap can be live migrated possibly, but the direct pci 
passthru seems hard to implement the migration as the passthru VF is totally 
controlled by the VMs so that some internal states may be unknown by the 
hypervisor.

But we think the direct pci passthru model can also be live migrated with the 
following combination of a series of technology/operation based on the enhanced
Qemu-Geust-Agent(QGA) which has already been supported by nova.
   1)Bond the direct pci passthru NIC with a virtual NIC.
 This will keep the network connectivity during the live migration.
   2)Unenslave the direct pci passthru NIC
   3)Hot-unplug the direct pci passthru NIC
   4)Live-migrate guest with the virtual NIC
   5)Hot-plug the direct pci passthru NIC on the target host
   6)Enslave the direct pci passthru NIC

And more inforation about this concept can refer to [1].
[1]https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
--

Best regards,
Xiexs



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Failed to create network with kuryr driver type

2016-02-02 Thread Antoni Segura Puimedon
On Tue, Feb 2, 2016 at 7:26 AM, Mars Ma  wrote:

> hi Vikas,
>
> ubuntu@kuryr1:~$ docker network create --driver=kuryr --ipam-driver=kuryr
> --subnet 10.10.0.0/16 --gateway 10.10.0.1 --ip-range 10.10.0.0/24 foo
> 68f14fe701710d6f3472d1626c33f0036a145aaa8d81265429e509cf759adfe1
> ubuntu@kuryr1:~$ docker network ls
> NETWORK ID  NAMEDRIVER
> 8cbbcae5d143bridge  bridge
> d64b70ca9b64nonenull
> 65510a4e71dehosthost
> ubuntu@kuryr1:~$ neutron net-list
>
> +--+--+--+
> | id   | name
> | subnets
>|
>
> +--+--+--+
> | 29f8fd92-26f5-42b6-86db-ae09fc77cd91 | public
> | 18dcdefd-741f-4ec2-ba22-60610f071ed7
> 2001:db8::/64   |
> |  |
>| 430a7e0b-ca74-41a1-a6b0-d8958072eee2
> 172.24.4.0/24   |
> | db489255-be4e-439d-babd-20a52a412e25 |
> 68f14fe701710d6f3472d1626c33f0036a145aaa8d81265429e509cf759adfe1 |
> 18e035b2-d26d-488a-b780-9c91eb36b294 10.10.0.0/24|
> | 37475896-5f2a-4308-a4cd-8095cefa3b7c | private
>| 1e69eb3c-08a9-4312-882d-179c1e546aed
> fd54:99bd:a3b8::/64 |
> |  |
>| 08a1b408-3915-4d11-932d-f8c3e9ee4c0f
> 10.0.0.0/24 |
>
> +--+--+--+
> ubuntu@kuryr1:~$ docker run --net=foo -itd --name=container1 busybox
> Unable to find image 'busybox:latest' locally
> latest: Pulling from library/busybox
> 583635769552: Pull complete
> b175bcb79023: Pull complete
> Digest:
> sha256:c1bc9b4bffe665bf014a305cc6cf3bca0e6effeb69d681d7a208ce741dad58e0
> Status: Downloaded newer image for busybox:latest
> 4a7ad6fe18174e3e427e73283325446929daf9af87fb11534f9febc91acca272
> Error response from daemon: Cannot start container
> 4a7ad6fe18174e3e427e73283325446929daf9af87fb11534f9febc91acca272: network
> foo not found
> ubuntu@kuryr1:~$
>
> why doesn't docker list the created network, while neutron can list it?
> any comments?
>

Do you have etcd installed and Docker talking to it?
Which capability scope are you using in kuryr.conf?

>
>
> Thanks & Best regards !
> Mars Ma
> 
>
> On Wed, Jan 20, 2016 at 6:18 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Cheers :) !!
>>
>> On Wed, Jan 20, 2016 at 3:41 PM, Mars Ma  wrote:
>>
>>> Much thanks to @Vikas
>>>
>>> Thanks & Best regards !
>>> Mars Ma
>>> 
>>>
>>> On Wed, Jan 20, 2016 at 5:55 PM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi Mars,

 Your problem will be solved by this patch,
 https://review.openstack.org/#/c/265744/ . It has not been merged yet
 though.


 Thanks
 Vikas


 On Wed, Jan 20, 2016 at 2:39 PM, Mars Ma  wrote:

> Hi Vikas,
>
> I added your fix , and also have problem, but different :
>
> $ neutron subnetpool-list
>
> +--+---+---+---+--+
> | id   | name  | prefixes  |
> default_prefixlen | address_scope_id |
>
> +--+---+---+---+--+
> | 360765af-fd5d-432c-990f-f787600c30ab | kuryr | [u'10.10.1.0/24'] |
> 24|  |
>
> +--+---+---+---+--+
> ubuntu@kuryr1:~$ sudo docker network create -d kuryr
> --ipam-driver=kuryr kuryr
> Error response from daemon: Plugin Error: NetworkDriver.CreateNetwork,
> {
>   "Err": "u'Gateway' is a required property\n\nFailed validating
> u'required' in schema[u'properties'][u'IPv4Data'][u'items']:\n
>  {u'description': u'IPv4 data',\n u'example': {u'AddressSpace':
> u'foo',\n  u'AuxAddresses': {u'db': u'192.168.42.3',\n
>u'web': u'192.168.42.2'},\n
>  u'Gateway': u'192.168.42.1/24',\n  u'Pool': u'
> 192.168.42.0/24'},\n u'properties': {u'AddressSpace':
> {u'description': u'The name of the address space.',\n
> u'example': u'foo',\n
> u'type': u'string'},\n  

[openstack-dev] Adding Ironic node kills devstack configuration

2016-02-02 Thread Pavel Fedin
 Hello again!

 Now I am trying to add an Ironic-driven compute node to an existing devstack.
Below is my local.conf for it. When I run stack.sh, it does something, then
starts to reinitialize projects, users, groups, tenants, etc., effectively
destroying my existing configuration. After that it dies with "cannot connect
to... something", and my system is left in a non-working state, ready for
reinstalling from scratch.
 Actually, two questions:
1. Is it possible to tell stack.sh to keep the old configuration? Rebuilding it
every time is a very tedious task.
2. Why does my compute node wipe everything out? Because I enable the 'key'
(keystone?) service? But ironic forces me to do it (the "key" service is
required by ironic). So how do I install the thing correctly?

--- cut ---
[[local|localrc]]
HOST_IP=10.51.0.5
SERVICE_HOST=10.51.0.4
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ADMIN_PASSWORD=nfv
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

# Services that a compute node runs
ENABLED_SERVICES=n-cpu,rabbit,q-agt

## Open vSwitch provider networking options
PHYSICAL_NETWORK=public
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_INTERFACE=ens33
Q_USE_PROVIDER_NETWORKING=True
Q_L3_ENABLED=False

# Enable Ironic plugin
enable_plugin ironic git://git.openstack.org/openstack/ironic

enable_service key
enable_service glance

# Enable Swift for agent_* drivers
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Swift temp URL's are required for agent_* drivers.
SWIFT_ENABLE_TEMPURLS=True

# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=2
IRONIC_VM_SSH_PORT=22
IRONIC_BAREMETAL_BASIC_OPS=True
IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA=True

# Enable Ironic drivers.
IRONIC_ENABLED_DRIVERS=fake,agent_ssh,agent_ipmitool,pxe_ssh,pxe_ipmitool

# Change this to alter the default driver for nodes created by devstack.
# This driver should be in the enabled list above.
IRONIC_DEPLOY_DRIVER=pxe_ssh

# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1024
IRONIC_VM_SPECS_DISK=10

# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0

# To build your own IPA ramdisk from source, set this to True
IRONIC_BUILD_DEPLOY_RAMDISK=False

VIRT_DRIVER=ironic
--- cut ---

Kind regards,
Pavel Fedin
Senior Engineer
Samsung Electronics Research center Russia




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] CentOS bootstrap image retirement

2016-02-02 Thread Sergey Kulanov
Hi Folks,

I think it's time to declare CentOS bootstrap image retirement.
Since Fuel 8.0 we've switched to using the Ubuntu bootstrap image [1, 2] and
the CentOS one became deprecated, so in Fuel 9.0 we can freely remove it [2].
For now we build the CentOS bootstrap image together with the ISO and then
package it into an rpm [3], so by removing fuel-bootstrap-image [3] we:

* simplify the patching/update story, since we no longer need to
  rebuild/deliver this package on changes in dependent packages [4];

* speed up the ISO build process, since building the CentOS bootstrap image
  takes ~20% of the build-iso time.

We've prepared related blueprint for this change [5] and spec [6]. We also
have some draft patchsets [7]
which passed BVT tests.

So the next steps are:
* get feedback by reviewing the spec/patches;
* remove the related code from the rest of the Fuel projects (fuel-menu,
fuel-devops, fuel-qa).


Thank you


[1]
https://specs.openstack.org/openstack/fuel-specs/specs/7.0/fuel-bootstrap-on-ubuntu.html
[2]
https://specs.openstack.org/openstack/fuel-specs/specs/8.0/dynamically-build-bootstrap.html
[3]
https://github.com/openstack/fuel-main/blob/master/packages/rpm/specs/fuel-bootstrap-image.spec
[4]
https://github.com/openstack/fuel-main/blob/master/bootstrap/module.mk#L12-L50
[5]
https://blueprints.launchpad.net/fuel/+spec/remove-centos-bootstrap-from-fuel
[6] https://review.openstack.org/#/c/273159/
[7]
https://review.openstack.org/#/q/topic:bp/remove-centos-bootstrap-from-fuel


-- 
Sergey
DevOps Engineer
IRC: SergK
Skype: Sergey_kul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon] Projects acting as a domain at the top of the project hierarchy

2016-02-02 Thread Raildo Mascena
See responses inline.

On Mon, Feb 1, 2016 at 6:25 PM Michał Dulko  wrote:

> On 01/30/2016 07:02 PM, Henry Nash wrote:
> > Hi
> >
> > One of the things the keystone team was planning to merge ahead of
> milestone-3 of Mitaka, was “projects acting as a domain”. Up until now,
> domains in keystone have been stored totally separately from projects, even
> though all projects must be owned by a domain (even tenants created via the
> keystone v2 APIs will be owned by a domain, in this case the ‘default’
> domain). All projects in a project hierarchy are always owned by the same
> domain. Keystone supports a number of duplicate concepts (e.g. domain
> assignments, domain tokens) similar to their project equivalents.
> >
> > 
> >
> > I’ve got a couple of questions about the impact of the above:
> >
> > 1) I already know that if we do exactly as described above, the cinder
> gets confused with how it does quotas today - since suddenly there is a new
> parent to what it thought was a top level project (and the permission rules
> it encodes requires the caller to be cloud admin, or admin of the root
> project of a hierarchy).
>
> These problems are there because our nested quotas code is really buggy
> right now. Once Keystone merges a fix allowing non-admin users to fetch
> his own project hierarchy - we should be able to fix it.
>

++ The patch to fix this problem is close to being merged; there are just
minor comments to address: https://review.openstack.org/#/c/270057/ So I
believe that we can fix this bug in cinder in the next few days.

>
> > 2) I’m not sure of the state of nova quotas - and whether it would
> suffer a similar problem?
>
> As far as I know Nova haven't had merged nested quotas code and will not
> do that in Mitaka due to feature freeze.

The nested quotas code in Nova is very similar to the Cinder code, and we are
already fixing the bugs that we found in Cinder. Agreed that it will not be
merged in Mitaka due to the feature freeze.

>
> > 3) Will Horizon get confused by this at all?
> >
> > Depending on the answers to the above, we can go in a couple of
> directions. The cinder issues looks easy to fix (having had a quick look at
> the code) - and if that was the only issue, then that may be fine. If we
> think there may be problems in multiple services, we could, for Mitaka,
> still create the projects acting as domains, but not set the parent_id of
> the current top level projects to point at the new project acting as a
> domain - that way those projects acting as domains remain isolated from the
> hierarchy for now (and essentially invisible to any calling service). Then
> as part of Newton we can provide patches to those services that need
> changing, and then wire up the projects acting as a domain to their
> children.
> >
> > Interested in feedback to the questions above.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Virtual Mid-Cycle meeting next week

2016-02-02 Thread Flavio Percoco

On 29/01/16 09:33 -0430, Flavio Percoco wrote:

Greetings,

As promised (although I promised it yesterday), here's the link to vote for the
days you'd like the Glance Virtual Midcycle to happen. We'll be meeting for just
2 days and for at most 3 hours each day. The 2 days with the most votes are the
ones that will be picked. Since this is such short notice, I'll be actively
pinging you all and I'll close the vote on Monday Feb 1st.

http://doodle.com/poll/eck5hr5d746fdxh6

Thank you all for jumping in with such a short notice,
Flavio

P.S: I'll be sending the details of the meeting out with the invitation.

--
@flaper87
Flavio Percoco



Hey Folks,

So, Let's do this:

I've started putting together an agenda for these 2 days here:

https://etherpad.openstack.org/p/glance-mitaka-virtual-mid-cycle

Please, chime in and comment on what topics you'd like to talk about.

The virtual mid-cycle will be held on the following dates:

Thursday 4th from 15:00 UTC to 17:00 UTC

Friday 5th from 15:00 UTC to 17:00 UTC

The calls will happen on BlueJeans and they're open to everyone. Please, do
reply off-list if you'd like to get a proper invite on your calendar. Otherwise,
you can simply join the link below at the meeting time and meet us there.

Bluejeans link: https://bluejeans.com/1759335191

One more note. The virtual mid-cycle will be recorded and when you join, the
recording will likely have been started already.

Hope to see you all there!
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [Artifacts] [app-catalog] Proposal to move artifacts meeting time

2016-02-02 Thread Flavio Percoco

On 01/02/16 12:43 -0500, Nikhil Komawar wrote:

Hi all,

Please find the request to move the artifacts meeting pushed half an
hour later on the same day in the following review request. I got
positive response from the initial poll. If you happened to miss today's
meeting, please take a moment to vote on the review.

The proposal is to have the weekly 30 mins meetings on
#openstack-meeting-alt on Mondays at 1730 UTC starting next week (i.e.
Feb 8th)

https://review.openstack.org/#/c/274806/


+1 :D

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Glance Core team additions/removals

2016-02-02 Thread Flavio Percoco

On 26/01/16 10:11 -0430, Flavio Percoco wrote:


Greetings,

I'd like us to have one more core cleanup for this cycle:

Additions:

- Kairat Kushaev
- Brian Rosmaita

Both have done amazing reviews either on specs or code and I think they both
would be an awesome addition to the Glance team.

Removals:

- Alexander Tivelkov
- Fei Long Wang

Fei Long and Alexander are both part of the OpenStack community. However, their
focus and time have shifted away from Glance and, as it stands right now, it would
make sense to have them both removed from the core team. This is not related to
their reviews per-se but just prioritization. I'd like to thank both, Alexander
and Fei Long, for their amazing contributions to the team. If you guys want to
come back to Glance, please, do ask. I'm sure the team will be happy to have you
on board again.

To all other members of the community. Please, provide your feedback. Unless
someone objects, the above will be effective next Tuesday.



The following steps were taken:

- Kairat and Brian have been added. Welcome and thanks for joining
- Fei Long was kept as core. Thanks a lot for weighing in and catching up with
Glance.
- Alexander has been removed. I can't stress enough how sad it is to see Alex go. I
hope Alex will be able to rejoin in the not-so-distant future.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread Carl Baldwin
On Tue, Feb 2, 2016 at 4:07 AM, John Garbutt  wrote:
> Scheduler:
> Discussed jay's blueprints. For mitaka we agreed to focus on:
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-classes.html,
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html,
> and possibly https://review.openstack.org/253187. The latter is likely
> to require a new Scheduler API endpoint within Nova.
> Overall there seemed a general agreement on the approach jaypipes
> proposed, and happiness that its almost all in written down now in
> spec changes.
> Discussed the new scheduler plan in relation to IP scheduling for
> neutron's routed networks with armax and carl_baldwin. Made a lot of
> progress towards better understanding each others requirements
> (https://review.openstack.org/#/c/263898/)

This was a highlight for me and made the trip well worth it.  There
was a lot of great discussion that I think will be the start of a
great collaboration.  Nova were great hosts and very gracious.

> Neutron:
> We had armax and carl_baldwin in the room.

It was a pleasure to be in attendance.  Thank you.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Mid-cycle sprint debrief

2016-02-02 Thread Tim Hinrichs
Hi all,

TL;DR.  We had a great midcycle sprint.  If you want to help us move
Congress toward its new distributed architecture, there is a list of items
you can help with below.


1. We had a productive mid-cycle sprint last week!  Here were the topics we
covered...

- Design discussions about implementing the distributed architecture using
oslo-messaging.

- Discussion about integrating Monasca so that when people write policy
Congress can use Monasca alarms to push information to us.

- Discussion with Su Zhang about his experiences using Congress at Symantec.

- Code sprint aimed at migrating off of our current, in-process message bus
to oslo-messaging.

For details about the discussions, check out the etherpad...
https://etherpad.openstack.org/p/congress-mitaka-sprint

2. Moving forward in the short term, we are focusing on migrating from our
existing in-process message bus (DSE) to a small wrapper around
oslo-messaging (DSE2); a rough sketch of the underlying oslo-messaging
pattern follows the notes below.

- As we migrate from DSE to DSE2, we're leaving the mainline code in place
and minimizing the changes so that it runs on both the old arch and the new
arch.  We use the flag 'distributed_architecture' to signal which version
we are running.

- Ideally the tests will all continue to function without modification in
both the old and new architectures.  But for those test files that don't
pass, we are copying them to congress/tests2 and modifying them there.

- We have disabled a few tests temporarily and marked them with TODO(dse2)
and an explanation as to why they are disabled so we can easily grep for
them later.

- tox -enew_arch will (soon) run all the tests in congress/tests2.
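
To make the DSE2 direction above concrete, the wrapper boils down to the
standard oslo-messaging RPC server/client pattern, roughly as in this minimal
sketch (the topic, server and endpoint names are made up for illustration, and
the server and client halves would normally live in separate processes):

  from oslo_config import cfg
  import oslo_messaging as messaging

  # Assumes cfg.CONF has already been populated with transport settings.
  transport = messaging.get_transport(cfg.CONF)

  class TableEndpoint(object):
      # Hypothetical endpoint standing in for a Congress data service.
      def get_snapshot(self, ctxt, table):
          return [('row-1',), ('row-2',)]

  # Server side: a data service listens on its own topic.
  target = messaging.Target(topic='congress-dse', server='node-1')
  rpc_server = messaging.get_rpc_server(transport, target, [TableEndpoint()],
                                        executor='blocking')
  rpc_server.start()

  # Client side: peers call over the bus instead of the in-process DSE.
  client = messaging.RPCClient(transport,
                               messaging.Target(topic='congress-dse'))
  rows = client.call({}, 'get_snapshot', table='ports')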

3. For those of you looking to help out, here are a few items you can sign
up for.

3.1. Work on porting congress/tests/test_congress.py to the new distributed
arch.  See congress/tests2/test_dse2.py for an example.  In fact, there may
be tests commented out in tests2/dse2/test_dse2.py that we should
re-enable/port.
https://bugs.launchpad.net/congress/+bug/1541008

3.2. Work on porting the remaining API models to the new arch.  See my
recent changesets for the basic idea.
https://review.openstack.org/#/c/274957/

https://bugs.launchpad.net/congress/+bug/1541001
https://bugs.launchpad.net/congress/+bug/1541002
https://bugs.launchpad.net/congress/+bug/1541003
https://bugs.launchpad.net/congress/+bug/1541004

3.3. Create a non-voting gate job for tox -enew_arch
https://bugs.launchpad.net/congress/+bug/1540990

3.4. Try out the scripts/start_process.py script to see how we would use it
for the new arch.  Eventually we'll want a non-voting job in the gate that
runs all the tempest tests on the new architecture.
https://bugs.launchpad.net/congress/+bug/1541019

Questions/comments?
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Ben Swartzlander
Rodrigo (ganso on IRC) joined the Manila project back in the Kilo 
release and has been working on share migration (an important core 
feature) for the last 2 releases. Since Tokyo he has dedicated himself 
to reviews and community participation. I would like to nominate him to 
join the Manila core reviewer team.


-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #68

2016-02-02 Thread Emilien Macchi
We did our meeting, you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-02-02-14.59.log.html

Thanks!

On Mon, Feb 1, 2016 at 11:26 AM, Emilien Macchi  wrote:

> Hey, we'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
>
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
>
> As usual, free free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160201
>
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.
>
> See you there,
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-02 Thread Adrian Otto
Thanks everyone for your votes. Welcome Ton and Egor to the core team!

Regards,

Adrian

> On Feb 1, 2016, at 7:58 AM, Adrian Otto  wrote:
> 
> Magnum Core Team,
> 
> I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers. 
> Please respond with your votes.
> 
> Thanks,
> 
> Adrian Otto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.concurrency 3.4.0 release (mitaka)

2016-02-02 Thread davanum
We are amped to announce the release of:

oslo.concurrency 3.4.0: Oslo Concurrency library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.


Changes in oslo.concurrency 3.3.0..3.4.0


e9d00d1 Update translation setup
b2e7856 Add prlimit parameter to execute()
70118c2 Updated from global requirements
e168906 Updated from global requirements
3eed329 Updated from global requirements
4b37f13 Updated from global requirements
fec46db Imported Translations from Zanata
ca9801d Updated from global requirements

Diffstat (except docs and test files)
-

.../en_GB/LC_MESSAGES/oslo.concurrency-log-info.po |  27 -
.../locale/en_GB/LC_MESSAGES/oslo.concurrency.po   |  99 ---
.../es/LC_MESSAGES/oslo.concurrency-log-info.po|  27 -
.../locale/es/LC_MESSAGES/oslo.concurrency.po  |  99 ---
.../fr/LC_MESSAGES/oslo.concurrency-log-info.po|  27 -
.../locale/fr/LC_MESSAGES/oslo.concurrency.po  |  99 ---
.../locale/oslo.concurrency-log-info.pot   |  25 -
oslo.concurrency/locale/oslo.concurrency.pot   |  95 --
oslo_concurrency/_i18n.py  |   2 +-
.../en_GB/LC_MESSAGES/oslo_concurrency-log-info.po |  27 +
.../locale/en_GB/LC_MESSAGES/oslo_concurrency.po   | 100 +++
.../es/LC_MESSAGES/oslo_concurrency-log-info.po|  27 +
.../locale/es/LC_MESSAGES/oslo_concurrency.po  | 100 +++
.../fr/LC_MESSAGES/oslo_concurrency-log-info.po|  27 +
.../locale/fr/LC_MESSAGES/oslo_concurrency.po  | 100 +++
.../locale/oslo_concurrency-log-info.pot   |  25 +
oslo_concurrency/locale/oslo_concurrency.pot   |  95 ++
oslo_concurrency/prlimit.py|  89 +
oslo_concurrency/processutils.py   |  52 ++
requirements.txt   |  16 +--
setup.cfg  |  12 +--
test-requirements.txt  |  10 +-
23 files changed, 771 insertions(+), 518 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index ae213e5..e4ccc01 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,8 +5,8 @@
-pbr>=1.6
-Babel>=1.3
-enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3'
-iso8601>=0.1.9
-oslo.config>=3.2.0 # Apache-2.0
-oslo.i18n>=1.5.0 # Apache-2.0
-oslo.utils>=3.2.0 # Apache-2.0
-six>=1.9.0
+pbr>=1.6 # Apache-2.0
+Babel>=1.3 # BSD
+enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' 
# BSD
+iso8601>=0.1.9 # MIT
+oslo.config>=3.4.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
+oslo.utils>=3.4.0 # Apache-2.0
+six>=1.9.0 # MIT
diff --git a/test-requirements.txt b/test-requirements.txt
index d7b0033..b1a22fa 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7,3 +7,3 @@ oslotest>=1.10.0 # Apache-2.0
-coverage>=3.6
-futures>=3.0;python_version=='2.7' or python_version=='2.6'
-fixtures>=1.3.1
+coverage>=3.6 # Apache-2.0
+futures>=3.0;python_version=='2.7' or python_version=='2.6' # BSD
+fixtures>=1.3.1 # Apache-2.0/BSD
@@ -13 +13 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
@@ -15 +15 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-eventlet>=0.17.4
+eventlet!=0.18.0,>=0.17.4 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.context 2.0.0 release (mitaka)

2016-02-02 Thread davanum
We are thrilled to announce the release of:

oslo.context 2.0.0: Oslo Context library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.context

With package available at:

https://pypi.python.org/pypi/oslo.context

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.context

For more details, please see below.


Changes in oslo.context 1.0.1..2.0.0


b621d4e Improve Context docs with example syntax
22ad2c2 Define method for oslo.log context parameters
800208d Add additional unit tests
4a8a1df Fix request_id type on Python 3: use text (Unicode)
ebf47d1 Updated from global requirements
1f3719d Provide a helper to load a context from environment

Diffstat (except docs and test files)
-

.gitignore |   2 +-
oslo_context/context.py|  45 ++-
requirements.txt   |   4 +-
test-requirements.txt  |   4 +-
12 files changed, 542 insertions(+), 12 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 08b0f01..b2608e4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,2 +5,2 @@
-pbr>=1.6
-Babel>=1.3
+pbr>=1.6 # Apache-2.0
+Babel>=1.3 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index 4ea068e..fe145a5 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +7 @@ oslotest>=1.10.0 # Apache-2.0
-coverage>=3.6
+coverage>=3.6 # Apache-2.0
@@ -11 +11 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread Matt Riedemann



On 2/2/2016 4:07 AM, John Garbutt wrote:

Hi,

For all the details see this etherpad:
https://etherpad.openstack.org/p/mitaka-nova-midcycle

Here I am attempting a brief summary, picking out some highlights.
Feel free to reply and add your own details / corrections.

**Process**

Non-priority FFE deadline is this Friday (5th Feb).
Now open for Newton specs.
Please move any proposed Mitaka specs to Newton.

**Priorities**

Cells v2:
It is moving forward, see alaski's great summary:
http://lists.openstack.org/pipermail/openstack-dev/2016-January/084545.html
Mitaka aim is around the new create instance flow.
This will make the cell zero and the API database required.
Need to define a list of instance info that is "valid" in the API
before the instance has been built.

v2.1 API:
API docs updates moving forward, as is the removal of project-ids and
related work to support the scheduler. Discussed policy discovery for
newton, in relation to the live-resize blueprint, alaski to follow up
with keystone folks.

Live-Migrate:
Lots of code to review (see usual etherpad for priority order). Some
details around storage pools need agreeing, but the general approach
seems to have reached consensus. CI is making good progress, as it's
finding bugs. Folks signed up for manual testing.
Spoke about the need to look into the token expiry fix discussed at the summit.

Scheduler:
Discussed jay's blueprints. For mitaka we agreed to focus on:
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-classes.html,
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html,
and possibly https://review.openstack.org/253187. The latter is likely
to require a new Scheduler API endpoint within Nova.
Overall there seemed a general agreement on the approach jaypipes
proposed, and happiness that its almost all in written down now in
spec changes.
Discussed the new scheduler plan in relation to IP scheduling for
neutron's routed networks with armax and carl_baldwin. Made a lot of
progress towards better understanding each others requirements
(https://review.openstack.org/#/c/263898/)

priv-sep:
We must have priv-sep in os-brick for mitaka to avoid more upgrade problems.
Go back and do a better job after we fix the burning upgrade issue

os-vif:
Work continues.
Decided it doesn't have to wait for priv-sep.
Agreed base os-vif lib to include ovs, ovs hybrid, and linux-bridge

**Testing**

* Got folks to help get a bleeding edge libvirt test working
* Agreement on the need to improve ironic driver testing
* Agreed on the intent to move forward with Feature Classification
* Reminder about the new CI related review guideline

**Cross Project**

Neutron:
We had armax and carl_baldwin in the room.
Discussed routed networks and the above scheduler impacts.
Spoke about API changes so we have less downtime during live-migrate
when using DVR (or similar tech).
Get me a network Neutron API needs to be idempotent. Still need help
with the patch on the Nova side, jaypipes to find someone. Agreed
overall direction.

Cinder:
Joined Cinder meetup via hangout.
Got a heads up around the issues they are having with nested quotas.
A patch broke backwards compatibility with older cinders, so the
patch has been reverted.
Spoke about priv-sep and os-brick, agreed above plan for the brute
force conversion.
Agreed multi-attach should wait till Newton. We have merged the DB
fixes that we need to avoid data corruption. Spoke about using service
version reporting to stop the API allowing multi-attach until the
upgrade has completed. To make remove_export not race, spoke about the
need for every volume attachment having its own separate host
attachment, rather than trying to share connections. Still questions
around upgrade.

**Other**

Spoke about the need for policy discovery via the API, before we add
something like the live-resize blueprint.


While the policy discovery discussion was mostly prompted by the live 
resize spec, I think it also applies to multi-attach since that's 
backend specific and operators are likely to disable the API if they 
aren't supporting it, e.g. Rackspace since the xenapi driver doesn't 
implement multi-attach.




Spoke about the architectural aim to not have computes communicate
with each other, and instead have the conductor send messages between
computes. This was in relation to tdurakov's proposal to refactor the
live-migrate workflow.

**Thank You**

Many thanks for Paul Murray and others at HP for hosting up during our
time in Bristol, UK.

Also many thanks to all who made the long trip to Bristol to help
discuss all these up coming efforts, and start to build consensus
ahead of the Newton design summit in Austin.

Thanks for reading,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] Google Sumer of Code 2016 - Call for ideas and mentors (deadline 19/02/2016)

2016-02-02 Thread Davanum Srinivas
Victoria,

Thanks for doing this. I've signed up as a Mentor at
https://wiki.openstack.org/wiki/GSoC2016#Mentors

Folks,

This is a great opportunity for students to get to know your
project(s), so please sign up!

Thanks,
Dims


On Mon, Feb 1, 2016 at 1:12 PM, Victoria Martínez de la Cruz
 wrote:
> Hi all,
>
> Google Summer of Code (GSoC) is a program that matches mentoring
> organizations with college and university student developers who are paid to
> write open source code. It has been around 2005 and we had been accepted as
> a mentor organization in only one opportunity (2014) having a great outcome
> for both interns and for our community. We expect to be able to join this
> year again, but for that, we will need your help.
>
> Mentors
>
> We need to submit our application as a mentoring organization, but for that,
> we need to have a clear outline of what different projects we have for
> interns to work on.
>
> *** The deadline for mentoring organizations applications is 19/02/2016. ***
>
> If you are interested in mentoring but you have doubts about it, please feel
> free to reach us here or on #openstack-gsoc. We will be happy to reply any
> doubt you may have about mentoring for this internship. Also, you can check
> out this guide [0].
>
> If you are already convinced that you want to join us as a mentor for this
> round, add your name in the OpenStack Google Summer of Code 2016 wiki page
> [1] and add your project ideas in [2]. Make sure you leave your contact
> information in the OpenStack GSoC 2016 wiki and that you add all the
> important details about the project idea. Also reach us if there is
> something you are not certain about.
>
> Interns
>
> While we don't know yet if we are going to make it as a mentoring
> organization for this round, if you want to join us as an intern and you
> want to help OpenStack to get selected as a mentoring organization, you can
> help us proposing different tasks for the various projects we have in our
> ecosystem.
>
> For your inspiration, you can check out past projects in [3] and [4].
>
> Looking forward to see GSoC happening again in our community!
>
> Thanks,
>
> Victoria
>
> [0] http://en.flossmanuals.net/gsocmentoring/
> [1] https://wiki.openstack.org/wiki/GSoC2016
> [2] https://wiki.openstack.org/wiki/Internship_ideas
> [3] https://wiki.openstack.org/wiki/GSoC2014
> [4] https://wiki.openstack.org/wiki/GSoC2015



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread Balázs Gibizer
> From: John Garbutt [mailto:j...@johngarbutt.com]
> Sent: February 02, 2016 12:08
> 
> Hi,
> 
> For all the details see this etherpad:
> https://etherpad.openstack.org/p/mitaka-nova-midcycle
> 
> Here I am attempting a brief summary, picking out some highlights.
> Feel free to reply and add your own details / corrections.
> 
> **Process**
> 
> Non-priority FFE deadline is this Friday (5th Feb).
> Now open for Newton specs.
> Please move any proposed Mitaka specs to Newton.
> 
> **Priorities**
> 
> Cells v2:
> It is moving forward, see alaski's great summary:
> http://lists.openstack.org/pipermail/openstack-dev/2016-
> January/084545.html
> Mitaka aim is around the new create instance flow.
> This will make the cell zero and the API database required.
> Need to define a list of instance info that is "valid" in the API
> before the instance has been built.
> 
> v2.1 API:
> API docs updates moving forward, as is the removal of project-ids and
> related work to support the scheduler. Discussed policy discovery for
> newton, in relation to the live-resize blueprint, alaski to follow up
> with keystone folks.
> 
> Live-Migrate:
> Lots of code to review (see usual etherpad for priority order). Some
> details around storage pools need agreeing, but the general approach
> seems to have reached consensus. CI is making good progress, as it's
> finding bugs. Folks signed up for manual testing.
> Spoke about the need to look into the token expiry fix discussed at the
> summit.
> 
> Scheduler:
> Discussed jay's blueprints. For mitaka we agreed to focus on:
> http://specs.openstack.org/openstack/nova-
> specs/specs/mitaka/approved/resource-classes.html,
> http://specs.openstack.org/openstack/nova-
> specs/specs/mitaka/approved/resource-providers.html,
> and possibly https://review.openstack.org/253187. The latter is likely
> to require a new Scheduler API endpoint within Nova.
> Overall there seemed a general agreement on the approach jaypipes
> proposed, and happiness that its almost all in written down now in
> spec changes.
> Discussed the new scheduler plan in relation to IP scheduling for
> neutron's routed networks with armax and carl_baldwin. Made a lot of
> progress towards better understanding each others requirements
> (https://review.openstack.org/#/c/263898/)
> 
> priv-sep:
> We must have priv-sep in os-brick for mitaka to avoid more upgrade
> problems.
> Go back and do a better job after we fix the burning upgrade issue
> 
> os-vif:
> Work continues.
> Decided it doesn't have to wait for priv-sep.
> Agreed base os-vif lib to include ovs, ovs hybrid, and linux-bridge
> 
> **Testing**
> 
> * Got folks to help get a bleeding edge libvirt test working
> * Agreement on the need to improve ironic driver testing
> * Agreed on the intent to move forward with Feature Classification
> * Reminder about the new CI related review guideline
> 
> **Cross Project**
> 
> Neutron:
> We had armax and carl_baldwin in the room.
> Discussed routed networks and the above scheduler impacts.
> Spoke about API changes so we have less downtime during live-migrate
> when using DVR (or similar tech).
> Get me a network Neutron API needs to be idempotent. Still need help
> with the patch on the Nova side, jaypipes to find someone. Agreed
> overall direction.
> 
> Cinder:
> Joined Cinder meetup via hangout.
> Got a heads up around the issues they are having with nested quotas.
> A patch broke backwards compatibility with older cinders, so the
> patch has been reverted.
> Spoke about priv-sep and os-brick, agreed above plan for the brute
> force conversion.
> Agreed multi-attach should wait till Newton. We have merged the DB
> fixes that we need to avoid data corruption. Spoke about using service
> version reporting to stop the API allowing multi-attach until the
> upgrade has completed. To make remove_export not race, spoke about the
> need for every volume attachment having its own separate host
> attachment, rather than trying to share connections. Still questions
> around upgrade.
> 
> **Other**
> 
> Spoke about the need for policy discovery via the API, before we add
> something like the live-resize blueprint.
> 
> Spoke about the architectural aim to not have computes communicate
> with each other, and instead have the conductor send messages between
> computes. This was in relation to tdurakov's proposal to refactor the
> live-migrate workflow.

We spoke about versioned notifications as well.
As the versioned-notification infrastructure [1] has landed, we agreed that
we will only allow new notifications with versioned payloads in nova from now
on, and we will continue the work to transform the existing notifications to
the new format in Newton. All the details are on the notification etherpad [2]

Cheers,
Gibi

[1] https://review.openstack.org/#/q/topic:bp/versioned-notification-api 
[2] https://etherpad.openstack.org/p/nova-versioned-notifications 

> 
> **Thank You**
> 
> Many thanks for Paul Murray and others at HP for 

[openstack-dev] Announcing a simple new tool: git-restack

2016-02-02 Thread James E. Blair
Hi,

I'm pleased to announce a new and very simple tool to help with managing
large patch series with our Gerrit workflow.

In our workflow we often find it necessary to create a series of
dependent changes in order to make a larger change in manageable chunks,
or because we have a series of related changes.  Because these are part
of larger efforts, it often seems like they are even more likely to have
to go through many revisions before they are finally merged.  Each step
along the way reviewers look at the patches in Gerrit and leave
comments.  As a reviewer, I rely heavily on looking at the difference
between patchsets to see how the series evolves over time.

Occasionally we also find it necessary to re-order the patch series, or
to include or exclude a particular patch from the series.  Of course the
interactive git rebase command makes this easy -- but in order to use
it, you need to supply a base upon which to "rebase".  A simple choice
would be to rebase the series on master, however, that creates
difficulties for reviewers if master has moved on since the series was
begun.  It is very difficult to see any actual intended changes between
different patch sets when they have different bases which include
unrelated changes.

The best thing to do to make it easy for reviewers (and yourself as you
try to follow your own changes) is to keep the same "base" for the
entire patch series even as you "rebase" it.  If you know how long your
patch series is, you can simply run "git rebase -i HEAD~N" where N is
the patch series depth.  But if you're like me and have trouble with
numbers other than 0 and 1, then you'll like this new command.

The git-restack command is very simple -- it looks for the most recent
commit that is both in your current branch history and in the branch it
was based on.  It uses that as the base for an interactive rebase
command.  This means that any time you are editing a patch series, you
can simply run:

  git restack

and you will be placed in an interactive rebase session with all of the
commits in that patch series staged.  Git-restack is somewhat
branch-aware as well -- it will read a .gitreview file to find the
remote branch to compare against.  If your stack was based on a
different branch, simply run:

  git restack <branch>

and it will use that branch for comparison instead.

Git-restack is on pypi so you can install it with:

  pip install git-restack

The source code is based heavily on git-review and is in Gerrit under
openstack-infra/git-restack.

https://pypi.python.org/pypi/git-restack/1.0.0
https://git.openstack.org/cgit/openstack-infra/git-restack

I hope you find this useful,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Sturdevant, Mark
+1  Rodrigo is the obvious choice and a needed addition.


From: Ben Swartzlander [b...@swartzlander.org]
Sent: Tuesday, February 02, 2016 9:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core
reviewer team

Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
release and has been working on share migration (an important core
feature) for the last 2 releases. Since Tokyo he has dedicated himself
to reviews and community participation. I would like to nominate him to
join the Manila core reviewer team.

-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Logging and traceback at same time.

2016-02-02 Thread Khayam Gondal
Is there a way to log the information and the traceback at the same time?
Currently I am doing it like this.

 

 

LOG.error(_LE('Record already exists: %(exception)s '

 '\n %(traceback)'),

   {'exception': e1},

   {'traceback': traceback.print_stack()}).

 

Let me know if this is the correct way.

Regards

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Logging and traceback at same time.

2016-02-02 Thread 王华
You can use LOG.exception.
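
For instance, a minimal sketch (assuming the usual oslo.log setup;
create_record() and the _LE import are placeholders for your project's own
code):

  from oslo_log import log as logging
  # _LE would come from your project's i18n module, e.g.:
  # from myproject.i18n import _LE

  LOG = logging.getLogger(__name__)

  try:
      create_record(record)  # hypothetical operation that may raise
  except Exception:
      # LOG.exception logs at ERROR level and appends the current
      # traceback automatically, so no manual traceback handling is needed.
      LOG.exception(_LE('Record already exists'))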

Regards,
Wanghua

On Wed, Feb 3, 2016 at 2:28 PM, Khayam Gondal 
wrote:

> Is there a way to log the information and the traceback at the same
> time? Currently I am doing it like this.
>
>
>
>
>
> LOG.error(_LE('Record already exists: %(exception)s '
>
>  '\n %(traceback)'),
>
>{'exception': e1},
>
>{'traceback': traceback.print_stack()}).
>
>
>
> Let me know if this is the correct way.
>
> Regards
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Silvan Kaiser
+1 very helpful, absolutely

2016-02-02 21:43 GMT+01:00 Ravi, Goutham :

> +1 ganso's been extremely diligent and helpful with his reviews! He's a
> great addition!
>
> From: Erlon Cruz 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, February 2, 2016 at 2:39 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core
> reviewer team
>
> +1
> Great addition. I'm not active in Manila, but we work in the same team in
> our company and I see he is doing great work! Well deserved!
>
> On Tue, Feb 2, 2016 at 4:43 PM, Valeriy Ponomaryov <
> vponomar...@mirantis.com> wrote:
>
>> Rodrigo is the first person in queue for taking such responsibility.
>> +2
>>
>> On Tue, Feb 2, 2016 at 8:14 PM, Dustin Schoenbrun 
>> wrote:
>>
>>> +1 He's been a great resource to the community and the nomination is
>>> very well deserved!
>>>
>>>
>>> On 02/02/2016 12:58 PM, Knight, Clinton wrote:
>>>
 +1  Great addition.  Welcome, Rodrigo!

 Clinton


 On 2/2/16, 12:30 PM, "Ben Swartzlander"  wrote:

 Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
> release and has been working on share migration (an important core
> feature) for the last 2 releases. Since Tokyo he has dedicated himself
> to reviews and community participation. I would like to nominate him to
> join the Manila core reviewer team.
>
> -Ben Swartzlander
> Manila PTL
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> --
>>> Dustin Schoenbrun
>>> OpenStack Quality Engineer
>>> Red Hat, Inc.
>>> dscho...@redhat.com
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Kind Regards
>> Valeriy Ponomaryov
>> www.mirantis.com
>> vponomar...@mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

-- 

--
*Quobyte* GmbH
Hardenbergplatz 2 - 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]Question regarding pep8 test

2016-02-02 Thread Zhipeng Huang
Hi Khayam,

We try to keep every communication on the mailing list for transparency :)
So I copied your email here.
-

Zhiyan, please check to see what happened.

Sent from HUAWEI AnyOffice

-

Hi Chaoyi,

When I run the local *pep8* test using *tox -e pep8*, it always shows an error
regarding

*./tricircle/tests/unit/network/test_plugin.py:268:1: D102  Missing
docstring in public method*

but on Jenkins no such errors are shown. Due to this contradictory
behavior, I am unable to test *pep8* locally.

Maybe there is something I am missing or using incorrectly. Could you
guide me through it?



Regards

Khayam

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Prooduct Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-02 Thread Eli Qiao

hi
When I try to run magnum service-list to list all services (it seems we now
only have the m-cond service), if m-cond is down (which means no conductor at
all),

the API won't respond and will return a timeout error.

taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
ERROR: Timed out waiting for a reply to message ID 
fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)


I debugged further and compared with nova service-list; nova will respond
and report that the conductor is down.


Digging deeper, I found this in the magnum-api boot-up code:

  # Enable object backporting via the conductor
  base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()

so in magnum_service api code

return objects.MagnumService.list(context, limit, marker, sort_key,
  sort_dir)

requires magnum-conductor to access the DB; but with no magnum-conductor
at all, we get a 500 error.

(nova-api doesn't set indirection_api, so nova-api can access the DB directly)

My question is:

1) Is it by design that we don't allow magnum-api to access the DB
directly?
2) If 1) is by design, then `magnum service-list` won't work, and the
error message should be improved to something like "magnum service is down,
please check that the magnum conductor is alive".


What do you think?

P.S. I tested commenting out this line:

  # base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()

magnum-api will respond but fails to create a bay, which means the api
service has read access but cannot write at all (all db writes happen in
the conductor layer).


--
Best Regards, Eli(Li Yong)Qiao
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Thomas Bechtold
On Tue, Feb 02, 2016 at 12:30:44PM -0500, Ben Swartzlander wrote:
> Rodrigo (ganso on IRC) joined the Manila project back in the Kilo release
> and has been working on share migration (an important core feature) for the
> last 2 releases. Since Tokyo he has dedicated himself to reviews and
> community participation. I would like to nominate him to join the Manila
> core reviewer team.

+2. Keep up the good work, Rodrigo!

-- 
Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] weekly meeting of Feb.3rd

2016-02-02 Thread joehuang
Hi,

The Chinese New Year is approaching. Let's have a weekly meeting with the 
agenda below. After this meeting, we'll cancel next week's meeting and 
resume on Feb. 17.

Agenda:
# Progress of To-do list review: https://etherpad.openstack.org/p/TricircleToDo
# Query optimization.
# Quota management.

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-02 Thread rzang
What advantage can we get from putting multiple drivers into one flavor over 
strictly limiting each flavor to one driver (or whatever it is called)?


Thanks,
Rui


-- Original --
From: "Kevin Benton"
Send time: Wednesday, Feb 3, 2016 8:55 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends




Choosing from multiple drivers for the same flavor is scheduling. I didn't mean 
automatically selecting other flavors. 
 On Feb 2, 2016 17:53, "Eichberger, German"  wrote:
Not that you could call it scheduling. The intent was that the user could pick 
the best flavor for his task (e.g. a gold router as opposed to a silver one). 
The system then would “schedule” the driver configured for gold or silver. 
Rescheduling wasn’t really a consideration…
 
 German
 
 From: Doug Wiegley
 Reply-To: "OpenStack Development Mailing List (not for usage questions)"
 Date: Monday, February 1, 2016 at 8:17 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends
 
 Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
intention was that any driver you put in a single flavor had equivalent 
capabilities/plumbed to the same networks/etc.
 
 doug
 
 
 On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:
 
 
 Hi all,
 
 I've been working on an implementation of the multiple L3 backends RFE[1] 
using the flavor framework and I've run into some snags with the use-cases.[2]
 
 The first use cases are relatively straightforward where the user requests a 
specific flavor and that request gets dispatched to a driver associated with 
that flavor via a service profile. However, several of the use-cases are based 
around the idea that there is a single flavor with multiple drivers and a 
specific driver will need to be used depending on the placement of the router 
interfaces. i.e. a router cannot be bound to a driver until an interface is 
attached.
 
 This creates some painful coordination problems amongst drivers. For example, 
say the first two networks that a user attaches a router to can be reached by 
all drivers because they use overlays so the first driver chosen by the 
framework works fine. Then the user connects to an external network which is 
only reachable by a different driver. Do we immediately reschedule the entire 
router at that point to the other driver and interrupt the traffic between the 
first two networks?
 
 Even if we are fine with a traffic interruption for rescheduling, what should 
we do when a failure occurs half way through switching over because the new 
driver fails to attach to one of the networks (or the old driver fails to 
detach from one)? It would seem the correct API experience would be switch 
everything back and then return a failure to the caller trying to add an 
interface. This is where things get messy.
 
 If there is a failure during the switch back, we now have a single router's 
resources smeared across two drivers. We can drop the router into the ERROR 
state and re-attempt the switch in a periodic task, or maybe just leave it 
broken.
 
 How should we handle this much orchestration? Should we pull in something like 
taskflow, or maybe defer that use case for now?
 
 What I want to avoid is what happened with ML2 where error handling is still a 
TODO in several cases. (e.g. Any post-commit update or delete failures in 
mechanism drivers will not trigger a revert in state.)
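 
 To make the failure mode concrete, here is a minimal sketch of the
 rebind-with-rollback flow, assuming a hypothetical driver attach/detach API
 (not the actual flavors framework):
 
    # Hedged sketch of the rescheduling flow described above; attach/detach
    # are hypothetical driver calls, not the actual flavors framework API.
    def reschedule_router(router, networks, old_driver, new_driver):
        attached = []
        try:
            for net in networks:
                new_driver.attach(router, net)   # may fail halfway through
                attached.append(net)
            for net in networks:
                old_driver.detach(router, net)
        except Exception:
            # Switch everything back before surfacing the error to the caller.
            for net in attached:
                try:
                    new_driver.detach(router, net)
                except Exception:
                    # The rollback itself failed: the router's resources are
                    # now smeared across two drivers. Mark ERROR and let a
                    # periodic task (or taskflow-style revert machinery)
                    # reconcile later.
                    router.status = 'ERROR'
                    raise
            raise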
 
 1. https://bugs.launchpad.net/neutron/+bug/1461133
 2. 
https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases
 
 --
 Kevin Benton
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [QA][Neutron] IPv6 related intermittent test failures

2016-02-02 Thread Brian Haley

On 02/02/2016 10:03 PM, Matthew Treinish wrote:

On Tue, Feb 02, 2016 at 05:09:47PM -0800, Armando M. wrote:

Folks,

We have some IPv6 related bugs [1,2,3] that have been lingering for some
time now. They have been hurting the gate (e.g. [4] the most recent
offending failure) and since it looks like they have been without owners
nor a plan of action for some time, I made the hard decision of skipping
them [5] ahead of the busy times ahead.


So TBH I don't think the failure rates for these tests are really at a point
necessitating a skip:

http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac
http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os
http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os

(also just a cool side-note, you can see the very obvious performance regression
caused by the keystonemiddleware release and when we excluded that version in
requirements)

Well, test_dualnet_dhcp6_stateless_from_os is kinda there with a ~10% failure
rate, but the other 2 really aren't. I normally would be -1 on the skip patch
because of that. We try to save the skips for cases where the bugs are really
severe and preventing productivity at a large scale.

But, in this case these ipv6 tests are kind of out of place in tempest. Having
all the permutations of possible ip allocation configurations always seemed a
bit too heavy handed. These tests are also consistently in the top 10 slowest
for a run. We really should have trimmed down this set a while ago so we only
have a single case in tempest. Neutron should own the other possible
configurations as an in-tree test.

Brian Haley has a patch up from Dec. that was trying to clean it up:

https://review.openstack.org/#/c/239868/


I just updated that to mark six of the eight tests as "slow" per your previous 
comment, so that only the dual-NIC/dual-stack tests run in the gate; the 
others will run in the periodic nightly job.


http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-all-master

Will help lessen the impact until we can determine if it's the test or Neutron.
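
For reference, the tagging itself is just a decorator; a rough sketch (in
this era it lived in tempest.test, and it has since moved, so double-check
the import path in your tree before copying):

    # Rough sketch of tagging a tempest test as "slow"; import path and
    # base class are simplified for illustration.
    from tempest import test


    class TestGettingAddress(test.BaseTestCase):  # heavily simplified

        @test.attr(type='slow')  # filtered out of gate runs, kept in periodic
        def test_multi_prefix_slaac(self):
            pass  # the real body lives in tempest.scenario.test_network_v6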

-Brian


We probably should revisit that soon, since quite clearly no one is looking at
these right now.


-Matt Treinish




Now one might argue that skipping them is counterproductive because it may
allow other regressions to sneak in, but I am hoping that this
controversial action will indeed smoke out the right folks.

Comments welcome.

Regards,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1477192
[2] https://bugs.launchpad.net/neutron/+bug/1509004
[3] https://bugs.launchpad.net/openstack-gate/+bug/1540983
[4]
http://logs.openstack.org/37/264937/5/gate/gate-tempest-dsvm-neutron-full/afeaabd//logs/testr_results.html.gz
[5] https://review.openstack.org/#/c/275457/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain Specific Roles vs Local Groups

2016-02-02 Thread Yee, Guang
I presume there’s a spec coming for this “seductive approach”? I'm not sure I 
get all of it. From what’s been described here, conceptually, aren’t “local 
groups”, DSRs, and role groups the same thing?


Guang


From: Henry Nash [mailto:henryna...@mac.com]
Sent: Monday, February 01, 2016 3:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone] Domain Specific Roles vs Local Groups

Hi

During the recent keystone midcycle, it was suggested that an alternative 
to domain specific roles (see spec: 
https://github.com/openstack/keystone-specs/blob/master/specs/mitaka/domain-specific-roles.rst
 and code patches starting at: https://review.openstack.org/#/c/261846/) might 
be to somehow re-use the group concept. This was actually something we had 
discussed in previous proposals for this functionality. As I mentioned during 
the last day, while this is a seductive approach, it doesn’t actually scale 
well (or in fact provide the right abstraction). The best way to illustrate 
this is with an example:

Let’s say a customer is being hosted by a cloud provider. The customer has 
their own domain containing their own users and groups, to keep them segregated 
from other customers. The cloud provider, wanting to attract as many different 
types of customer as possible, has created a set of fine-grained global roles 
tied to APIs via the policy files. The domain admin of the customer wants to 
create a collection of 10 such fine-grained roles that represent some function 
that is meaningful to their setup (perhaps it’s a job that allows you to monitor 
resources and fix a subset of problems).

With domain specific roles (DSR), the domain admin creates a DSR (which is 
just a role with a domain_id attribute), and then adds the 10 global policy 
roles required using the implied roles API. They can then assign this DSR to 
all the projects they need to, probably as a group assignment (where the groups 
could be local, federated or LDAP). One assignment per project is required, so 
if there were, over time, 100 projects, then that’s 100 assignments. Further, 
if they want to add another global role (maybe to allow access to a new API) to 
that DSR, then it’s a single API call to do it.

The proposal to use groups instead would work something like this: We would 
support a concept of “local groups” in keystone, that would be independent of 
whatever groups the identity backend was mapped to. In order to represent the 
DSR, a local group would be created (perhaps named after the functional 
job that members of the group could carry out). Users who could carry out this 
function would be added to this group (presumably we might also have to support 
“remote” groups being members of such local groups, a concept we don’t really 
support today, but not too much of a stretch). This group would then need to be 
assigned to each project in turn, but for each of the 10 global roles that this 
“DSR equivalent” provided in turn (so an immediate increase by a factor of N 
API calls, where N is the number of roles per DSR) - so 1000 assignments in our 
example. If the domain admin wanted to add a new role to (or remove a role 
from) the “DSR”, they would have to do another assignment to each project that 
this “DSR” was being used (100 new assignments in our example).  Again, I would 
suggest, much less convenient.

Given the above, I believe the current DSR proposal does provide the right 
abstraction and scalability, and we should continue to review and merge it as 
planned. Obviously this is still dependent on Implied Roles (either in its 
current form, or a modified version). Alternative code that does a 
one-level-only inference as part of DSRs does exist (from an earlier attempt), but 
I don’t think we want to do that if we are going to have any kind of implied 
roles.

Henry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Nominating Vijendar Komalla for Solum core

2016-02-02 Thread ashish.jain14
My +1


From: Devdatta Kulkarni 
Sent: Wednesday, February 3, 2016 1:03:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Nominating Vijendar Komalla for Solum core

Hi team,

I would like to propose Vijendar Komalla for our core team. Vijendar has been 
actively contributing to Solum for several months now, submitting patches, 
providing great reviews, and participating in our IRC meetings and on the 
Solum IRC channel.
You can find Vijendar's contributions at [1][2].

Please respond with your votes.

Regards,
Devdatta

[1] http://stackalytics.com/?module=solum&user_id=vijendar-komalla
[2] http://stackalytics.com/?module=python-solumclient&user_id=vijendar-komalla
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain Specific Roles vs Local Groups

2016-02-02 Thread Morgan Fainberg
On Feb 2, 2016 19:38, "Yee, Guang"  wrote:
>
> I presume there’s a spec coming for this “seductive approach”? Not sure
if I get all of it. From what’s been described here, conceptually, isn’t
“local groups”, DSRs, or role groups the same thing?
>

Subtle differences. Local groups would be locked to a specific scope /
group of scopes. A Domain Specific Role (don't use the initialism/acronym,
it's overloaded) would be global and could be assigned to many various scopes.

E.g. a local group would be roles x, y, z on domain q.

A domain specific role would be "role a, which is roles x, y, z", and works
like any other role for a user/project (or domain) combination.

We have all the code to do the local groups today.

--M
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Neutron] IPv6 related intermittent test failures

2016-02-02 Thread Armando M.
On 2 February 2016 at 19:03, Matthew Treinish  wrote:

> On Tue, Feb 02, 2016 at 05:09:47PM -0800, Armando M. wrote:
> > Folks,
> >
> > We have some IPv6 related bugs [1,2,3] that have been lingering for some
> > time now. They have been hurting the gate (e.g. [4] the most recent
> > offending failure) and since it looks like they have been without owners
> > nor a plan of action for some time, I made the hard decision of skipping
> > them [5] ahead of the busy times ahead.
>
> So TBH I don't think the failure rates for these tests are really at a point
> necessitating a skip:
>
>
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac
>
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os
>
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
>
> (also just a cool side-note, you can see the very obvious performance
> regression
> caused by the keystonemiddleware release and when we excluded that version
> in
> requirements)
>
> Well, test_dualnet_dhcp6_stateless_from_os is kinda there with a ~10%
> failure
> rate, but the other 2 really aren't. I normally would be -1 on the skip
> patch
> because of that. We try to save the skips for cases where the bugs are
> really
> severe and preventing productivity at a large scale.
>

I am being overly aggressive here, just because I am conscious of the time
of the year :)


>
> But, in this case these ipv6 tests are kind of out of place in tempest.
> Having
> all the permutations of possible ip allocation configurations always
> seemed a
> bit too heavy handed. These tests are also consistently in the top 10
> slowest
> for a run. We really should have trimmed down this set a while ago so
> we only
> have a single case in tempest. Neutron should own the other possible
> configurations as an in-tree test.
>

+1


>
> Brian Haley has a patch up from Dec. that was trying to clean it up:
>
> https://review.openstack.org/#/c/239868/
>
> We probably should revisit that soon, since quite clearly no one is
> looking at
> these right now.
>
>
I thought that had merged already... my memory doesn't serve me as well as it 
used to :(


>
> -Matt Treinish
>
>
> >
> > Now one might argue that skipping them is counterproductive because it
> may
> > allow other regressions to sneak in, but I am hoping that this
> > controversial action will indeed smoke out the right folks.
> >
> > Comments welcome.
> >
> > Regards,
> > Armando
> >
> > [1] https://bugs.launchpad.net/neutron/+bug/1477192
> > [2] https://bugs.launchpad.net/neutron/+bug/1509004
> > [3] https://bugs.launchpad.net/openstack-gate/+bug/1540983
> > [4]
> >
> http://logs.openstack.org/37/264937/5/gate/gate-tempest-dsvm-neutron-full/afeaabd//logs/testr_results.html.gz
> > [5] https://review.openstack.org/#/c/275457/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain Specific Roles vs Local Groups

2016-02-02 Thread Adam Young

On 02/02/2016 10:37 PM, Yee, Guang wrote:


I presume there’s a spec coming for this “seductive approach”? Not 
sure if I get all of it. From what’s been described here, 
conceptually, aren’t “local groups”, DSRs, and role groups the same thing?


Guang

*From:* Henry Nash [mailto:henryna...@mac.com]
*Sent:* Monday, February 01, 2016 3:50 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* [openstack-dev] [keystone] Domain Specific Roles vs Local 
Groups


Hi

During the recent keystone midcycle, it was suggested that an 
alternative to domain specific roles (see spec: 
https://github.com/openstack/keystone-specs/blob/master/specs/mitaka/domain-specific-roles.rst and 
code patches starting at: https://review.openstack.org/#/c/261846/) 
might be to somehow re-use the group concept. This was actually 
something we had discussed in previous proposals for this 
functionality. As I mentioned during the last day, while this is a 
seductive approach, it doesn’t actually scale well (or in fact provide 
the right abstraction). The best way to illustrate this is with an 
example:


Let’s say a customer is being hosted by a cloud provider. The customer 
has their own domain containing their own users and groups, to keep 
them segregated from other customers. The cloud provider, wanting to 
attract as many different types of customer as possible, has created a 
set of fine-grained global roles tied to APIs via the policy files. 
The domain admin of the customer wants to create a collection of 10 
such fine-grained roles that represent some function that is 
meaningful to their setup (perhaps it’s job that allows you to monitor 
resources and fix a subset of problems).


With domain specific roles (DSR), the domain admin creates a DSR 
(which is just a role with a domain_id attribute), and then adds the 
10 global policy roles required using the implied roles API. They can 
then assign this DSR to all the projects they need to, probably as a 
group assignment (where the groups could be local, federated or LDAP). 
One assignment per project is required, so if there were, over time, 
100 projects, then that’s 100 assignments. Further, if they want to 
add another global role (maybe to allow access to a new API) to that 
DSR, then it’s a single API call to do it.


The proposal to use groups instead would work something like this: We 
would support a concept of “local groups” in keystone, that would be 
independent of whatever groups the identity backend was mapped to. In 
order to represent the DSR, a local group would be created (perhaps 
named after the functional job that members of the group could carry 
out). Users who could carry out this function would be added to this 
group (presumably we might also have to support “remote” groups being 
members of such local groups, a concept we don’t really support today, 
but not too much of a stretch). This group would then need to be 
assigned to each project in turn, but for each of the 10 global roles 
that this “DSR equivalent” provided in turn (so an immediate increase 
by a factor of N API calls, where N is the number of roles per DSR) - 
so 1000 assignments in our example. If the domain admin wanted to add 
a new role to (or remove a role from) the “DSR”, they would have to do 
another assignment to each project that this “DSR” was being used (100 
new assignments in our example).  Again, I would suggest, much less 
convenient.




Let me see if I can say the same thing more clearly:

Groups will only map a single set of users to projects.  If you have 
two or more distinct sets of users, and you want to map them to two 
distinct sets of projects, groups won't help.  You will end up having to 
duplicate the structure of the role assignments for each group.



DSRs are essentially templates of role assignments.  If you want to make 
it so users of your cloud get only operations on VMs, read on glance, 
and that is it, you create a domain specific role which points only to 
those system defined roles, and you use that.



You can't do it with groups alone.

Given the above, I believe the current DSR proposal does provide the 
right abstraction and scalability, and we should continue to review 
and merge it as planned. Obviously this is still dependent on Implied 
Roles (either in its current form, or a modified version). Alternative 
code that does a one-level-only inference as part of DSRs does exist (from 
an earlier attempt), but I don’t think we want to do that if we are 
going to have any kind of implied roles.


Henry



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] - L3 flavors and issues with usecasesfor multiple L3 backends

2016-02-02 Thread rzang
Is it possible that the third router interface that the user wants to add will 
bind to a provider network that the chosen driver (for bare metal routers) 
cannot physically access, even though the chosen driver has the capability for 
that type of network? Is this a third dimension that needs to be taken into 
consideration besides flavors and capabilities? If this case is possible, it is 
a problem even if we require all the drivers in the same flavor to have the 
same capability set. 




-- Original --
From: "Kevin Benton"
Send time: Wednesday, Feb 3, 2016 9:43 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with 
usecasesfor multiple L3 backends




So flavors are for routers with different behaviors that you want the user to 
be able to choose from (e.g. High performance, slow but free, packet logged, 
etc). Multiple drivers are for when you have multiple backends providing the 
same flavor (e.g. The high performance flavor has several drivers for various 
bare metal routers). 
 On Feb 2, 2016 18:22, "rzang"  wrote:
What advantage can we get from putting multiple drivers into one flavor over 
strictly limiting each flavor to one driver (or whatever it is called).


Thanks,
Rui


-- Original --
From: "Kevin Benton"
Send time: Wednesday, Feb 3, 2016 8:55 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends




Choosing from multiple drivers for the same flavor is scheduling. I didn't mean 
automatically selecting other flavors. 
 On Feb 2, 2016 17:53, "Eichberger, German"  wrote:
Not that you could call it scheduling. The intent was that the user could pick 
the best flavor for his task (e.g. a gold router as opposed to a silver one). 
The system then would “schedule” the driver configured for gold or silver. 
Rescheduling wasn’t really a consideration…
 
 German
 
 From: Doug Wiegley
 Reply-To: "OpenStack Development Mailing List (not for usage questions)"
 Date: Monday, February 1, 2016 at 8:17 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends
 Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends
 
 Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
intention was that any driver you put in a single flavor had equivalent 
capabilities/plumbed to the same networks/etc.
 
 doug
 
 
 On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:
 
 
 Hi all,
 
 I've been working on an implementation of the multiple L3 backends RFE[1] 
using the flavor framework and I've run into some snags with the use-cases.[2]
 
 The first use cases are relatively straightforward where the user requests a 
specific flavor and that request gets dispatched to a driver associated with 
that flavor via a service profile. However, several of the use-cases are based 
around the idea that there is a single flavor with multiple drivers and a 
specific driver will need to be used depending on the placement of the router 
interfaces. i.e. a router cannot be bound to a driver until an interface is 
attached.
 
 This creates some painful coordination problems amongst drivers. For example, 
say the first two networks that a user attaches a router to can be reached by 
all drivers because they use overlays so the first driver chosen by the 
framework works fine. Then the user connects to an external network which is 
only reachable by a different driver. Do we immediately reschedule the entire 
router at that point to the other driver and interrupt the traffic between the 
first two networks?
 
 Even if we are fine with a traffic interruption for rescheduling, what should 
we do when a failure occurs half way through switching over because the new 
driver fails to attach to one of the networks (or the old driver fails to 
detach from one)? It would seem the correct API experience would be switch 
everything back and then return a failure to the caller trying to add an 
interface. This is where things get messy.
 
 If there is a failure during the switch back, we now have a single router's 
resources smeared across two drivers. We can drop the router into the ERROR 
state and re-attempt the switch in a periodic task, or maybe just leave it 
broken.
 
 How should we handle this much orchestration? Should we pull in something like 
taskflow, or maybe defer that use case for now?

[openstack-dev] [fuel] Fuel plugins: lets have some rules

2016-02-02 Thread Dmitry Borodaenko
It has been over a year since pluggable architecture was introduced in
Fuel 6.0, and I think it's safe to declare it an unmitigated success. A
search for "fuel-plugin" on GitHub brings up 167 repositories [0],
there are 63 Fuel plugin repositories on review.openstack.org [1], and 25 Fuel
plugins are listed in the DriverLog [2].

[0] https://github.com/search?q=fuel-plugin-
[1] 
https://review.openstack.org/#/admin/projects/?filter=openstack%252Ffuel-plugin-
[2] http://stackalytics.com/report/driverlog?project_id=openstack%2Ffuel

Even though the plugin engine is not yet complete (there still are
things you can do in Fuel core that you cannot do in a plugin), dozens
of deployers and developers [3] used it to expand Fuel capabilities
beyond the limitations of our default reference architecture.

[3] http://stackalytics.com/report/contribution/fuel-plugins-group/360

There's a noticeable bump in contributions around October 2015 after
Fuel 7.0 was released, most likely inspired by the plugin engine
improvements introduced in that version [4]. As we continue to expand
plugins capabilities, I expect more and more plugins to appear.

[4] 
https://git.openstack.org/cgit/openstack/fuel-docs/tree/pages/release-notes/v7-0/new_features/plugins.rst?h=stable/7.0

The question of how useful exactly all those plugins are is a bit harder
to answer. DriverLog isn't much help: less than half of Fuel plugins
hosted on OpenStack infrastructure are even registered there, and of
those that are, only 6 have CI jobs with recent successful runs. Does
this mean that 90% of Fuel plugins are broken and unmaintained? Not
necessarily, but it does mean that we have no way to tell.

An even harder question is: once we determine that some plugins are more
equal than others, what should we do about the less useful and the less
actively maintained?

To objectively answer both questions, we need to define support levels
for Fuel plugins and set some reasonable expectations about how plugins
can qualify for each level.

Level 3. Plugin is not actively supported

I believe that having hundreds of Fuel plugins out there on GitHub and
elsewhere is great, and we should encourage people to create more of
those and do whatever they like with them. Even a single-commit "deploy
and forget" plugin is useful as an idea, a source of inspiration, and a
starting point for other people who might want to take it further.

At this level, there should be zero expectations and zero obligations
between Fuel plugin writers and the OpenStack community. At the moment, the Fuel
plugin developers guide recommends [5] requesting a Gerrit repo in the
openstack/ namespace and set up branches, tags, CI, and a code review
process around it, aligned with OpenStack development process. Which is
generally a good idea, except for all the cases where it's too much
overhead and ends up not being followed closely enough to be useful.

[5] https://wiki.openstack.org/wiki/Fuel/Plugins#Repo

Instead of vague blanket recommendations, we should explicitly state that
it's fine to do none of that and just stay on GitHub, and that if you
intend to move to the next level and actively maintain your plugin, and
expect support with that from Fuel developers and other OpenStack
projects, these recommendations are not optional and must be fulfilled.

Level 2. Plugin is actively supported by its registered maintainers 

To support a Fuel plugin, we need to answer two fundamental questions:
Can we? Should we?

I think the minimum requirements to say "yes" to both are:

a) All of the plugin's source code is explicitly licensed under an
   OSI-approved license;

b) The plugin source code repository does not contain binary artefacts
   such as RPM packages or ISO images (*);

c) The plugin is registered in DriverLog;

d) Plugin maintainers listed in DriverLog have confirmed the intent to
   support the plugin;

e) Plugin repository on review.openstack.org has a voting CI job that is
   passing with the latest or, at least, previous major release of Fuel.

f) All deviations from the OpenStack development process (alternative
   issue trackers, mailing lists, etc.) are documented in the plugin's
   README file.

*  Aside from purely technical issues we're getting because git is not
   suitable for tracking binary files [6], contaminating the source code
   with opaque binary blobs makes it impossible to ensure that the
   plugin remains compliant with the open source requirement (a).

[6] http://lists.openstack.org/pipermail/openstack-dev/2016-January/083812.html

In addition to above requirements, we need to set up graceful
transitions from level 3 to level 2 and back. Meeting the requirements
should be easy (well, except rewriting commit history to get rid of
binary blobs under .git, I think it's reasonable to require plugin
developers to do this where applicable), and if maintainers step down or
go MIA, we should stash the code in a common repository
(fuel-plugins-contrib) where it can be recovered later.


Re: [openstack-dev] [QA][Neutron] IPv6 related intermittent test failures

2016-02-02 Thread Matthew Treinish
On Tue, Feb 02, 2016 at 05:09:47PM -0800, Armando M. wrote:
> Folks,
> 
> We have some IPv6 related bugs [1,2,3] that have been lingering for some
> time now. They have been hurting the gate (e.g. [4] the most recent
> offending failure) and since it looks like they have been without owners
> nor a plan of action for some time, I made the hard decision of skipping
> them [5] ahead of the busy times ahead.

So TBH I don't think the failure rates for these tests are really at a point
necessitating a skip:

http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac
http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os
http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os

(also just a cool side-note, you can see the very obvious performance regression
caused by the keystonemiddleware release and when we excluded that version in
requirements)

Well, test_dualnet_dhcp6_stateless_from_os is kinda there with a ~10% failure
rate, but the other 2 really aren't. I normally would be -1 on the skip patch
because of that. We try to save the skips for cases where the bugs are really
severe and preventing productivity at a large scale. 

But, in this case these ipv6 tests are kind of out of place in tempest. Having
all the permutations of possible ip allocation configurations always seemed a
bit too heavy handed. These tests are also consistently in the top 10 slowest
for a run. We really should have trimmed down this set a while ago so we only
have a single case in tempest. Neutron should own the other possible
configurations as an in-tree test.

Brian Haley has a patch up from Dec. that was trying to clean it up:

https://review.openstack.org/#/c/239868/

We probably should revisit that soon, since quite clearly no one is looking at
these right now.


-Matt Treinish


> 
> Now one might argue that skipping them is counterproductive because it may
> allow other regressions to sneak in, but I am hoping that this
> controversial action will indeed smoke out the right folks.
> 
> Comments welcome.
> 
> Regards,
> Armando
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1477192
> [2] https://bugs.launchpad.net/neutron/+bug/1509004
> [3] https://bugs.launchpad.net/openstack-gate/+bug/1540983
> [4]
> http://logs.openstack.org/37/264937/5/gate/gate-tempest-dsvm-neutron-full/afeaabd//logs/testr_results.html.gz
> [5] https://review.openstack.org/#/c/275457/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-02 Thread Doug Wiegley
The lbaas use case was something like having one flavor with hardware SSL 
offload and one that doesn’t, e.g. you can easily have multiple backends that 
can do both (in fact, you might even want to let the lower flavor provision 
onto the higher, if you have spare capacity on one and not the other.) And the 
initial “scheduler” in such cases was supposed to be a simple round robin or 
hash, to be revisited later, including the inevitable rescheduling problem, or 
oversubscription issue. It quickly becomes the same hairy wart that nova has 
to deal with, and all are valid use cases.
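
(For what it's worth, that initial “simple round robin” is tiny to write,
which is exactly why the rescheduling and oversubscription parts are the hard
90%. A purely illustrative sketch, not the actual flavors framework:)

    # Minimal illustration of the "simple round robin" driver choice
    # mentioned above; purely a sketch, not a real scheduler.
    import itertools


    class RoundRobinScheduler(object):
        def __init__(self, drivers):
            self._cycle = itertools.cycle(drivers)

        def pick(self, flavor):
            # Naive on purpose: ignores capacity, capabilities, and the
            # rescheduling/oversubscription problems discussed in this thread.
            return next(self._cycle)


    scheduler = RoundRobinScheduler(['driver-a', 'driver-b'])
    assert scheduler.pick('gold') == 'driver-a'
    assert scheduler.pick('gold') == 'driver-b'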

doug


> On Feb 2, 2016, at 6:43 PM, Kevin Benton  wrote:
> 
> So flavors are for routers with different behaviors that you want the user to 
> be able to choose from (e.g. High performance, slow but free, packet logged, 
> etc). Multiple drivers are for when you have multiple backends providing the 
> same flavor (e.g. The high performance flavor has several drivers for various 
> bare metal routers).
> 
> On Feb 2, 2016 18:22, "rzang" wrote:
> What advantage can we get from putting multiple drivers into one flavor over 
> strictly limiting each flavor to one driver (or whatever it is called).
> 
> Thanks,
> Rui
> 
> -- Original --
> From: "Kevin Benton"
> Send time: Wednesday, Feb 3, 2016 8:55 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
> for multiple L3 backends
> 
> Choosing from multiple drivers for the same flavor is scheduling. I didn't 
> mean automatically selecting other flavors.
> 
> On Feb 2, 2016 17:53, "Eichberger, German" wrote:
> Not that you could call it scheduling. The intent was that the user could 
> pick the best flavor for his task (e.g. a gold router as opposed to a silver 
> one). The system then would “schedule” the driver configured for gold or 
> silver. Rescheduling wasn’t really a consideration…
> 
> German
> 
> From: Doug Wiegley
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Monday, February 1, 2016 at 8:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
> for multiple L3 backends
> 
> Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
> intention was that any driver you put in a single flavor had equivalent 
> capabilities/plumbed to the same networks/etc.
> 
> doug
> 
> 
> On Feb 1, 2016, at 7:08 AM, Kevin Benton wrote:
> 
> 
> Hi all,
> 
> I've been working on an implementation of the multiple L3 backends RFE[1] 
> using the flavor framework and I've run into some snags with the use-cases.[2]
> 
> The first use cases are relatively straightforward where the user requests a 
> specific flavor and that request gets dispatched to a driver associated with 
> that flavor via a service profile. However, several of the use-cases are 
> based around the idea that there is a single flavor with multiple drivers and 
> a specific driver will need to be used depending on the placement of the 
> router interfaces. i.e. a router cannot be bound to a driver until an 
> interface is attached.
> 
> This creates some painful coordination problems amongst drivers. For example, 
> say the first two networks that a user attaches a router to can be reached by 
> all drivers because they use overlays so the first driver chosen by the 
> framework works fine. Then the user connects to an external network which is 
> only reachable by a different driver. Do we immediately reschedule the entire 
> router at that point to the other driver and interrupt the traffic between 
> the first two networks?
> 
> Even if we are fine with a traffic interruption for rescheduling, what should 
> we do when a failure occurs half way through switching over because the new 
> driver fails to attach to one of the networks (or the old driver fails to 
> detach from one)? It would seem the correct API experience would be switch 
> everything back and then return a failure to 

Re: [openstack-dev] [Fuel] [Fuel UI] Node role list grouping

2016-02-02 Thread Julia Aranovich
I support Vitaly's proposal, with the 'base' group name instead of 'controller'.
So, now we have the following suggestion for role list grouping:

BASE: controller, detach-* plugin roles, murano (if it will go to plugin)
COMPUTE: compute, virt, compute-vmware, ironic
STORAGE: cinder, cinder-block-device, cinder-vmware, ceph-osd
OTHER: base-os, mongo, zabbix

On Mon, Feb 1, 2016 at 8:46 PM Vitaly Kramskikh 
wrote:

> Folks,
>
> That's true, Nailgun is still using Role entity - in DB, API, plugins can
> provide new roles, etc., and it's not going away, at least in 9.0.
>
> I'm fine with proposed set of role groups, except the "controller" group.
> We don't have anything else but "controller" role in this group in the base
> installation, but there are plugins that can detach some services from the
> controller, like detach-database, detach-rabbitmq, etc. So these roles with
> detached services should also be in the "controller" group, but it looks a
> little illogical to me. So I'd prefer to go with something like "base" or
> "core" group.
>
> 2016-01-29 16:53 GMT+03:00 Bogdan Dobrelya :
>
>> On 29.01.2016 13:35, Vladimir Kuklin wrote:
>> >> We removed role as abstraction from library. It's very very artificial
>> >> abstraction. Instead we use tasks, grouping them to different
>> >> combinations. That allows plugin developers to adjust reference
>> >> architecture to their needs.
>>
>> I only replied to that. We did not remove role as abstraction
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread Henrique Truta
Hi Sean,

I've been working on v3-only related stuff for a while and I volunteer for
this too.

Henrique

Em ter, 2 de fev de 2016 às 05:41, Julien Danjou 
escreveu:

> On Mon, Feb 01 2016, Sean Dague wrote:
>
> > The revert is here - https://review.openstack.org/#/c/274703/ - and will
> > move it's way through the gate once the tests complete.
>
> So… yeah, but FTR this revert broke again at least Gnocchi's gate. The
> telemetry projects had already adapted to the v3 switch last week.
>
> We did not complain about the first break; we fixed our gate silently in
> just a few minutes, and we were done with it and resumed our
> daily routines. We were somehow happy with moving to a newer API, TBH,
> even if it was maybe a bit brutal in the first place.
>
> But then… reverting… Well. I don't think making sluggish, technically in
> debt or dead projects first-class citizens is a good choice. If they
> can't adapt, fast enough or at all, then… they should read this as a
> signal and figure out what their problem is and solve it?
>
> Food for thought.
>
> But well, don't worry – we already re-fixed our gate. :-)
>
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] Announcing our new Olso Project

2016-02-02 Thread Amrith Kumar
Was there a reason why this project did not go through the normal
oslo-incubator process? 

Seriously though, congratulations Ron on being added to oslo core.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140| GPG: 0x5e48849a9d21a29b 

On 02/01/2016 12:50 PM, Ronald Bradford wrote:
> The Olso team is proud to announce the release of Oslo Bingo.  In Oslo
> we like to spice up our release notes using meaningful random
> adjectives [1].
>
> Each month the Oslo team will select an adjective to be the Oslo Bingo
> word of the month.
>
> For February 2016 we have selected "jazzed" (from rlrossit).
>
> To play, simply pick the first Oslo project that will have release
> notes using our Bingo word of the month (i.e. jazzed). Check out
> recent release notes that selected "overjoyed" [2] and "jubilant" [3]
> to see what we mean.
>
> Entry is free for all at http://j.mp/Oslo-bingo [4]
>
> The winner each month will get a limited edition Oslo t-shirt,
> sponsored by HPE (quantity and sizes limited):
> http://j.mp/Oslo-bingo-prize
>
> More details at [5]
>
>
> [1] 
> http://git.openstack.org/cgit/openstack-infra/release-tools/tree/releasetools/release_notes.py#n33
> [2] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-January/085000.html
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-January/083797.html
> [4] http://j.mp/Oslo-bingo
> [5] https://etherpad.openstack.org/p/Oslo_Bingo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread gordon chung
yeah... the revert broke us across all telemetry projects since we fixed 
plugins to adapt to v3. i'm very much for adapting to v3 since it's lingered 
around for years. i think given the time elapsed, if it breaks them, tough. the 
only issue i had with the original patch was that it merged on a Friday with no mention.

On 01/02/2016 9:48 PM, Steve Martinelli wrote:

Thanks for bringing this up Sean.

I went ahead and documented all the ways the projects/gates/clients broke in an 
etherpad:  
https://etherpad.openstack.org/p/v3-only-devstack

These are all the projects that I know that were affected, if someone knows 
others, please add your findings.

Sean, you can count me in on the volunteering effort to get this straightened 
out.

Steve

Sean Dague  wrote on 2016/02/01 12:21:50 
PM:

> From: Sean Dague 
> To: openstack-dev 
> 
> Date: 2016/02/01 12:23 PM
> Subject: [openstack-dev] [all] towards a keystone v3 only devstack
>
> On Friday last week I hit the go button on a keystone v3 default patch
> change in devstack. While that made it through tests for all the tightly
> integrated projects, we really should have stacked up some other spot
> tests to see how this was going to impact the rest of the ecosystem.
> Novaclient, shade, osc, and a bunch of other things started faceplanting.
>
> The revert is here - https://review.openstack.org/#/c/274703/ - and will
> move it's way through the gate once the tests complete.
>
> Going forward I think we need a more concrete plan on this transition.
> I'm going to be -2 on any v3 related keystone changes in devstack until
> we do, as it feels like we need to revert one of these patches about
> every month for the last 6.
>
> I don't really care what format the plan takes, ML thread, wiki page,
> spec. But we need one, and an owner (probably on the keystone side) to
> walk us through how this transition goes. This is going to include some
> point in the future where:
>
> 1. devstack configures v3 and v2 always, and devstack issues a warning
> if v2 is enabled
> 2. devstack configures v3 only, v2 can be enabled and v2 enabled is a
> warning
> 3. devstack removes v2 support
>
> The transition to stage 2 and stage 3 requires using Depends-On to stack
> up some wider collection of tests to demonstrate that this works on
> novaclient, heat, shade, osc, and anything that comes forward as being
> broken by this last round. It's fine if we give people hard deadlines
> that they need to get their jobs sorted, but like the removal of
> extras.d, we need to be explicit about it.
>
> So, first off, we need a volunteer to step up to pull together this
> plan. Any volunteers here?
>
>-Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Preston L. Bannister
Oh, for the other folk reading, in QEMU you want to look at:

http://wiki.qemu.org/Features/IncrementalBackup

The above page looks to be current. The QEMU wiki seems to have a number of
stale pages that describe proposed function that was abandoned / never
implemented. Originally, I ended up reading the QEMU mailing list and
source code to figure out which bits were real. :)
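
For the curious, the moving parts on that page boil down to a couple of QMP
commands. A hedged sketch of driving them from Python follows; the socket
path and device/bitmap names are made up, and sync=incremental needs
QEMU >= 2.4:

    # Hedged sketch of QEMU's incremental backup primitives over QMP, per
    # the wiki page above. Socket path and device/bitmap names are made up.
    import json
    import socket

    sock = socket.socket(socket.AF_UNIX)
    sock.connect("/var/run/qemu-guest.qmp")   # hypothetical QMP socket
    reader = sock.makefile()

    def qmp(execute, arguments=None):
        # QMP is line-delimited JSON over a unix socket.
        cmd = {"execute": execute}
        if arguments:
            cmd["arguments"] = arguments
        sock.sendall((json.dumps(cmd) + "\n").encode())
        return json.loads(reader.readline())

    reader.readline()                          # consume the QMP greeting
    qmp("qmp_capabilities")                    # mandatory handshake

    # 1. Create a dirty bitmap; QEMU tracks writes to drive0 from here on.
    qmp("block-dirty-bitmap-add", {"node": "drive0", "name": "bitmap0"})

    # 2. Later, back up only the blocks the bitmap marked dirty.
    qmp("drive-backup", {"device": "drive0", "bitmap": "bitmap0",
                         "sync": "incremental",
                         "target": "/backups/inc.0.qcow2",
                         "format": "qcow2"})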





On Tue, Feb 2, 2016 at 4:04 AM, Preston L. Bannister 
wrote:

> To be clear, I work for EMC, and we are building a backup product for
> OpenStack (which at this point is very far along). The primary gap is a
> good means to efficiently extract changed-block information from OpenStack.
> About a year ago I worked through the entire Nova/Cinder/libvirt/QEMU
> stack, to see what was possible. The changes to QEMU (which have been
> in-flight since 2011) looked most promising, but when they would land was
> unclear. They are starting to land. This is big news. :)
>
> That is not the end of the problem. Unless the QEMU folk are perfect,
> there are likely bugs to be found when the code is put into production.
> (With more exercise, the sooner any problems can be identified and
> addressed.) OpenStack uses libvirt to talk to QEMU, and libvirt is a fairly
> thick abstraction. Likely there will want to be adjustments to libvirt.
> Bypassing Nova and chatting with libvirt directly is a bit suspect (but may
> be needed). There might be adjustments needed in Nova.
>
> To offer suggestions...
>
> Ekko is an *opinionated* approach to backup. This is not the only way to
> solve the problem. I happen very much like the approach, but as a *specific
> *approach, it probably does not belong in Cinder or Nova. (I believe it
> was Jay who offered a similar argument about backup more generally.)
>
> (Keep in mind QEMU is not the only hypervisor supported by Nova, even if it
> is the majority of use. Would you want to attempt a design that works for all
> hypervisors? I would not!  ...at least at this point. Also, last I checked
> the Cinder folk were a bit hung up on replication, as finding common
> abstractions across storage was not easy. This problem looks similar.)
>
> While wary of bypassing Nova/Cinder, my suggestion would be to be rude in the
> beginning, with every intent of becoming civil in the end.
>
> Start by talking to libvirt directly. (There was a bypass mechanism in
> libvirt that looked like it might be sufficient.) Break QEMU early, and get
> it fixed. :)
>
> When QEMU usage is working, talk to the libvirt folk about *proven*
> needs, and what is needed to become civil.
>
> When libvirt is updated (or not), talk to Nova folk about *proven* needs,
> and what is needed to become civil. (Perhaps simply awareness, or a small
> set of primitives.)
>
> It might take quite a while for the latest QEMU and libvirt to ripple
> through into OpenStack distributions. Getting any fixes into QEMU early (or
> addressing discovered gaps in needed function) seems like a good thing.
>
> All the above is a sufficiently ambitious project, just by itself. To my
> mind, that justifies Ekko as a unique, focused project.
>
>
>
>
>
>
> On Mon, Feb 1, 2016 at 4:28 PM, Sam Yaple  wrote:
>
>> On Mon, Feb 1, 2016 at 10:32 PM, Fausto Marzi 
>> wrote:
>>
>>> Hi Preston,
>>> Thank you. You saw Fabrizio in Vancouver, I'm Fausto, but it's all right,
>>> : P
>>>
>>> The challenge is interesting. If we want to build a dedicated backup API
>>> service (which is always what we wanted to do), probably we need to:
>>>
>>>
>>>- Move the backup features out of Nova and Cinder, as it wouldn't
>>>make much sense to me to have a Backup service and also have backups
>>>managed independently by Nova and Cinder.
>>>
>>>
>>> That said, I'm not a big fan of the following:
>>>
>>>- Interacting with the hypervisors and the volumes directly without
>>>passing through the Nova and Cinder API.
>>>
>>> Passing through the api will be a huge issue for extracting data due to
>> the sheer volume of data needed (TB through the api is going to kill
>> everything!)
>>
>>>
>>>- Adding any additional workload on the compute nodes or block
>>>storage nodes.
>>>- Computing incrementals, compression, and encryption is expensive. Having
>>>many simultaneous processes doing that may lead to bad behaviours on core
>>>services.
>>>
>>> These are valid concerns, but the alternative is still shipping the raw
>> data elsewhere to do this work, and that has its own issue in terms of
>> bandwidth.
>>
>>>
>>> My (flexible) thoughts are:
>>>
>>>- The feature is needed and is brilliant.
>>>- We should probably implement the newest features provided by the
>>>hypervisor in Nova and export them from the Nova API.
>>>- Create a plugin that is integrated with Freezer to leverage that
>>>new features.
>>>- Same apply for Cinder.
>>>- The VM and Volume backup feature is already available via Nova,
>>>
