Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Joshua Harlow
Sure, I can see it both ways.

It's not easy to find a perfect solution, especially in open source with such a 
diverse community. How do other projects handle this? I would think the kernel 
would have a similar issue, or Hadoop or other diverse and large open source 
projects.

Sent from my really tiny device...

On Aug 27, 2013, at 9:46 PM, Mike Spreitzer 
mspre...@us.ibm.com wrote:

Joshua, I do not think such a strict and coarse scheduling is a practical way 
to manage developers, who have highly individualized talents, backgrounds, and 
interests.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-28 Thread Michael Still
On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
 Hi all,

 We've been working hard during the last couple of weeks with some
 people. Brian Waldon helped a lot designing the Glance integration and
 driver testing. Dean Troyer helped a lot on bringing Docker support in
 Devstack[1]. On top of that, we got a lot of feedback on the Nova code
 review, which definitely helped to improve the code.

 The blueprint[2] explains what Docker brings to Nova and how to use it.

I have to say that this blueprint is a fantastic example of how we
should be writing design documents. It addressed almost all of my
questions about the integration.

However, it would be nice for this to be on Launchpad, that being
where we track blueprints. Would it be possible for you to move it
over there? Or just link to the design doc from there?

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-28 Thread Lucas Alvares Gomes
 So - calling for votes for Derek to become a TripleO core reviewer!

+1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Daniel P. Berrange
On Wed, Aug 28, 2013 at 12:45:26PM +1000, Michael Still wrote:
 [Concerns over review wait times in the nova project]
 
 I think that we're also seeing the fact that nova-core's are also
 developers. nova-core members have the same feature freeze deadline,
 and that means that to a certain extent we need to stop reviewing in
 order to get our own code ready by the deadline.
 
 The strength of nova-core is that its members are active developers,
 so I think a reviewer caste would be a mistake. I am also not saying
 that nova-core should get different deadlines (although more leniency
 with exceptions would be nice).

Agreed, I think it is very important for the core reviewers to also be
active developers, since working on the code is how you gain the knowledge
required to do high quality reviews.

 So, I think lower review rates around deadlines are just a fact of life.

This is a fairly common problem across all open source projects really.
People consistently wait until just before review deadlines to submit
their code. You have to actively encourage people to submit their code
well before deadlines / discourage them from waiting till the last
minute. Sometimes the best way to get people to learn this is the hard
way, by postponing their feature if it is submitted too close to the deadline
and too much other stuff is ahead of it in the queue. IOW we should
prioritize review of work whose authors submitted earlier to encourage
good practice with early submission.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Daniel P. Berrange
On Wed, Aug 28, 2013 at 03:43:21AM +, Joshua Harlow wrote:
 Why not a rotation though? I could see it being beneficial to, say, have a
 group of active developers code for a release, then have those
 developers rotate to a reviewer-only position (and rotate again for
 every release). This allows for a flow of knowledge between reviewers
 and a different set of coders (instead of a looping flow, since
 reviewers are also coders).
 
 For a big project like nova the workload could be spread out more
 like that.

I don't think any kind of rotation system like that is really
practical. Core team members need to have the flexibility to balance
their various conflicting workloads in a way that maximises their
own productivity.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Scheduler support for PCI passthrough

2013-08-28 Thread Gary Kotton
Hi,
Whilst reviewing the code I think that I have stumbled on an issue (I hope that 
I am mistaken). The change set (https://review.openstack.org/#/c/35749/) 
expects PCI stats to be returned from the host. There are a number of issues 
here that concern me, and I would like to know what the process is for 
addressing the fact that the compute node may not provide these statistics: for 
example, the driver may not have been updated to return the PCI stats, or the 
scheduler may have been upgraded prior to the compute node (what is the process 
for the upgrade?).
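To make the concern concrete, here is a minimal sketch (class and helper names
are hypothetical, not the actual change set) of the kind of guard the scheduler
filter would need when a host does not report PCI stats:

    class PciPassthroughFilter(object):
        """Hypothetical sketch of a PCI-aware scheduler filter."""

        def host_passes(self, host_state, filter_properties):
            pci_requests = filter_properties.get('pci_requests')
            if not pci_requests:
                # Nothing PCI-related was requested; any host will do.
                return True
            pci_stats = getattr(host_state, 'pci_stats', None)
            if pci_stats is None:
                # Driver not updated, or compute node not yet upgraded:
                # no stats were reported, so fail closed for this host.
                return False
            # support_requests() is an assumed helper on the stats object.
            return pci_stats.support_requests(pci_requests)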
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-28 Thread Daniel P. Berrange
On Wed, Aug 28, 2013 at 06:00:50PM +1000, Michael Still wrote:
 On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
  Hi all,
 
  We've been working hard during the last couple of weeks with some
  people. Brian Waldon helped a lot designing the Glance integration and
  driver testing. Dean Troyer helped a lot on bringing Docker support in
  Devstack[1]. On top of that, we got a lot of feedback on the Nova code
  review, which definitely helped to improve the code.
 
  The blueprint[2] explains what Docker brings to Nova and how to use it.
 
 I have to say that this blueprint is a fantastic example of how we
 should be writing design documents. It addressed almost all of my
 questions about the integration.

Yes, Sam (and any of the other Docker guys involved) have been great at
responding to reviewers' requests to expand their design document. The
latest update has really helped in understanding how this driver works
in the context of openstack from an architectural and functional POV.

 However, it would be nice for this to be on Launchpad, that being
 where we track blueprints. Would it be possible for you to move it
 over there? Or just link to the design doc from there?

Their blueprint on launchpad already links to the doc in fact, so no
change is needed.

  https://blueprints.launchpad.net/nova/+spec/new-hypervisor-docker

links to

  
https://github.com/dotcloud/openstack-docker/blob/master/docs/nova_blueprint.md

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting reminder August 29 18:00 UTC

2013-08-28 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in the #openstack-meeting-alt 
channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_August.2C_29

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20130829T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meeting agenda for Wed Aug 28 at 2100 UTC

2013-08-28 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Wed Aug 28 at 2100 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Review Havana-3 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-3
* Ditch Alembic for Havana? (sandy) 
* any plans on expanding Ceilometer coverage (ie. capacity planning,
  optimization, dashboard, analytics) -- gordc
  * are they items for other projects/products to cover?
* expanding metrics captured -- gordc
  * plans to support beyond KVM (libvirt)? i.e. VMware, IBM Power, IBM z/VM,
    Hyper-V, Citrix Xen
* Release python-ceilometerclient? 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Robert Collins
On 28 August 2013 21:13, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Aug 28, 2013 at 03:43:21AM +, Joshua Harlow wrote:

 For a big project like nova the workload could be spread out more
 like that.

 I don't think any kind of rotation system like that is really
 practical. Core team members need to have the flexibility to balance
 their various conflicting workloads in a way that maximises their
 own productivity.

So does everyone else, surely? Are you saying 'I don't think I can
commit to regular reviewing', or are you saying 'all reviewers will be
unable to commit to regular reviewing'? Or something else?

There are what - 300? ATCs for nova, and 20 core *reviewers*.

http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
(taking 90 days to avoid some of the peak bulge).

Total reviews: 11327 (6311 by core)
Total reviewers: 290

That's 125 reviews a day, or 6 per core reviewer if core reviewers did
every single review, or 3 reviews a day at the moment. Adjusting up by
2/7 to cover weekends, that's 8 per day if core did every review, and 4
per day for the ones they actually did over that period.

Say it takes 20m to do a good review; that's 2.5 hours, more or less.
That's daily - that's certainly a large enough time period that I can
see a rotation being potentially useful, for folk that need to discuss
their patch in realtime (to reduce roundtrips etc - I think everyone
knows how useful that can be).

Separately, look at the math - if we assume that core reviewers are
twice as productive within Nova due to familiarity with more code, we
can expect at most 40 people's worth of contributions from nova-core,
vs 280 odd from ~nova-core, all other things being equal. If
reviewing is (say) 1/10th the time of writing the code, then 260
contributors would create a review load that can fully saturate 52
reviewers (/10 * 2 for the two +2s).
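The same arithmetic, restated as a quick script using the rounded figures
above (a sanity check on the numbers, not new data):

    # Review-load arithmetic from the 90-day figures above.
    total_reviews = 11327
    core_reviews = 6311
    num_cores = 20
    days = 90.0

    per_day = total_reviews / days                        # ~125 reviews/day
    if_cores_did_all = per_day / num_cores                # ~6 per core per day
    cores_actually_did = core_reviews / days / num_cores  # ~3.5 per core per day

    # "Adjusting up by 2/7 to cover weekends", i.e. multiply by 9/7.
    weekday_all = if_cores_did_all * 9 / 7                # ~8 per day
    weekday_actual = cores_actually_did * 9 / 7           # ~4-5 per day

    # At 20 minutes per good review:
    hours_per_day = weekday_all * 20 / 60.0               # ~2.7 hours, i.e.
                                                          # "2.5 hours, more or less"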

Are we there yet? Arguably yes - there are lots of active reviewers
doing as many reviews as most of the core team. And it's taking a week
to review things at the moment, which means plenty of time for things
to change under the patch and actually cause more review work due to
rework. OTOH those are very coarse numbers with lots of assumptions.
My main point is that scaling Nova development is hard, the problems
are real, and right now it's a significant time investment needed for
anyone wanting to become a core reviewer.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Daniel P. Berrange
On Wed, Aug 28, 2013 at 10:29:23PM +1200, Robert Collins wrote:
 On 28 August 2013 21:13, Daniel P. Berrange berra...@redhat.com wrote:
  On Wed, Aug 28, 2013 at 03:43:21AM +, Joshua Harlow wrote:
 
  For a big project like nova the workload could be spread out more
  like that.
 
  I don't think any kind of rotation system like that is really
  practical. Core team members need to have the flexibility to balance
  their various conflicting workloads in a way that maximises their
  own productivity.
 
 So does everyone else, surely? Are you saying 'I don't think I can
 commit to regular reviewing', or are you saying 'all reviewers will be
 unable to commit to regular reviewing'? Or something else?

No, IIUC, Joshua was suggesting that core team members spend one cycle
doing reviews only, with no coding, and then reverse for the next cycle. 
That is just far too coarse/crude. Core team members need to be free to
balance their time between reviews and coding work on an ongoing basis,
just as any other member of the community can.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Robert Collins
On 28 August 2013 22:39, Daniel P. Berrange berra...@redhat.com wrote:

 No, IIUC, Joshua was suggesting that core team members spend one cycle
 doing reviews only, with no coding, and then reverse for the next cycle.
 That is just far too coarse/crude. Core team members need to be free to
 balance their time between reviews and coding work on an ongoing basis,
 just as any other member of the community can.

Oh! Yes, that's way too heavy.

I do wonder about tweaking the balance more [or scaling the review
team :)], but only-reviews would drive anyone batty.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Add a library js for creating charts

2013-08-28 Thread Ladislav Smola
The Rickshaw library is in master. Building the reusable charts 
on top of it is in progress.


On 08/27/2013 02:51 PM, Chmouel Boudjnah wrote:

Julien Danjou jul...@danjou.info writes:


It sounds like a good plan to pick Rickshaw. Better to build on top of
it and contribute back to it, rather than starting cold or reinventing the
wheel.

+1

Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Container-sync automation + enabling a placement engine

2013-08-28 Thread David Hadas

Hi,

We see a need for enhancing Swift to support a federation of swift clusters
such that a set of clusters can work as a unified namespace and allow
control over placement between the clusters.  This requires multiple
extensions to swift.

To promote community input and work in this area, I have today uploaded
https://github.com/davidhadas/Autosync.

The uploaded autosync.py middleware allows working with two clusters as a
single entity.
It uses the container-sync mechanism but removes the need for the admin to
configure each and every container with appropriate sync information and
keys.

This code was designed with multiple clusters in mind and assumes that
there is a placement engine deciding which container is placed where (not
included).
As an alternative, configuration directives can be used to define the
default placement (shown as an example configuration).
Yet currently it is aimed only at a first case in which there is a primary
and a backup (secondary) cluster.
Much more work is needed to support a federation of clusters, yet this code
may be useful to some even as is (with some cleanups etc).
Anyone seeking to join and help push this direction forward is most
welcome.
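
For readers unfamiliar with how such a middleware hooks into the proxy, the
sketch below shows the standard Swift WSGI middleware shape that autosync.py
presumably follows; the class name, config key and header handling are
illustrative assumptions, not the actual code from the repository:

    class AutosyncMiddleware(object):
        """Sketch of a proxy middleware automating container-sync setup."""

        def __init__(self, app, conf):
            self.app = app
            # A real placement engine would decide this; the sketch just
            # reads a default peer from the filter config (assumed key).
            self.peer_cluster = conf.get('peer_cluster')

        def __call__(self, env, start_response):
            # On container PUT/POST, a middleware like this could inject
            # the x-container-sync-to / x-container-sync-key headers that
            # the admin otherwise has to configure per container by hand.
            return self.app(env, start_response)


    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)

        def autosync_filter(app):
            return AutosyncMiddleware(app, conf)
        return autosync_filter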


DH


Regards,
David Hadas,
Openstack Swift ATC, Architect, Master Inventor
IBM Research Labs, Haifa
Tel: Int +972-4-829-6104
Fax: Int +972-4-829-6112


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Alex Glikson
It seems that the main concern was that the overridden scheduler 
properties are taken from the flavor, and not from the aggregate. In fact, 
there was a consensus that this is not optimal.

I think that we can still make some progress in Havana towards 
per-aggregate overrides, generalizing on the recently merged changes that 
do just that -- for cpu and for memory with FilterScheduler (and 
leveraging a bit from the original multi-sched patch). As follows:
1. individual filters will call get_config('abc') instead of CONF.abc 
(already implemented in the current version of the multi-sched patch, 
e.g., 
https://review.openstack.org/#/c/37407/30/nova/scheduler/filters/io_ops_filter.py
)
2. get_config() will check whether abc is defined in the aggregate, and if 
so will return the value from the aggregate, and CONF.abc otherwise 
(already implemented in recently merged AggregateCoreFilter and 
AggregateRamFilter -- e.g., 
https://review.openstack.org/#/c/33949/2/nova/scheduler/filters/core_filter.py
).
3. add a global flag that would enable or disable aggregate-based 
overrides

This seems to be a relatively simple refactoring of existing code, still 
achieving an important portion of the original goals of this blueprint.
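
As a rough illustration, steps 1-3 amount to something like the following
(the attribute and flag names here are illustrative, not the actual patches):

    from oslo.config import cfg

    CONF = cfg.CONF

    def get_config(key, host_state):
        """Per-aggregate override for 'key' if defined, else the CONF value."""
        # Step 3: a global flag to enable/disable aggregate overrides
        # (hypothetical option name).
        if CONF.aggregate_config_overrides:
            # Step 2: check the host's aggregates for an override.
            for aggregate in getattr(host_state, 'aggregates', []):
                if key in aggregate.metadata:
                    return aggregate.metadata[key]
        return getattr(CONF, key)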
Of course, we should still discuss the longer-term plan around scheduling 
policies at the summit.

Thoughts?

Regards,
Alex




From:   Russell Bryant rbry...@redhat.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   27/08/2013 10:48 PM
Subject:[openstack-dev] [Nova] multiple-scheduler-drivers 
blueprint



Greetings,

One of the important things to strive for in our community is consensus.
 When there's not consensus, we should take a step back and see if we
need to change directions.

There has been a lot of iterating on this feature, and I'm afraid we
still don't have consensus around the design.  Phil Day has been posting
some really good feedback on the review.  I asked Joe Gordon to take a
look and provide another opinion.  He agreed with Phil that we really
need to have scheduler policies be a first class API citizen.

So, that pushes this feature out to Icehouse, as it doesn't seem
possible to get this done in the required timeframe for Havana.

If you'd really like to push to get this into Havana, please make your
case.  :-)

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Shawn Hartsock
tl;dr at the end... I can ramble on a bit.

I agree with Daniel.

I'm not a core reviewer, but I'm trying to think like one. Over the last few 
weeks I've divested myself of almost all coding tasks, instead trying to 
increase the size of the community that is actively contributing to my area of 
expertise. I have indeed gone batty! I've caught myself a few times, and the 
frustration of feeling like I couldn't contribute code even if I wanted to is 
getting to be a bit too much.

The common refrain I've heard is, "That's OpenSource," as if this is a natural 
state of affairs for OpenSource projects. I've been either on or around 
OpenSource projects for nearly 20 years at this point and I really feel this 
doesn't have to be the case. Any project is doomed to have an internal 
structure that mirrors the organization that maintains it. That means software 
beyond a certain scale becomes part engineering and part state-craft.

In OpenSource projects that I have worked on recently, the way scale was 
handled was to break up the project into pieces small enough for teams of 1 to 
5 to handle. The core framework developers worked on exposing API to the plugin 
developers. Each plugin developer would then focus on how their plugin could 
expose both additional API and leverage framework API. Feedback went from the 
application developers to the plugin developers and up to the core developers. 
This whole divide-and-conquer strategy was aided by the fact that we could lean 
heavily on a custom dependency management and code/binary distribution system 
leveraged inside the framework itself. It meant that package structure and 
distribution could be controlled by the community directly to suit its needs. 
That makes a powerful combination for building a flexible system but requires a 
fair amount of infrastructure in code and hardware.

It wasn't a perfect solution. This strategy meant that an application or 
deployment became the coordination of plugins mixed at run-time. While efforts 
were made to test common combinations, it was impossible to test all 
combinations. That often meant people in the field were using combinations that 
nobody on official teams had ever considered. Because the plugins weren't on 
the same release cycle as the core framework (and even in different code 
repositories and release infrastructures) a plugin could release weekly or once 
every few years depending on its needs and sub-community.

There is a separate dysfunction you'll see if you go down this path. Core API 
must necessarily lead plugin implementation ... which means you sometimes get 
nonsense API with no backing. To solve this a few plugins are deemed core 
plugins and march in-step with the API release cycle. Then there's the added 
burden of longer backward compatibility cycles that necessarily stretch longer 
and longer leaving deprecated API lying around for years as plugin developers 
are coaxed into leaving them behind (and subsequent plugin users are coaxed to 
upgrade). Some things slow down while others speed up. The core API's evolution 
slows, the plugin/driver speeds up. Is that a fair trade off? It's a judgement 
call. No right answer.

In the end you trade one kind of problem for another and one kind of 
coordination for another. There's no clean answer that just works for this kind 
of problem, and it's why we have so many different kinds of governments in the 
world. Because, ultimately, that's what human coordination becomes if you don't 
watch out, and one size does not fit all.

Based on my experiences this last cycle, I think nova is pretty well broken 
down already. Each driver is practically its own little group, the scheduler is 
surprisingly well fenced off, as are cells. As for our sub-team I think we 
could have moved much faster if we had been able to approve our own driver 
blueprints some of which have been in review since May and others which have 
30+ revisions updated every few days hoping for attention. It's part of why I 
moved to watching over people's work instead of doing my own and I now spend 
most of my time giving feedback to reviews other people are working on and 
seeking out expert opinions on other people's efforts. 

It's not a pleasant place to be and every time I pick up something to work on I 
either get pulled away or someone else picks up the job and finishes before I 
can even get started. I imagine this is much like what it is to be a core 
developer and that this contest of interest is the same strain the 
core-reviewers feel. You end up picking your own work and neglecting others or 
falling on the sword so other people can do their work and doing none of your 
own. Frankly, I don't want to use this strategy next cycle because it is far 
too unsatisfying for me.

BTW:
Anyone interested in this on an academic level, most of these ideas I have are 
from vague recollections of college readings of the work of W. Edwards Deming, 
Coase theorem, and more humorously and 

Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Thierry Carrez
Robert Collins wrote:
 So I'd like to throw two ideas into the mix.
 
 Firstly, consider having a rota - ideally 24x5 but that will need some
 more geographical coverage I suspect for many projects - of folk who
 spend a dedicated time period only reviewing.

We have been doing that in the past for Nova, with little success. The
reason why reviewday is called reviewday is because... well... there
were review days.

The wiki page was a bit eaten by the wiki conversion, but you can still
read it at:

https://wiki.openstack.org/wiki/Nova/ReviewDays

In the end, a strict rotation didn't work out because people just didn't
review on their review day, but rather when they have one hour free
waiting for a patch to pass gate or whatever. In the end, the rotation
gave us way worse results than random ad-hoc reviewing, because people
would stop reviewing on days other than their review day, and would
regularly skip their review day altogether.

Furthermore, there is some specialization going on: I prefer the two Xen
experts in nova-core to review one hour every two days rather than one
day every two weeks... because then Xen patches get better review
roundtrip times.

So I'm not convinced *at all* that a reboot of this would yield better
results.

 Launchpad [the
 project, not the site] did this with considerable success : every
 qualified reviewer committed to a time slot and didn't *try* to code -
 they focused on reviews.

The key difference is that every qualified reviewer was employed by
the same company, and the review day was enforced by their management.
The amount of patches is also significantly lower, and there is less
specialization effect.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Joe Gordon
On Aug 28, 2013 9:42 AM, Shawn Hartsock hartso...@vmware.com wrote:

 tl;dr at the end... I can ramble on a bit.

 I agree with Daniel.

 I'm not a core reviewer, but I'm trying to think like one. Over the last
few weeks I've divested myself of almost all coding tasks, instead trying
to increase the size of the community that is actively contributing to my
area of expertise. I have indeed gone batty! I've caught myself a few
times, and the frustration of feeling like I couldn't contribute code even
if I wanted to is getting to be a bit too much.

 The common refrain I've heard is, "That's OpenSource," as if this is a
natural state of affairs for OpenSource projects. I've been either on or
around OpenSource projects for nearly 20 years at this point and I really
feel this doesn't have to be the case. Any project is doomed to have an
internal structure that mirrors the organization that maintains it. That
means software beyond a certain scale becomes part engineering and part
state-craft.

 In OpenSource projects that I have worked on recently, the way scale was
handled was to break up the project into pieces small enough for teams of 1
to 5 to handle. The core framework developers worked on exposing API to the
plugin developers. Each plugin developer would then focus on how their
plugin could expose both additional API and leverage framework API.
Feedback went from the application developers to the plugin developers and
up to the core developers. This whole divide-and-conquer strategy was aided
by the fact that we could lean heavily on a custom dependency management
and code/binary distribution system leveraged inside the framework itself.
It meant that package structure and distribution could be controlled by the
community directly to suit its needs. That makes a powerful combination for
building a flexible system but requires a fair amount of infrastructure in
code and hardware.

 It wasn't a perfect solution. This strategy meant that an application or
deployment became the coordination of plugins mixed at run-time. While
efforts were made to test common combinations, it was impossible to test
all combinations. That often meant people in the field were using
combinations that nobody on official teams had ever considered. Because the
plugins weren't on the same release cycle as the core framework (and even
in different code repositories and release infrastructures) a plugin could
release weekly or once every few years depending on its needs and
sub-community.

 There is a separate dysfunction you'll see if you go down this path. Core
API must necessarily lead plugin implementation ... which means you
sometimes get nonsense API with no backing. To solve this a few plugins are
deemed core plugins and march in-step with the API release cycle. Then
there's the added burden of longer backward compatibility cycles that
necessarily stretch longer and longer leaving deprecated API lying around
for years as plugin developers are coaxed into leaving them behind (and
subsequent plugin users are coaxed to upgrade). Some things slow down while
others speed up. The core API's evolution slows, the plugin/driver speeds
up. Is that a fair trade off? It's a judgement call. No right answer.

 In the end you trade one kind of problem for another and one kind of
coordination for another. There's no clean answer that just works for this
kind of problem, and it's why we have so many different kinds of governments
in the world. Because, ultimately, that's what human coordination becomes
if you don't watch out, and one size does not fit all.

 Based on my experiences this last cycle, I think nova is pretty well
broken down already. Each driver is practically its own little group, the
scheduler is surprisingly well fenced off, as are cells. As for our
sub-team I think we could have moved much faster if we had been able to
approve our own driver blueprints some of which have been in review since
May and others which have 30+ revisions updated every few days hoping for
attention. It's part of why I moved to watching over people's work instead
of doing my own and I now spend most of my time giving feedback to reviews
other people are working on and seeking out expert opinions on other
people's efforts.

Updating patches for attention often does the opposite for me.

When I see a patch set being revised every few days, that makes me think
(perhaps incorrectly) that the patch is still in active development, and I am
inclined to review something else.

On a related note, I really like it when the developer adds a gerrit comment
saying why the revision was made; that makes my life as a reviewer easier.


 It's not a pleasant place to be and every time I pick up something to
work on I either get pulled away or someone else picks up the job and
finishes before I can even get started. I imagine this is much like what it
is to be a core developer and that this contest of interest is the same
strain the core-reviewers feel. You end up picking your own work and

Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-28 Thread Russell Bryant
On 08/28/2013 05:18 AM, Daniel P. Berrange wrote:
 On Wed, Aug 28, 2013 at 06:00:50PM +1000, Michael Still wrote:
 On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
 Hi all,

 We've been working hard during the last couple of weeks with some
 people. Brian Waldon helped a lot designing the Glance integration and
 driver testing. Dean Troyer helped a lot on bringing Docker support in
 Devstack[1]. On top of that, we got a lot of feedback on the Nova code
 review, which definitely helped to improve the code.

 The blueprint[2] explains what Docker brings to Nova and how to use it.

 I have to say that this blueprint is a fantastic example of how we
 should be writing design documents. It addressed almost all of my
 questions about the integration.
 
 Yes, Sam (and any of the other Docker guys involved) have been great at
 responding to reviewers' requests to expand their design document. The
 latest update has really helped in understanding how this driver works
 in the context of openstack from an architectural and functional POV.

They've been great in responding to my requests, as well.  The biggest
thing was that I wanted to see devstack support so that it's easily
testable, both by developers and by CI.  They delivered.

So, in general, I'm good with this going in.  It's just a matter of
getting the code review completed in the next week before feature
freeze.  I'm going to try to help with it this week.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-28 Thread Doug Hellmann
On Tue, Aug 27, 2013 at 12:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/27/2013 04:32 AM, Boris Pavlovic wrote:

 Jay,

 I should probably share to you about our work around DB.

 Migrations should be run only in production and only for production
 backends (e.g. psql and mysql)
 In tests we should use Schemas created by Models
 (BASE.metadata.create_all())


 Agree on both.


 We are not able to use this approach at the moment because we don't have
 any mechanism to check that MODELS and SCHEMAS are EQUAL.
 And actually MODELS and SCHEMAS are DIFFERENT.


 Sorry, I don't understand the connection... how does not having a codified
 way of determining the difference between model and schema (BTW, this does
 exist in sqlalchemy-migrate... look at the compare_model_to_db method) not
 allow you to use metadata.create_all() in tests or mean that you can't run
 migrations only in production?


  E.g. in Ceilometer we have a BP that syncs models and migrations:
 https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
 (in other projects we are doing the same)

  And also we are working on (oslo) generic tests that check that
 models and migrations are equal:
 https://review.openstack.org/#/c/42307/


 OK, cool.


  So our roadmap (in this case) is:
 1) Soft switch to alembic (with code that allows having sqla-migrate
 and alembic migrations at the same time)


 I don't see the point in this at all... I would rather see patches that
 just switch to Alembic and get rid of SQLAlchemy-migrate. Create an initial
 Alembic migration that has the last state of the database schema under
 SQLAlchemy-migrate... and then delete SA-Migrate.
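
For illustration, such an initial Alembic revision is just one script of
roughly the following shape (the table here is a placeholder, not
Ceilometer's actual schema):

    """Initial schema: final state of the SQLAlchemy-migrate repo."""

    # revision identifiers, used by Alembic.
    revision = '000000000001'
    down_revision = None

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        # Recreate the last sqlalchemy-migrate schema in a single step.
        op.create_table(
            'meter',  # placeholder table
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('name', sa.String(255)),
        )


    def downgrade():
        op.drop_table('meter')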


We had a rather long discussion about this on the mailing list a while
back. We decided not to spend time changing the existing migrations because
we didn't want to introduce differences for anyone doing continuous
deployment. The work to add alembic was supposed to mark the soft-switch,
and it looks like you're the (unlucky) first person to try to create an
actual alembic migration script.

We have an agenda item on the ceilometer meeting for today to discuss what
to do. At this point I think we should stick with sqlalchemy-migrate to
avoid causing delays before the H3 deadline.

Doug




  2) Sync Models and Migrations (fix DB schemas also)
 3) Add from oslo generic test that checks all this stuff
 4) Use BASE.create_all() for Schema creation instead of migrations.


 This is already done in some projects, IIRC... (Glance used to be this
 way, at least)

  But in OpenStack it is not so simple to implement such huge changes, so it
 takes some time =)


 Best regards,
 Boris Pavlovic
 ---
 Mirantis Inc.










 On Tue, Aug 27, 2013 at 12:02 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 On 08/26/2013 03:40 PM, Herndon, John Luke (HPCS - Ft. Collins) wrote:

 Jay -

  It looks like there is an error in the migration script that causes
 it to abort:

 AttributeError: 'ForeignKeyConstraint' object has no attribute
 'drop'

 My guess is the migration runs on the first test, creates event
 types
 table fine, but exits with the above error, so migration is not
 complete. Thus every subsequent test tries to migrate the db,
 and
 notices that event types already exists.


 I'd corrected that particular mistake and pushed an updated
 migration script.

 Best,
 -jay



 -john

 On 8/26/13 1:15 PM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 I just noticed that every single test case for SQL-driver
 storage is
 executing every single migration upgrade before every single
 test case
 run:

 https://github.com/openstack/ceilometer/blob/master/ceilometer/tests/db.py#L46

 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L153

 instead of simply creating a new database schema from the models in the
 current source code base using a call to
 sqlalchemy.MetaData.create_all().
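
For comparison, the schema-from-models setup Jay describes is roughly the
following; the models module path here is an assumption:

    import sqlalchemy

    # Assumed location of the declarative models.
    from ceilometer.storage.sqlalchemy import models


    def setup_test_schema(url='sqlite://'):
        # Build the schema directly from the current models instead of
        # replaying every migration before every test.
        engine = sqlalchemy.create_engine(url)
        models.Base.metadata.create_all(engine)
        return engine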


 This results in re-running migrations over and over again,
  

Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Sandy Walsh


On 08/28/2013 10:58 AM, Thierry Carrez wrote:
 Robert Collins wrote:
 So I'd like to throw two ideas into the mix.

 Firstly, consider having a rota - ideally 24x5 but that will need some
 more geographical coverage I suspect for many projects - of folk who
 spend a dedicated time period only reviewing.
 
 We have been doing that in the past for Nova, with little success. The
 reason why reviewday is called reviewday is because... well... there
 were review days.
 
 The wiki page was a bit eaten by the wiki conversion, but you can still
 read it at:
 
 https://wiki.openstack.org/wiki/Nova/ReviewDays
 
 In the end, a strict rotation didn't work out because people just didn't
 review on their review day, but rather when they have one hour free
 waiting for a patch to pass gate or whatever. In the end, the rotation
 gave us way worse results than random ad-hoc reviewing, because people
 would stop reviewing on days other than their review day, and would
 regularly skip their review day altogether.
 
 Furthermore, there is some specialization going on: I prefer the two Xen
 experts in nova-core to review one hour every two days rather than one
 day every two weeks... because then Xen patches get better review
 roundtrip times.
 
 So I'm not convinced *at all* that a reboot of this would yield better
 results.

+1

That said, I think the reason my reviews dropped off from Nova was not
having a dedicated day for it. But that was my fault, not the fault of
the process. With Ceilometer, I try to set aside one fixed day a week
for reviews (with moderate success ;)


 Launchpad [the
 project, not the site] did this with considerable success : every
 qualified reviewer committed to a time slot and didn't *try* to code -
 they focused on reviews.
 
 The key difference is that every qualified reviewer was employed by
 the same company, and the review day was enforced by their management.
 The amount of patches is also significantly lower, and there is less
 specialization effect.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Gary Kotton
Hi,
I am not sure that there is a good solution. I guess that we all need to 
'vasbyt' (that is Afrikaans for bite the bullet) and wait for the code posted 
to be reviewed. In Neutron, when we were heading towards the end of a cycle and 
there were a ton of BPs being added, the PTL would ensure that there were at 
least two reviewers on each BP. This would address the problem in two ways:
1. Accountability for the review process in the critical time period
2. The coder was able to have a person that he/she could be in touch with. 
The above would enhance the cadence of the reviews.
I personally am spending a few hours a day reviewing code. I hope that it is 
helping move things forward. A review not only means looking at the code 
(in some cases it is simple); it means running and testing the code. In some 
cases it is not possible to test (for example, a Mellanox vif driver). 
In cases where a reviewer does not have an option to test the code, would a 
tempest run help the reviewer with his/her decision?
Thanks, and a luta continua
Gary



 -Original Message-
 From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
 Sent: Wednesday, August 28, 2013 5:15 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Nova] Frustrations with review wait times
 
 
 
 On 08/28/2013 10:58 AM, Thierry Carrez wrote:
  Robert Collins wrote:
  So I'd like to throw two ideas into the mix.
 
  Firstly, consider having a rota - ideally 24x5 but that will need
  some more geographical coverage I suspect for many projects - of folk
  who spend a dedicated time period only reviewing.
 
  We have been doing that in the past for Nova, with little success. The
  reason why reviewday is called reviewday is because... well...
  there were review days.
 
  The wiki page was a bit eaten by the wiki conversion, but you can
  still read it at:
 
  https://wiki.openstack.org/wiki/Nova/ReviewDays
 
  In the end, a strict rotation didn't work out because people just
  didn't review on their review day, but rather when they have one hour
  free waiting for a patch to pass gate or whatever. In the end, the
  rotation gave us way worse results than random ad-hoc reviewing,
  because people would stop reviewing on days other than their review
  day, and would regularly skip their review day altogether.
 
  Furthermore, there is some specialization going on: I prefer the two
  Xen experts in nova-core to review one hour every two days rather than
  one day every two weeks... because then Xen patches get better review
  roundtrip times.
 
  So I'm not convinced *at all* that a reboot of this would yield better
  results.
 
 +1
 
 That said, I think the reason my reviews dropped off from Nova was not
 having a dedicated day for it. But that was my fault, not the fault of the
 process. With Ceilometer, I try to set aside one fixed day a week for reviews
 (with moderate success ;)
 
 
  Launchpad [the
  project, not the site] did this with considerable success : every
  qualified reviewer committed to a time slot and didn't *try* to code
  - they focused on reviews.
 
  The key difference is that every qualified reviewer was employed by
  the same company, and the review day was enforced by their
 management.
  The amount of patches is also significantly lower, and there is less
  specialization effect.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] mid-Icehouse-cycle meet-up (was: [Nova] Interested in a mid-Icehouse-cycle Nova meet-up?)

2013-08-28 Thread Julien Danjou
On Tue, Aug 27 2013, Thierry Carrez wrote:

 Daniel P. Berrange wrote:
 Is openstack looking to have a strong presence at FOSDEM 2014? I didn't
 make it to FOSDEM this year, but IIUC, there were quite a few openstack
 contributors & talks in 2013.

 Yes, we are aiming for a devroom again at FOSDEM this year.

 IOW, should we consider holding the meetup in Brussels just before/after
 FOSDEM, so that people who want/need to attend both can try to maximise
 utilization of their often limited travel budgets and/or minimise the
 number of days lost to travelling ?

 I would certainly like that, but I'm not sure the center of gravity for
 Nova contributors is in Europe :)

That could work for Ceilometer folks, I think we could gather half the
core team at FOSDEM ;-)
If people are interested, let's keep it in mind and try to come up
with something by then.

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-28 Thread James Slagle
+1


On Tue, Aug 27, 2013 at 5:25 PM, Robert Collins
robe...@robertcollins.netwrote:

 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

 - Derek is reviewing fairly regularly and has got a sense of the
 culture etc now, I think.

 So - calling for votes for Derek to become a TripleO core reviewer!

 I think we're nearly at the point where we can switch to the 'two
 +2's' model - what do you think?

 Also tsk! to those cores who aren't reviewing as regularly :)

 Cheers,
 Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
-- James Slagle
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][tempest][rdo] looking for opinions on change 43298

2013-08-28 Thread David Ripton

On 08/27/2013 03:52 PM, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/43298/

Is attempting to fix a bug where a tempest test fails when nova-manage
--version is different from nova-manage version when using a RHEL 6
installation rather than devstack.

Pavel points out an RDO bug that was filed back in April to address the
issue: https://bugzilla.redhat.com/show_bug.cgi?id=952811

That RDO bug hasn't gotten any attention though (I wasn't aware of it
when I reported the launchpad bug).

So my question is, is this worth changing in Tempest, or should we expect
that nova-manage --version will always equal nova-manage version?
I'm not even really sure how they are getting their values; one
appears to be coming from the python distribution and one from the rpm
(looks like argparse must do something there).


My opinion is that nova-manage version and nova-manage --version 
should return the same thing, because user interfaces should look like 
they were designed on purpose, and nobody would intentionally design 
those two almost identical commands to return different things.  I think 
it's a bug that needs to be fixed in nova-manage rather than something 
Tempest should have to work around.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread David Ripton

On 08/28/2013 05:10 AM, Daniel P. Berrange wrote:

IOW we should
prioritize review of work whose authors submitted earlier to encourage
good practice with early submission.


+1.

Can we reconfigure Gerrit to show oldest first rather than newest first 
by default?


(next-review does this.  next-review is awesome.  Everyone should try 
next-review.  But we should try to make Gerrit do the right thing too, 
just in case some people prefer it to next-review.)


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread David Kranz

On 08/28/2013 10:31 AM, Gary Kotton wrote:

Hi,
I am not sure that there is a good solution. I guess that we all need to 
'vasbyt' (that is Afrikaans for bite the bullet) and wait for the code posted 
to be reviewed. In Neutron, when we were heading towards the end of a cycle and 
there were a ton of BPs being added, the PTL would ensure that there were at 
least two reviewers on each BP. This would address the problem in two ways:
1. Accountability for the review process in the critical time period
2. The coder was able to have a person that he/she could be in touch with.
The above would enhance the cadence of the reviews.
I personally am spending a few hours a day reviewing code. I hope that it is 
helping move things forward. A review not only means looking at the code 
(in some cases it is simple); it means running and testing the code. In some 
cases it is not possible to test (for example, a Mellanox vif driver).
In cases where a reviewer does not have an option to test the code, would a 
tempest run help the reviewer with his/her decision?
Thanks, and a luta continua
Gary


Well, in general almost all of the tempest tests are gating on all 
projects. So if jenkins says +1 then tempest has passed. The unfortunate 
exception is that the jenkins job that runs all tempest tests for a 
neutron configuration has never passed and was non-voting. This week it 
was further demoted to the experimental queue, where it will only run 
if someone tells it to :-(. I also suspect that tempest coverage of 
neutron is not as good as for other projects.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Joshua Harlow
Shrinking that rotation granularity would be reasonable too. Rotating once every 2 
weeks or some other time period still seems useful to me.

Sent from my really tiny device...

On Aug 28, 2013, at 3:43 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Aug 28, 2013 at 10:29:23PM +1200, Robert Collins wrote:
 On 28 August 2013 21:13, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Aug 28, 2013 at 03:43:21AM +, Joshua Harlow wrote:
 
 For a big project like nova the workload could be spread out more
 like that.
 
 I don't think any kind of rotation system like that is really
 practical. Core team members need to have the flexibility to balance
 their various conflicting workloads in a way that maximises their
 own productivity.
 
 So does everyone else, surely? Are you saying 'I don't think I can
 commit to regular reviewing', or are you saying 'all reviewers will be
 unable to commit to regular reviewing'? Or something else?
 
 No, IIUC, Joshua was suggesting that core team members spend one cycle
 doing reviews only, with no coding, and then reverse for the next cycle. 
 That is just far too coarse/crude. Core team members need to be free to
 balance their time between reviews and coding work on an ongoing basis,
 just as any other member of the community can.
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-28 Thread Sam Alba
Thanks a lot everyone for the nice feedback. I am going to work hard
to get all those new comments addressed to be able to re-submit a new
patchset today or tomorrow (at the latest).

On Wed, Aug 28, 2013 at 7:02 AM, Russell Bryant rbry...@redhat.com wrote:
 On 08/28/2013 05:18 AM, Daniel P. Berrange wrote:
 On Wed, Aug 28, 2013 at 06:00:50PM +1000, Michael Still wrote:
 On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
 Hi all,

 We've been working hard during the last couple of weeks with some
 people. Brian Waldon helped a lot designing the Glance integration and
 driver testing. Dean Troyer helped a lot on bringing Docker support in
  Devstack[1]. On top of that, we got a lot of feedback on the Nova code
  review, which definitely helped to improve the code.

 The blueprint[2] explains what Docker brings to Nova and how to use it.

 I have to say that this blueprint is a fantastic example of how we
 should be writing design documents. It addressed almost all of my
 questions about the integration.

  Yes, Sam (and any of the other Docker guys involved) have been great at
 responding to reviewers' requests to expand their design document. The
 latest update has really helped in understanding how this driver works
 in the context of openstack from an architectural and functional POV.

 They've been great in responding to my requests, as well.  The biggest
 thing was that I wanted to see devstack support so that it's easily
 testable, both by developers and by CI.  They delivered.

 So, in general, I'm good with this going in.  It's just a matter of
 getting the code review completed in the next week before feature
 freeze.  I'm going to try to help with it this week.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Davanum Srinivas
Shawn,

If each little group had at least one active Nova core member, I think it
would speed things up a lot, IMHO.

-- dims


On Wed, Aug 28, 2013 at 9:40 AM, Shawn Hartsock hartso...@vmware.comwrote:

 tl;dr at the end... I can ramble on a bit.

 I agree with Daniel.

 I'm not a core reviewer, but I'm trying to think like one. Over the last
 few weeks I've divested myself of almost all coding tasks, instead trying
 to increase the size of the community that is actively contributing to my
 area of expertise. I have indeed gone batty! I've caught myself a few
 times, and the frustration of feeling like I couldn't contribute code even
 if I wanted to is getting to be a bit too much.

  The common refrain I've heard is, "That's OpenSource," as if this is a
  natural state of affairs for OpenSource projects. I've been either on or
 around OpenSource projects for nearly 20 years at this point and I really
 feel this doesn't have to be the case. Any project is doomed to have an
 internal structure that mirrors the organization that maintains it. That
 means software beyond a certain scale becomes part engineering and part
 state-craft.

 In OpenSource projects that I have worked on recently, the way scale was
 handled was to break up the project into pieces small enough for teams of 1
 to 5 to handle. The core framework developers worked on exposing API to the
 plugin developers. Each plugin developer would then focus on how their
 plugin could expose both additional API and leverage framework API.
 Feedback went from the application developers to the plugin developers and
 up to the core developers. This whole divide-and-conquer strategy was aided
 by the fact that we could lean heavily on a custom dependency management
 and code/binary distribution system leveraged inside the framework itself.
 It meant that package structure and distribution could be controlled by the
 community directly to suit its needs. That makes a powerful combination for
 building a flexible system but requires a fair amount of infrastructure in
 code and hardware.

 It wasn't a perfect solution. This strategy meant that an application or
 deployment became the coordination of plugins mixed at run-time. While
 efforts were made to test common combinations, it was impossible to test
 all combinations. That often meant people in the field were using
 combinations that nobody on official teams had ever considered. Because the
 plugins weren't on the same release cycle as the core framework (and even
 in different code repositories and release infrastructures) a plugin could
 release weekly or once every few years depending on its needs and
 sub-community.

 There is a separate dysfunction you'll see if you go down this path. Core
 API must necessarily lead plugin implementation ... which means you
 sometimes get nonsense API with no backing. To solve this a few plugins are
 deemed core plugins and march in-step with the API release cycle. Then
 there's the added burden of longer backward compatibility cycles that
 necessarily stretch longer and longer leaving deprecated API lying around
 for years as plugin developers are coaxed into leaving them behind (and
 subsequent plugin users are coaxed to upgrade). Some things slow down while
 others speed up. The core API's evolution slows, the plugin/driver speeds
 up. Is that a fair trade off? It's a judgement call. No right answer.

 In the end you trade one kind of problem for another and one kind of
 coordination for another. There's no clean answer that just works for this
 kind of problem and its why we have so many different kinds of governments
 in the world. Because, ultimately, that's what human coordination becomes
 if you don't watch out and one size does not fit all.

 Based on my experiences this last cycle, I think nova is pretty well
 broken down already. Each driver is practically its own little group, the
 scheduler is surprisingly well fenced off, as are cells. As for our
 sub-team I think we could have moved much faster if we had been able to
 approve our own driver blueprints some of which have been in review since
 May and others which have 30+ revisions updated every few days hoping for
 attention. It's part of why I moved to watching over people's work instead
 of doing my own and I now spend most of my time giving feedback to reviews
 other people are working on and seeking out expert opinions on other
 people's efforts.

 It's not a pleasant place to be and every time I pick up something to work
 on I either get pulled away or someone else picks up the job and finishes
 before I can even get started. I imagine this is much like what it is to be
 a core developer and that this contest of interest is the same strain the
 core-reviewers feel. You end up picking your own work and neglecting others
 or falling on the sword so other people can do their work and doing none of
 your own. Frankly, I don't want to use this strategy next cycle because it
 is far too unsatisfying 

Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread David Kranz

On 08/28/2013 10:31 AM, Gary Kotton wrote:

Hi,
I am not sure that there is a good solution. I guess that we all need to 
'vasbyt' (that is Afrikaans for bite the bullet) and wait for the code posted 
to be reviewed. In Neutron, when we were heading towards the end of a cycle and 
there were a ton of BPs being added, the PTL would ensure that there were at 
least two reviewers on each BP. This would address the problem in two ways:
1. Accountability for the review process in the critical time period
2. The coder was able to have a person that he/she could be in touch with.
The above would enhance the cadence of the reviews.
I personally am spending a few hours a day reviewing code. I hope that it is 
helping move things forward. A review not only means looking at the code 
(there are some cases where that is simple), but also running and testing the 
code. In some cases it is not possible to test (for example, a Mellanox vif 
driver).
In cases when a reviewer does not have an option to test the code would a 
tempest run help the reviewer with his/her decision?
Thanks, and a luta continua
Gary


Just to clarify my last message, there is still a gating job called 
gate-tempest-devstack-vm-neutron 
http://logs.openstack.org/58/43658/4/check/gate-tempest-devstack-vm-neutron/10f1a5a

but it only runs the smoke tests, which are a small subset of tempest.

 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Russell Bryant
On 08/28/2013 12:25 PM, Davanum Srinivas wrote:
 If each little group had at least one active Nova core member, i think
 it would speed things up way faster IMHO. 

Agreed, in theory.  However, we should not add someone just for the sake
of having someone on the team from a certain area.  They need to be held
to the same standards as the rest of the team.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Davanum Srinivas
+1000 Russell

-- dims


On Wed, Aug 28, 2013 at 1:06 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/28/2013 12:25 PM, Davanum Srinivas wrote:
  If each little group had at least one active Nova core member, i think
  it would speed things up way faster IMHO.

 Agreed, in theory.  However, we should not add someone just for the sake
 of having someone on the team from a certain area.  They need to be held
 to the same standards as the rest of the team.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] endpoint registration

2013-08-28 Thread Duncan Thomas
The downside of doing version discovery in the client is that it adds
a third round trip... though the client can cache the supported versions,
I guess.
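
For illustration, a minimal sketch of that caching, assuming a generic
requests-style session (the function and cache names here are invented,
not any particular client's API):

    import requests

    _VERSION_CACHE = {}  # root endpoint URL -> parsed version document

    def get_supported_versions(session, root_url):
        # One discovery round trip per endpoint per process; later
        # lookups are served from the cache.
        if root_url not in _VERSION_CACHE:
            resp = session.get(root_url)
            _VERSION_CACHE[root_url] = resp.json().get('versions', [])
        return _VERSION_CACHE[root_url]

    session = requests.Session()
    versions = get_supported_versions(session, 'http://keystone:5000/')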

On 18 August 2013 00:14, Joshua Harlow harlo...@yahoo-inc.com wrote:
 +3

 Sent from my really tiny device...

 On Aug 17, 2013, at 8:33 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:




 On Sat, Aug 17, 2013 at 4:18 AM, Julien Danjou jul...@danjou.info wrote:

 On Fri, Aug 16 2013, Doug Hellmann wrote:

  If you're saying that you want to register URLs without version info
  embedded in them, and let the client work that part out by talking to
  the
  service in question (or getting a version number from the caller), then
  yes, please.

 Yes yes yes. I already started for Swift about that a while back which
 got 0 reply.

 There's no point in registering URL with version suffix and others
 stuff. We should stop doing that in all places, including documentation.


 +2!



 --
 Julien Danjou
 /* Free Software hacker * freelance consultant
http://julien.danjou.info */

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Johannes Erdfelt
On Wed, Aug 28, 2013, Russell Bryant rbry...@redhat.com wrote:
 On 08/28/2013 12:25 PM, Davanum Srinivas wrote:
  If each little group had at least one active Nova core member, i think
  it would speed things up way faster IMHO. 
 
 Agreed, in theory.  However, we should not add someone just for the sake
 of having someone on the team from a certain area.  They need to be held
 to the same standards as the rest of the team.

Do you mean the nova-core standards?

I had a soft understanding that nova-core members were trusted to give
+2 and -2 reviews and that they actually needed to do reviews.

I did a quick search and didn't find anything more than that, but maybe
I missed a web page somewhere.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Scheduler support for PCI passthrough

2013-08-28 Thread Jiang, Yunhong
Gary
 Firstly, thanks very much for your review.
 The pci_stats are calculated by the resource tracker on the compute 
node and also saved in the compute_node table. I think the scheduler currently 
depends on the information provided by the compute_node table, so this method 
should fit into the current framework.
 Please notice that the scheduler only decides which host can meet 
the requirement; it's the resource tracker on the compute node that does the 
real device allocation. So if the scheduler does not have the latest 
information, it may either fail to find a host, or pick a host based on stale 
information, in which case the retry mechanism should kick in. Anyway, this is 
the same as other compute node information like free_ram or free_vcpus, right?

 But you do remind me of one thing: a hot plug could happen after 
the resource tracker selects the device and before the instance is actually 
created. Possibly the virt driver needs to re-check the requirement before 
creating the domain. But this race-condition chain never ends -- after all, 
there is a window between the virt driver's check and the instance creation, 
and I have no idea how we can guarantee this.
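
For what it's worth, a rough sketch of the kind of defensive check the 
filter could make when a compute node has not reported pci_stats yet 
(illustrative only -- attribute and method names are my assumptions, not 
necessarily the exact code under review):

    from nova.scheduler import filters

    class PciPassthroughFilter(filters.BaseHostFilter):
        def host_passes(self, host_state, filter_properties):
            pci_requests = filter_properties.get('pci_requests')
            if not pci_requests:
                return True   # nothing PCI-related was requested
            if host_state.pci_stats is None:
                # Old compute node or driver not yet reporting stats:
                # fail safe rather than guess.
                return False
            return host_state.pci_stats.support_requests(pci_requests)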

Thanks
--jyh

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Wednesday, August 28, 2013 2:19 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Nova] Scheduler support for PCI passthrough

Hi,
Whilst reviewing the code I think that I have stumbled on an issue (I hope that 
I am mistaken). The change set (https://review.openstack.org/#/c/35749/) 
expects pci stats to be returned from the host. There are a number of issues 
here that concern me, and I would like to know what the process is for 
addressing the fact that the compute node may not provide these statistics: for 
example, the driver may not have been updated to return the pci stats, or the 
scheduler may have been upgraded prior to the compute node (what is the process 
for the upgrade?).
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-28 Thread Vishvananda Ishaya

On Aug 26, 2013, at 6:14 PM, Maru Newby ma...@redhat.com wrote:

 
 On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:
 
 Hi Developers,
 
 Let me explain my point of view on this topic and please share your thoughts 
 in order to merge this new feature ASAP.
 
 My understanding is that multi-host is nova-network HA  and we are 
 implementing this bp 
 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost for the 
 same reason.
 So, If in neutron configuration admin enables multi-host:
 etc/dhcp_agent.ini
 
 # Support multi host networks
 # enable_multihost = False
 
 Why do tenants needs to be aware of this? They should just create networks 
 in the way they normally do and not by adding the multihost extension.
 
 I was pretty confused until I looked at the nova-network HA doc [1].  The 
 proposed design would seem to emulate nova-network's multi-host HA option, 
 where it was necessary to both run nova-network on every compute node and 
 create a network explicitly as multi-host.  I'm not sure why nova-network was 
 implemented in this way, since it would appear that multi-host is basically 
 all-or-nothing.  Once nova-network services are running on every compute 
 node, what does it mean to create a network that is not multi-host?

Just to add a little background to the nova-network multi-host: The fact that 
the multi_host flag is stored per-network as opposed to a configuration was an 
implementation detail. While in theory this would support configurations where 
some networks are multi_host and other ones are not, I am not aware of any 
deployments where both are used together.

That said, If there is potential value in offering both, it seems like it 
should be under the control of the deployer not the user. In other words the 
deployer should be able to set the default network type and enforce whether 
setting the type is exposed to the user at all.

Also, one final point. In my mind, multi-host is strictly better than single 
host, if I were to redesign nova-network today, I would get rid of the single 
host mode completely.

Vish

 
 So, to Edgar's question - is there a reason other than 'be like nova-network' 
 for requiring neutron multi-host to be configured per-network?
 
 
 m.
 
 1: 
 http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html
 
 
 I could be totally wrong and crazy, so please provide some feedback.
 
 Thanks,
 
 Edgar
 
 
 From: Yongsheng Gong gong...@unitedstack.com
 Date: Monday, August 26, 2013 2:58 PM
 To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen 
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro 
 MOTOKI amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru Newby 
 ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore Orlando 
 sorla...@nicira.com, Sumit Naiksatam sumit.naiksa...@bigswitch.com, Mark 
 McClain mark.mccl...@dreamhost.com, Gary Kotton gkot...@vmware.com, 
 Robert Kukura rkuk...@redhat.com
 Cc: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: About multihost patch review
 
 Hi,
 Edgar Magana has commented to say:
 'This is the part that for me is confusing and I will need some 
 clarification from the community. Do we expect to have the multi-host 
 feature as an extension or something that will natural work as long as the 
 deployment include more than one Network Node. In my opinion, Neutron 
 deployments with more than one Network Node by default should call DHCP 
 agents in all those nodes without the need to use an extension. If the 
 community has decided to do this by extensions, then I am fine' at
 https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostnetwork.py
 
 I have commented back, what is your opinion about it?
 
 Regards,
 Yong Sheng Gong
 
 
 On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery (kmestery) 
 kmest...@cisco.com wrote:
 Hi Yong:
 
 I'll review this and try it out today.
 
 Thanks,
 Kyle
 
 On Aug 15, 2013, at 10:01 PM, Yongsheng Gong gong...@unitedstack.com 
 wrote:
 
 The multihost patch is there for a long long time, can someone help to 
 review?
 https://review.openstack.org/#/c/37919/
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] MultiStrOpt opts can be problematic...

2013-08-28 Thread Dan Prince
So I recently ran into a fun config issue in trying to configure Nova to work 
w/ Ceilometer using Puppet:

  https://bugs.launchpad.net/puppet-ceilometer/+bug/1217867

Today, what you need to do to make Nova work with ceilometer is add this to 
your nova.conf file:

 notification_driver=nova.openstack.common.notifier.rpc_notifier
 notification_driver=ceilometer.compute.nova_notifier

As it turns out, multi-valued config entries aren't much fun to deal with in the 
config-management world. The puppet nova_config provider doesn't (yet) have a 
good way to support them. The core of the issue is that they pose all sorts of 
problems in knowing whether a given tool should modify the existing config 
values.
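
For context, this is roughly how a MultiStrOpt behaves with the oslo.config 
of the time -- repeated keys accumulate into a list (a minimal sketch; only 
the option name is taken from the nova example above):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.MultiStrOpt('notification_driver', default=[],
                        help='Driver or drivers to handle notifications'),
    ])

    # With the two notification_driver lines above in nova.conf,
    # parsing yields a list rather than a single string:
    CONF(['--config-file', 'nova.conf'])
    print(CONF.notification_driver)
    # ['nova.openstack.common.notifier.rpc_notifier',
    #  'ceilometer.compute.nova_notifier']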


In the short term we can look into doing one of these Puppet land:

 - Using a conf.d directory for config (would require a change to the 
nova-compute init script to use --config-dir)
 - Stringing together various resources in puppet to make it work 
(file_line, augeas, etc.)
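
To make the conf.d idea concrete, a hypothetical layout (file names invented 
for illustration) where each tool owns its own snippet and never has to edit 
a shared multi-valued key, assuming multi-valued options accumulate across 
the files in the directory:

    # /etc/nova/nova.conf.d/10-notifier.conf
    [DEFAULT]
    notification_driver = nova.openstack.common.notifier.rpc_notifier

    # /etc/nova/nova.conf.d/20-ceilometer.conf
    [DEFAULT]
    notification_driver = ceilometer.compute.nova_notifier

    # started with: nova-compute --config-dir /etc/nova/nova.conf.d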

Long term, though, I'm thinking: what if MultiStrOpts were to go away? They seem 
to be more trouble than they are worth...

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Marconi

2013-08-28 Thread Joe Gordon
On Thu, Aug 22, 2013 at 12:29 PM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

  What was wrong with qpid, rabbitmq, activemq, zeromq, ${your favorite
  queue here} that required marconi?

 That's a good question. The features supported by AMQP brokers, ZMQ, and
 Marconi certainly do overlap in some areas. At the same time, however, each
 of these options offers distinct features that may or may not align with
 what a web developer is trying to accomplish.

 Here are a few of Marconi's unique features, relative to the other options
 you mentioned:

   *  Multi-tenant
   *  Keystone integration
   *  100% Python
   *  First-class, stateless, firewall-friendly HTTP(S) transport driver
   *  Simple protocol, easy for clients to implement
   *  Scales to an unlimited number of queues and clients
   *  Per-queue stats, useful for monitoring and autoscale
   *  Tag-based message filtering (planned)

 Relative to SQS, Marconi:

   *  Is open-source and community-driven
   *  Supports private and hybrid deployments
   *  Offers hybrid pub-sub and producer-consumer semantics
   *  Provides a clean, modern HTTP API
   *  Can route messages to multiple queues (planned)
   *  Can perform custom message transformations (planned)

 Anyway, that's my $0.02 - others may chime in with their own thoughts.


I assume the rabbitmq vs sqs debate (
http://notes.variogr.am/post/67710296/replacing-amazon-sqs-with-something-faster-and-cheaper)
is the same for rabbitmq vs marconi?



 @kgriffs


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Joe Gordon
On Wed, Aug 28, 2013 at 9:12 AM, Alex Glikson glik...@il.ibm.com wrote:

 It seems that the main concern was that the overridden scheduler
 properties are taken from the flavor, and not from the aggregate. In fact,
 there was a consensus that this is not optimal.

 I think that we can still make some progress in Havana towards
 per-aggregate overrides, generalizing on the recently merged changes that
 do just that -- for cpu and for memory with FilterScheduler (and leveraging
 a bit from the original multi-sched patch). As follows:
 1. individual filters will call get_config('abc') instead of CONF.abc
 (already implemented in the current version of the multi-sched patch, e.g.,
 https://review.openstack.org/#/c/37407/30/nova/scheduler/filters/core_filter.py
 and
 https://review.openstack.org/#/c/37407/30/nova/scheduler/filters/io_ops_filter.py
 )
 2. get_config() will check whether abc is defined in the aggregate, and if
 so will return the value from the aggregate, and CONF.abc otherwise
 (already implemented in recently merged AggregateCoreFilter and
 AggregateRamFilter -- e.g.,
 https://review.openstack.org/#/c/33949/2/nova/scheduler/filters/core_filter.py
 ).
 3. add a global flag that would enable or disable aggregate-based overrides


Why can't something like this be done with just different filters, such as
AggregateRamFilter?



 This seems to be a relatively simple refactoring of existing code, still
 achieving an important portion of the original goals of this blueprint.
 Of course, we should still discuss the longer-term plan around scheduling
 policies at the summit.

 Thoughts?

 Regards,
 Alex




 From:Russell Bryant rbry...@redhat.com
 To:OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org,
 Date:27/08/2013 10:48 PM
 Subject:[openstack-dev] [Nova] multiple-scheduler-drivers
 blueprint
 --



 Greetings,

 One of the important things to strive for in our community is consensus.
 When there's not consensus, we should take a step back and see if we
 need to change directions.

 There has been a lot of iterating on this feature, and I'm afraid we
 still don't have consensus around the design.  Phil Day has been posting
 some really good feedback on the review.  I asked Joe Gordon to take a
 look and provide another opinion.  He agreed with Phil that we really
 need to have scheduler policies be a first class API citizen.

 So, that pushes this feature out to Icehouse, as it doesn't seem
 possible to get this done in the required timeframe for Havana.

 If you'd really like to push to get this into Havana, please make your
 case.  :-)

 Thanks,

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-28 Thread Adam Young
As an outsider to OOO, but a Keystone core, let me endorse Derek's work 
on provioning in general.  He is an outstanding developer.



On 08/28/2013 10:31 AM, James Slagle wrote:

+1


On Tue, Aug 27, 2013 at 5:25 PM, Robert Collins 
robe...@robertcollins.net mailto:robe...@robertcollins.net wrote:


http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

- Derek is reviewing fairly regularly and has got a sense of the
culture etc now, I think.

So - calling for votes for Derek to become a TripleO core reviewer!

I think we're nearly at the point where we can switch to the 'two
+2's' model - what do you think?

Also tsk! to those cores who aren't reviewing as regularly :)

Cheers,
Rob

--
Robert Collins rbtcoll...@hp.com mailto:rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
-- James Slagle
--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] endpoint registration

2013-08-28 Thread Dean Troyer
On Wed, Aug 28, 2013 at 12:18 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 The downside of doing version discovery in the client is that it adds
 a third round trip... though the client can cache the support versions
 I guess.


That's only for the Identity version discovery.  Add more round trips for
additional APIs to be used.

Also, only Keystone allows unauthenticated access to /vXX endpoints.  To
query any other service you need to either know the root endpoint
beforehand or auth to get one from the service catalog, and then you still
don't know if the version is or is not included in that endpoint (no
parsing-guessing here!) due to backward-compatibility for older deployments.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Russell Bryant
On 08/28/2013 01:22 PM, Johannes Erdfelt wrote:
 On Wed, Aug 28, 2013, Russell Bryant rbry...@redhat.com wrote:
 On 08/28/2013 12:25 PM, Davanum Srinivas wrote:
 If each little group had at least one active Nova core member, i think
 it would speed things up way faster IMHO. 

 Agreed, in theory.  However, we should not add someone just for the sake
 of having someone on the team from a certain area.  They need to be held
 to the same standards as the rest of the team.
 
 Do you mean the nova-core standards?
 
 I had a soft understanding that nova-core members were trusted to give
 +2 and -2 reviews and that they actually needed to do reviews.
 
 I did a quick search and didn't find anything more than that, but maybe
 I missed a web page somewhere.

You're right that much of this has been unwritten rules.  I would really
like to improve that.  I started this page recently, but I'm sure
there's more that could be added:

https://wiki.openstack.org/wiki/Nova/CoreTeam

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][horizon]Backend filtering in Keystone

2013-08-28 Thread Gabriel Hurley
While these are useful steps in isolation, I'm hesitant to just say "go for 
it!" because I see this as a problem that OpenStack as a whole needs to solve. 
Your implementation here is a good proof-of-concept that's likely worth vetting 
and then emulating elsewhere.

However, looking at it a different way, it adds further fragmentation to the 
filtering, sorting, paging, etc. methods that Horizon has to attempt to support 
across all the projects.

It's my intention to run a session on this at the summit and probably walk out 
of that summit with a "Horizon will support X, so if you want people to have a 
good experience with your project in Horizon you should support X too" kind of 
agreement that we can work towards across projects in Icehouse. That X will 
likely reflect what you've done here and the great discussions that happened 
recently about the possibility of doing away with pagination entirely.

If the patch is ready to go then by all means merge it and we can start playing 
with it to see where it shines and where it needs polish. I'm all for it in 
principle.

All the best,


-  Gabriel

From: Henry Nash [mailto:hen...@linux.vnet.ibm.com]
Sent: Wednesday, August 28, 2013 1:58 AM
To: Gabriel Hurley
Cc: OpenStack Development Mailing List; Dolph Mathews; Adam Young
Subject: [keystone][horizon]Backend filtering in Keystone

Hi Gabriel,

Following up on our discussions on filtering and pagination, here's where we 
stand:

1) We have a patch ready to go into H that implements a framework that lets the 
keystone backend drivers implement filters (e.g. would be included in the SQL 
SELECT rather than being a post-prcessed filter on the full list, which is what 
happens today).  See: https://review.openstack.org/#/c/43257/ . It includes the 
SQL drivers fixed up so they work with this, although it's unlikely we can get 
the LDAP one complete for H given the freeze (which just means queries to an 
LDAP-backed entity will just work as they do today).
2) The above patch also lets a cloud provider set a limit on the number of rows 
returned by a list query, to avoid excessively long responses and data in the 
case where the caller doesn't do a good job of filtering.

We have two other changes ready, but are deferring to IceHouse:

3) The inexact filtering (e.g. GET /v3/users?name__startswith=Hen; see the 
sketch below) is coded and included in 1).  However, since this is an API change 
we have it turned off, and will enable it early in IceHouse.  An API review for 
this is already posted (with you as one of the reviewers): 
https://review.openstack.org/#/c/43900/
4) A separate patch is also ready for Pagination 
(https://review.openstack.org/#/c/43581/), using the simple page and per_page 
semantics.  Given the contention over this, we'll discuss this at the HK summit
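
To illustrate 3), here is the kind of call a client could make once the 
inexact filters are enabled -- strictly a sketch; the endpoint and token are 
placeholders, and the double-underscore syntax is the one proposed in the 
API review:

    import requests

    ADMIN_TOKEN = '...'  # placeholder

    resp = requests.get(
        'http://keystone:35357/v3/users',
        params={'name__startswith': 'Hen'},  # evaluated in the backend
        headers={'X-Auth-Token': ADMIN_TOKEN})
    users = resp.json()['users']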

I wanted to gauge how advantageous 1) and 2) are to you and the Horizon team.  
Some concerns have been raised (given how close we are to the freeze) as to 
whether we should push them in.  Personally I'm OK with it, but wanted to 
balance that with real need (or not if you see these as only minor).

Henry
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Modal form without redirect

2013-08-28 Thread Gabriel Hurley
If you look at the code in the post()[1] method of the base workflow view 
you'll note that a response to a successful workflow POST is always a 
redirect[2] (caveat for when it's specifically adding data back to a field, 
which isn't relevant here).

The reason for this is that in general when you POST via a standard browser 
request you want to send back a redirect so that reloading the page, etc. 
behave correctly and don't potentially result in double-POSTs.

If you're submitting the workflow via a regular HTTP form submit POST then I'd 
say redirecting is correct; you simply want to redirect to the current page. If 
you're doing this via AJAX then you'll want to add some new code to otherwise 
signal a successful response (both to the code and to the user) and to take 
action accordingly.
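
For instance, one possible shape for the AJAX case -- strictly a sketch, not 
existing Horizon behavior, and the view name is invented:

    from django import http
    from horizon import workflows

    class TopologyLaunchView(workflows.WorkflowView):
        def post(self, request, *args, **kwargs):
            response = super(TopologyLaunchView, self).post(
                request, *args, **kwargs)
            if request.is_ajax() and response.status_code == 302:
                # Swallow the redirect and hand back a bare success
                # marker that the page's JavaScript can act on instead.
                return http.HttpResponse(status=204)
            return response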

Hope that helps,

 - Gabriel

[1] 
https://github.com/openstack/horizon/blob/master/horizon/workflows/views.py#L130
[2] 
https://github.com/openstack/horizon/blob/master/horizon/workflows/views.py#L156


 -Original Message-
 From: Toshiyuki Hayashi [mailto:haya...@ntti3.com]
 Sent: Tuesday, August 27, 2013 2:26 PM
 To: OpenStack-dev@lists.openstack.org
 Subject: [openstack-dev] [Horizon] Modal form without redirect
 
 Hi all,
 
 I'm working on customizing the modal form for the topology view, and I would like to
 prevent redirecting after submitting.
 https://github.com/openstack/horizon/blob/master/horizon/static/horizon/
 js/horizon.modals.js#L110
 According to this code, if there is no redirect header, the modal form won't
 redirect. But I couldn't figure out how to remove the redirect information from
 http header.
 For example, if I want to remove redirect from LaunchInstance
 https://github.com/openstack/horizon/blob/master/openstack_dashboard/
 dashboards/project/instances/workflows/create_instance.py#L508
 How should I do that?
 I tried success_url = None, but it doesn't work.
 
 If you have any idea, that would be great.
 
 Regards,
 Toshiyuki
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Calling empty_chain in L3 agent

2013-08-28 Thread Baldwin, Carl (HPCS Neutron)
Salvatore,

Two problems have been found that were caused by calling empty_chain and
then failing to restore some rule in the chain that was just emptied.  The
first problem found was mine in my fix to this bug.

  https://bugs.launchpad.net/neutron/+bug/1209011

I filed a bug on the second problem today.  We discovered it in our
development environment yesterday.

  https://bugs.launchpad.net/neutron/+bug/1218040

Mine broke the gate and got reverted about two days after landing.  Now,
no one seems to want to touch it with a ten foot pole.  It is tainted.  :)
 The second one apparently has gone unnoticed for a while now.  I would
like to propose a strategy for addressing these problems for now and for
the future.

First, I propose that we accept the proposed fixes to these two bugs in
time for H-3.  My patch for the first bug has been up for a while.  It is
a good fix and I have fixed the problem that caused the gate to break and
it has seen more runtime in our test environments.  I'd really like to see
it land.

Second, I would like to discuss a more permanent solution with you since
you and I are the authors of the code implicated in the two problems.  I
think some refactoring and better testing are in order here.  The real
problem is that in order to empty a chain, there has to be some way to
know that we are reconstructing the chain with everything that needs to be
in there.  Maybe we could get this in for Havana final, maybe post-Havana.
 What are your thoughts?
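
To make the failure mode concrete, a simplified sketch (the manager calls are 
modeled loosely on the agent's iptables manager; treat the names as 
illustrative):

    # The fragile pattern: wipe the chain, then re-add what we remember.
    manager.ipv4['nat'].empty_chain('snat')
    for rule in rules_we_happened_to_recompute:
        manager.ipv4['nat'].add_rule('snat', rule)
    # Any rule the recompute step forgets is silently gone until the
    # next full sync.

    # A safer shape: derive the complete desired rule set in one place,
    # then replace the chain contents from that single source of truth.
    desired = build_all_snat_rules(router)          # hypothetical helper
    manager.ipv4['nat'].set_rules('snat', desired)  # hypothetical API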

Regards,
Carl Baldwin

PS  Below are the two reviews that I have to address these two scenarios.

https://review.openstack.org/#/c/42412/

https://review.openstack.org/#/c/44133/






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Alex Glikson
 Why can't something like this be done with just different filters, such as 
AggregateRamFilter?

Well, first, at the moment each of these filters duplicates the code 
that handles aggregate-based overrides. So, it would make sense to have it 
in one place anyway. Second, why duplicate all the filters if this can 
be done with a single flag? 

Regards,
Alex




From:   Joe Gordon joe.gord...@gmail.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   28/08/2013 09:32 PM
Subject:Re: [openstack-dev] [Nova] multiple-scheduler-drivers 
blueprint






On Wed, Aug 28, 2013 at 9:12 AM, Alex Glikson glik...@il.ibm.com wrote:
It seems that the main concern was that the overridden scheduler 
properties are taken from the flavor, and not from the aggregate. In fact, 
there was a consensus that this is not optimal. 

I think that we can still make some progress in Havana towards 
per-aggregate overrides, generalizing on the recently merged changes that 
do just that -- for cpu and for memory with FilterScheduler (and 
leveraging a bit from the original multi-sched patch). As follows: 
1. individual filters will call get_config('abc') instead of CONF.abc 
(already implemented in the current version of the multi-sched patch, 
e.g., 
https://review.openstack.org/#/c/37407/30/nova/scheduler/filters/io_ops_filter.py
) 
2. get_config() will check whether abc is defined in the aggregate, and if 
so will return the value from the aggregate, and CONF.abc otherwise 
(already implemented in recently merged AggregateCoreFilter and 
AggregateRamFilter -- e.g., 
https://review.openstack.org/#/c/33949/2/nova/scheduler/filters/core_filter.py
). 
3. add a global flag that would enable or disable aggregate-based 
overrides 

Why can't something like this be done with just different filters, such as 
AggregateRamFilter?
 

This seems to be a relatively simple refactoring of existing code, still 
achieving an important portion of the original goals of this blueprint. 
Of course, we should still discuss the longer-term plan around scheduling 
policies at the summit. 

Thoughts? 

Regards, 
Alex 




From:Russell Bryant rbry...@redhat.com 
To:OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:27/08/2013 10:48 PM 
Subject:[openstack-dev] [Nova] multiple-scheduler-drivers 
blueprint 




Greetings,

One of the important things to strive for in our community is consensus.
When there's not consensus, we should take a step back and see if we
need to change directions.

There has been a lot of iterating on this feature, and I'm afraid we
still don't have consensus around the design.  Phil Day has been posting
some really good feedback on the review.  I asked Joe Gordon to take a
look and provide another opinion.  He agreed with Phil that we really
need to have scheduler policies be a first class API citizen.

So, that pushes this feature out to Icehouse, as it doesn't seem
possible to get this done in the required timeframe for Havana.

If you'd really like to push to get this into Havana, please make your
case.  :-)

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Joe Gordon
On Wed, Aug 28, 2013 at 3:55 PM, Alex Glikson glik...@il.ibm.com wrote:

  Why can't something like this be done with just different filters, such as
 AggregateRamFilter?

 Well, first, at the moment each of these filters duplicates the code
 that handles aggregate-based overrides. So, it would make sense to have it
 in one place anyway. Second, why duplicate all the filters if this can be
 done with a single flag?


* We already have too many flags, and I don't want to introduce one that we
plan on removing / deprecating in the near future if we can help it.

* https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py
doesn't duplicate all the code, it uses a base class.  The "check the aggregate
for the value" logic is duplicated, but that is easy to fix.




 Regards,
 Alex




 From:Joe Gordon joe.gord...@gmail.com
 To:OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org,
 Date:28/08/2013 09:32 PM
 Subject:Re: [openstack-dev] [Nova] multiple-scheduler-drivers
 blueprint
 --






 On Wed, Aug 28, 2013 at 9:12 AM, Alex Glikson glik...@il.ibm.com wrote:
 It seems that the main concern was that the overridden scheduler
 properties are taken from the flavor, and not from the aggregate. In fact,
 there was a consensus that this is not optimal.

 I think that we can still make some progress in Havana towards
 per-aggregate overrides, generalizing on the recently merged changes that
 do just that -- for cpu and for memory with FilterScheduler (and leveraging
 a bit from the original multi-sched patch). As follows:
 1. individual filters will call get_config('abc') instead of CONF.abc
 (already implemented in the current version of the multi-sched patch, e.g.,
 https://review.openstack.org/#/c/37407/30/nova/scheduler/filters/core_filter.py
 and
 https://review.openstack.org/#/c/37407/30/nova/scheduler/filters/io_ops_filter.py
 )
 2. get_config() will check whether abc is defined in the aggregate, and if
 so will return the value from the aggregate, and CONF.abc otherwise
 (already implemented in recently merged AggregateCoreFilter and
 AggregateRamFilter -- e.g.,
 https://review.openstack.org/#/c/33949/2/nova/scheduler/filters/core_filter.py
 ).
 3. add a global flag that would enable or disable aggregate-based overrides

 Why can't something like this be done with just different filters, such as
 AggregateRamFilter?


 This seems to be a relatively simple refactoring of existing code, still
 achieving an important portion of the original goals of this blueprint.
 Of course, we should still discuss the longer-term plan around scheduling
 policies at the summit.

 Thoughts?

 Regards,
 Alex




 From:Russell Bryant rbry...@redhat.com
 To:OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Date:27/08/2013 10:48 PM
 Subject:[openstack-dev] [Nova] multiple-scheduler-drivers
 blueprint
  --




 Greetings,

 One of the important things to strive for in our community is consensus.
 When there's not consensus, we should take a step back and see if we
 need to change directions.

 There has been a lot of iterating on this feature, and I'm afraid we
 still don't have consensus around the design.  Phil Day has been posting
 some really good feedback on the review.  I asked Joe Gordon to take a
 look and provide another opinion.  He agreed with Phil that we really
 need to have scheduler policies be a first class API citizen.

 So, that pushes this feature out to Icehouse, as it doesn't seem
 possible to get this done in the required timeframe for Havana.

 If you'd really like to push to get this into Havana, please make your
 case.  :-)

 Thanks,

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-08-28 Thread Baldwin, Carl (HPCS Neutron)
All,

We've known for a while now that some duplication of work happened with
respect to adding multiple worker processes to the neutron-server.  There
were a few mistakes made which led to three patches being done
independently of each other.

Can we settle on one and accept it?

I have changed my patch at the suggestion of one of the other 2 authors,
Peter Feiner, in an attempt to find common ground.  It now uses openstack
common code and therefore it is more concise than any of the original
three and should be pretty easy to review.  I'll admit to some bias toward
my own implementation, but most importantly, I would like for one of these
implementations to land and start seeing broad usage in the community
sooner rather than later.
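
For the curious, the shape of the common-code approach is roughly this (a
sketch assuming the oslo-incubator service module of the time, not the exact
patch):

    from neutron.openstack.common import service

    def serve_wsgi(api_service, workers=0):
        if workers < 1:
            # Single-process behavior, as today.
            api_service.start()
            api_service.wait()
        else:
            # Fork 'workers' children, each running the service; the
            # parent supervises and respawns children that die.
            launcher = service.ProcessLauncher()
            launcher.launch_service(api_service, workers=workers)
            launcher.wait()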

Carl Baldwin

PS Here are the two remaining patches.  The third has been abandoned.

https://review.openstack.org/#/c/37131/
https://review.openstack.org/#/c/36487/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Alex Glikson
Joe Gordon joe.gord...@gmail.com wrote on 28/08/2013 11:04:45 PM:
 Well, first, at the moment each of these filters duplicates the
 code that handles aggregate-based overrides. So, it would make sense
 to have it in one place anyway. Second, why duplicate all the
 filters if this can be done with a single flag?
 
 We already have too many flags, and i don't want to introduce one 
 that we plan on removing / deprecating in the near future if we can help 
it.

Wouldn't it make sense to have a flag that enables/disables 
aggregate-based policy overrides anyway?

 https://github.com/openstack/nova/blob/master/nova/scheduler/
 filters/ram_filter.py doesn't duplicate all the code, it uses a base
 class.  The check the aggregate for the value logic is duplicated, 
 but that is easy to fix.

Yep, that's exactly what I'm saying -- the first step would be to put that 
logic in one place (e.g., scheduler/utils.py, like the get_config method 
we originally thought of introducing), and then we can easily reuse it in 
all the other filters (regardless of whether we do it within the existing 
filters or add an AggregateXYZ filter for each existing filter XYZ). The 
same potentially goes for weight functions, etc.
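
Something along these lines, perhaps (a rough sketch of the shared helper; 
the aggregate lookup is simplified and the names are placeholders, not 
merged code):

    # e.g. nova/scheduler/utils.py (hypothetical location)
    from oslo.config import cfg

    CONF = cfg.CONF

    def get_config(host_state, key):
        """Return a per-aggregate override for 'key' if any aggregate
        the host belongs to defines it, else fall back to CONF."""
        for metadata in aggregate_metadata_for_host(host_state):  # assumed
            if key in metadata:
                return metadata[key]
        return getattr(CONF, key)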

Alex

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-28 Thread Joe Gordon
On Wed, Aug 28, 2013 at 4:34 PM, Alex Glikson glik...@il.ibm.com wrote:

 Joe Gordon joe.gord...@gmail.com wrote on 28/08/2013 11:04:45 PM:

  Well, first, at the moment each of these filters today duplicate the
  code that handles aggregate-based overrides. So, it would make sense
  to have it in one place anyway. Second, why duplicating all the
  filters if this can be done with a single flag?

 
  We already have too many flags, and i don't want to introduce one
  that we plan on removing / deprecating in the near future if we can help
 it.


 Wouldn't it make sense to have a flag that enables/disables
 aggregate-based policy overrides anyway?


Why?

FWIW that is my default answer to "don't we need a flag to do x".



  https://github.com/openstack/nova/blob/master/nova/scheduler/

  filters/ram_filter.py doesn't duplicate all the code, it uses a base
  class.  The check the aggregate for the value logic is duplicated,
  but that is easy to fix.

 Yep, that's exactly what I'm saying -- the first step would be to put that
 logic in one place (e.g., scheduler/utils.py, like the get_config method we
 have been thinking to introduce originally), and then we can easily reuse
 it in all the other filters (regardless of the decision whether to do it
 within the existing filters, or to add an AggregateXYZ filter for each
 existing filter XYZ. Same potentially for weight functions, etc).


Sounds like we are in agreement here



 Alex


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-28 Thread McCann, Jack
 That said, If there is potential value in offering both, it seems like it 
 should
 be under the control of the deployer not the user. In other words the deployer
 should be able to set the default network type and enforce whether setting the
 type is exposed to the user at all.

+1

From my perspective, multi-host is an option that should be in the hands
of the deployer/operator, not exposed to the end user.  My users should
not have to know or care how DHCP, routing and floating IPs are implemented
under the covers.

Also, last I looked, I think this patch required admin role to create a 
multi-host
network.  That may be OK for a single flat network or small-scale multi-network
environment, but it is not viable in a large-scale multi-tenant environment.

- Jack

 -Original Message-
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
 Sent: Wednesday, August 28, 2013 1:29 PM
 To: OpenStack Development Mailing List
 Cc: Robert Kukura; Armando Migliaccio; Nachi Ueno; Sumit Naiksatam
 Subject: Re: [openstack-dev] About multihost patch review
 
 
 On Aug 26, 2013, at 6:14 PM, Maru Newby ma...@redhat.com wrote:
 
 
  On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:
 
  Hi Developers,
 
  Let me explain my point of view on this topic and please share your 
  thoughts
 in order to merge this new feature ASAP.
 
  My understanding is that multi-host is nova-network HA  and we are
 implementing this bp https://blueprints.launchpad.net/neutron/+spec/quantum-
 multihost for the same reason.
  So, If in neutron configuration admin enables multi-host:
  etc/dhcp_agent.ini
 
  # Support multi host networks
  # enable_multihost = False
 
  Why do tenants needs to be aware of this? They should just create networks 
  in
 the way they normally do and not by adding the multihost extension.
 
  I was pretty confused until I looked at the nova-network HA doc [1].  The
 proposed design would seem to emulate nova-network's multi-host HA option, 
 where
 it was necessary to both run nova-network on every compute node and create a
 network explicitly as multi-host.  I'm not sure why nova-network was 
 implemented
 in this way, since it would appear that multi-host is basically 
 all-or-nothing.
 Once nova-network services are running on every compute node, what does it 
 mean
 to create a network that is not multi-host?
 
 Just to add a little background to the nova-network multi-host: The fact that 
 the
 multi_host flag is stored per-network as opposed to a configuration was an
 implementation detail. While in theory this would support configurations where
 some networks are multi_host and other ones are not, I am not aware of any
 deployments where both are used together.
 
 That said, If there is potential value in offering both, it seems like it 
 should
 be under the control of the deployer not the user. In other words the deployer
 should be able to set the default network type and enforce whether setting the
 type is exposed to the user at all.
 
 Also, one final point. In my mind, multi-host is strictly better than single
 host, if I were to redesign nova-network today, I would get rid of the single
 host mode completely.
 
 Vish
 
 
  So, to Edgar's question - is there a reason other than 'be like 
  nova-network'
 for requiring neutron multi-host to be configured per-network?
 
 
  m.
 
  1: 
  http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-
 networking-options.html
 
 
  I could be totally wrong and crazy, so please provide some feedback.
 
  Thanks,
 
  Edgar
 
 
  From: Yongsheng Gong gong...@unitedstack.com
  Date: Monday, August 26, 2013 2:58 PM
  To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro 
 MOTOKI
 amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru Newby
 ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore Orlando
 sorla...@nicira.com, Sumit Naiksatam sumit.naiksa...@bigswitch.com, Mark
 McClain mark.mccl...@dreamhost.com, Gary Kotton gkot...@vmware.com, Robert
 Kukura rkuk...@redhat.com
  Cc: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: About multihost patch review
 
  Hi,
  Edgar Magana has commented to say:
  'This is the part that for me is confusing and I will need some 
  clarification
 from the community. Do we expect to have the multi-host feature as an 
 extension
 or something that will natural work as long as the deployment include more 
 than
 one Network Node. In my opinion, Neutron deployments with more than one 
 Network
 Node by default should call DHCP agents in all those nodes without the need to
 use an extension. If the community has decided to do this by extensions, then 
 I
 am fine' at
 
 https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostnetwork.py
 
  I have commented back, what is your opinion about it?
 
  Regards,
  Yong Sheng Gong
 
 
  On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery 

Re: [openstack-dev] [savanna] migration to pbr completed

2013-08-28 Thread Matthew Farrellee

On 08/27/2013 04:46 PM, Sergey Lukjanov wrote:

Hi folks,

migration of all Savanna sub projects to pbr has been completed.

Please, inform us and/or create bugs for all packaging-related issues.

Thanks.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Thanks for pushing this forward Sergey.

I can confirm that pbr 0.5.19 works fine for building savanna, 
python-savannaclient and savanna-dashboard, with one minor hiccup in 
savanna that data_files glob'ing doesn't work. It's easily worked around 
in my spec though (sed -i 
's,etc/savanna/\*,etc/savanna/savanna.conf.sample 
etc/savanna/savanna.conf.sample-full,' setup.cfg), and isn't an issue 
with 0.5.21.


Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-28 Thread Murali Balcha
Hello Stackers,
We would like to introduce a new project Raksha, a Data Protection As a Service 
(DPaaS) for OpenStack Cloud.
Raksha's primary goal is to provide a comprehensive Data Protection for 
OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has following 
key features:

1.   Provide an enterprise grade data protection for OpenStack based clouds

2.   Tenant administered backups and restores

3.   Application consistent backups

4.   Point In Time(PiT) full and incremental backups and restores

5.   Dedupe at source for efficient backups

6.   A job scheduler for periodic backups

7.   Noninvasive backup solution that does not require service interruption 
during backup window

You will find the rationale behind the need for Raksha in OpenStack in its 
Wiki. The wiki also has the preliminary design and the API description.  Some 
of the Raksha functionality may overlap with the Nova and Cinder projects, and as a 
community let's work together to coordinate the features among these projects. 
We would like to seek out early feedback so we can address as many issues as we 
can in the first code drop. We are hoping to enlist the OpenStack community 
help in making Raksha a part of OpenStack.
Raksha's project resources:
Wiki: https://wiki.openstack.org/wiki/Raksha
Launchpad: https://launchpad.net/raksha
Github: https://github.com/DPaaS-Raksha/Raksha (We will upload a prototype code 
in few days)
If you want to talk to us, send an email to openstack-...@lists.launchpad.net 
with [raksha] in the subject or use #openstack-raksha irc channel.

Best Regards,
Murali Balcha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrows meeting at 2000 UTC

2013-08-28 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is tomorrow, 
2013-08-29!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss ongoing status of the overall effort and any needed coordination.
- Talk about open reviews (please look over 
https://review.openstack.org/#/q/topic:enp,n,z)
- Discuss any integration problems or suggestions.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, problems, issues, solutions, questions (and 
more).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-28 Thread ZhiQiang Fan
+1


On Thu, Aug 29, 2013 at 6:12 AM, Murali Balcha murali.bal...@triliodata.com
 wrote:

   Hello Stackers,

 We would like to introduce a new project Raksha, a Data Protection As a
 Service (DPaaS) for OpenStack Cloud.

 Raksha’s primary goal is to provide a comprehensive Data Protection for
 OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has
 following key features:

  1.   Provide an enterprise grade data protection for OpenStack based
 clouds

  2.   Tenant administered backups and restores

  3.   Application consistent backups

  4.   Point In Time(PiT) full and incremental backups and restores

  5.   Dedupe at source for efficient backups

  6.   A job scheduler for periodic backups

  7.   Noninvasive backup solution that does not require service
 interruption during backup window



 You will find the rationale behind the need for Raksha in OpenStack in its
 Wiki. The wiki also has the preliminary design and the API description.  Some
 of the Raksha functionality may overlap with Nova and Cinder projects and
 as a community let's work together to coordinate the features among these
 projects. We would like to seek out early feedback so we can address as
 many issues as we can in the first code drop. We are hoping to enlist the
 OpenStack community help in making Raksha a part of OpenStack.

 Raksha’s project resources:

 Wiki: https://wiki.openstack.org/wiki/Raksha

 Launchpad: https://launchpad.net/raksha

 Github: https://github.com/DPaaS-Raksha/Raksha (We will upload a
 prototype code in few days)

 If you want to talk to us, send an email to
 openstack-...@lists.launchpad.net with [raksha] in the subject or use
 #openstack-raksha irc channel.



 Best Regards,

 Murali Balcha


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
blog: zqfan.github.com
git: github.com/zqfan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-28 Thread Jay Pipes

Hi Murali, welcome to the OpenStack community. Some comments inline...

On 08/28/2013 06:12 PM, Murali Balcha wrote:

Hello Stackers,

We would like to introduce a new project Raksha, a Data Protection As a
Service (DPaaS) for OpenStack Cloud.

Raksha’s primary goal is to provide a comprehensive Data Protection for
OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has
following key features:

1. Provide an enterprise grade data protection for OpenStack based clouds


What is enterprise grade? Any time I hear that term, I think of 
Deloitte and Touche salespeople trying to convince some sucker CIO that 
expensive == good. I'd prefer to just leave the whole enterprise 
thing for the marketing folks and stick to the specific engineering 
features ;)



2. Tenant administered backups and restores

3. Application consistent backups


Can you expand on this a bit? Data is backed up, not applications... 
that's what source control is for :)



4. Point In Time (PiT) full and incremental backups and restores


Cool, very useful.


5. Dedupe at source for efficient backups


Hmmm... this would depend heavily on what is being backed up and the 
level of access that Raksha would have to the tenant's application 
domains. Unless you are going to limit yourself to just backing up and 
restoring instances or volumes? Is that the plan?



6. A job scheduler for periodic backups


Cron?


7. Noninvasive backup solution that does not require service interruption
during backup window


By service, are you referring to the tenant's applications running on 
the instance? Or are you referring to something else?


Also, one thing that is really good to expose/debate/discuss early on in 
the project's incubation is the RESTful API that the project would 
expose. I'd be really interested to see this.


Finally, would be good to include in the wiki page some discussion about 
any interaction with Trove (DBaaS), especially since Trove's API already 
implements a backups/ resource [1].


Best,
-jay

[1] 
https://github.com/openstack/database-api/blob/master/openstack-database-api/src/markdown/database-api-v1.md#backups



You will find the rationale behind the need for Raksha in OpenStack in
its Wiki. The wiki also has the preliminary design and the API
description. Some of the Raksha functionality may overlap with Nova and
Cinder projects, and as a community let's work together to coordinate the
features among these projects. We would like to seek out early feedback
so we can address as many issues as we can in the first code drop. We
are hoping to enlist the OpenStack community help in making Raksha a
part of OpenStack.

Raksha’s project resources:

Wiki: https://wiki.openstack.org/wiki/Raksha

Launchpad: https://launchpad.net/raksha

Github: https://github.com/DPaaS-Raksha/Raksha (We will upload a
prototype code in few days)

If you want to talk to us, send an email to
openstack-...@lists.launchpad.net with [raksha] in the subject or use
#openstack-raksha irc channel.

Best Regards,

Murali Balcha



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Why does nova.network.neutronv2.get_client(context, admin=True) drop auth_token?

2013-08-28 Thread Roman Verchikov
Hi stackers!

Sorry for the stupid question, but why does nova.network.neutronv2.get_client() 
[1] drop auth_token for admin? Is it really necessary to make another check for 
username/password when trying to get a list of ports or floating IPs?.. 

When keystone is configured with an LDAP backend, this leads to a bunch of LDAP 
requests, which tend to be quite slow. Plus, those LDAP requests could have been 
simply skipped when keystone is configured with token caching enabled.
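
To make the tradeoff concrete, here is a small illustrative sketch in Python.
The class, option, and key names are hypothetical stand-ins, not the actual
nova.network.neutronv2 code:

class FakeNeutronClient(object):
    """Stand-in for a real HTTP client; it just records its credentials."""
    def __init__(self, **kwargs):
        self.auth = kwargs

def client_from_context(context, endpoint_url):
    # Reuses the caller's token: no extra keystone round trip, but the
    # calls run with the caller's (possibly non-admin) privileges.
    return FakeNeutronClient(token=context['auth_token'],
                             endpoint_url=endpoint_url)

def client_as_admin(conf):
    # Re-authenticates with configured admin credentials: guarantees
    # admin privileges, but every fresh authentication can hit keystone
    # and, behind it, LDAP, which is the slowness described above.
    return FakeNeutronClient(username=conf['admin_username'],
                             password=conf['admin_password'],
                             auth_url=conf['auth_url'])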

Thanks,
Roman 

[1] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/__init__.py#L68
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-28 Thread Murali Balcha


 Hi Murali, welcome to the OpenStack community. Some comments inline...
Hi Jay,
Thanks for your comments. I have been involved with OpenStack since the Diablo 
time frame, but mostly monitoring the traffic. It is always great to be part of 
the community. 

My comments are inline.
 
 On 08/28/2013 06:12 PM, Murali Balcha wrote:
 Hello Stackers,
 
 We would like to introduce a new project Raksha, a Data Protection As a
 Service (DPaaS) for OpenStack Cloud.
 
 Raksha’s primary goal is to provide a comprehensive Data Protection for
 OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has
 following key features:
 
 1. Provide an enterprise grade data protection for OpenStack based clouds
 
 What is enterprise grade? Any time I hear that term, I think of Deloitte 
 and Touche salespeople trying to convince some sucker CIO that expensive == 
 good. I'd prefer to just leave the whole enterprise thing for the 
 marketing folks and stick to the specific engineering features ;)
 
I hear you. Will take out the market jargon and stick to technical 
specifications.
 
 2. Tenant administered backups and restores
 
 3. Application consistent backups
 
 Can you expand on this a bit? Data is backed up, not applications... that's 
 what source control is for :)

I meant that a VM and all its associated resources, such as data volumes, are 
backed up consistently. And if the application is a multi-tier application 
spread across multiple VMs, then all the VMs should be backed up together 
consistently too. 
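
Roughly, the coordination we have in mind looks like the following toy sketch 
(abstract helper callables, not Raksha code):

def consistent_group_snapshot(vms, freeze, snapshot, thaw):
    # Freeze every VM in the group before snapshotting any of them, so
    # the snapshots line up at a single point in time; always thaw,
    # even if a snapshot fails partway through.
    frozen = []
    try:
        for vm in vms:
            freeze(vm)
            frozen.append(vm)
        return [snapshot(vm) for vm in vms]
    finally:
        for vm in reversed(frozen):
            thaw(vm)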
 
 4. Point In Time (PiT) full and incremental backups and restores
 
 Cool, very useful.
 
 5. Dedupe at source for efficient backups
 
 Hmmm... this would depend heavily on what is being backed up and the level of 
 access that Raksha would have to the tenant's application domains. Unless you 
 are going to limit yourself to just backing up and restoring instances or 
 volumes? Is that the plan?

Yes, that is the plan. We don't have plans to look inside VMs and back up any 
individual resources within a VM.
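
For illustration, the usual shape of chunk-level dedupe at the source is 
something like this toy sketch (not Raksha code; the chunk size is arbitrary):

import hashlib

def dedupe_chunks(stream, seen_hashes, chunk_size=4 * 1024 * 1024):
    # Split the backup stream into fixed-size chunks and only yield
    # chunks whose content hash has not been stored before.
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            yield digest, chunk

Only the yielded chunks need to travel to the backup target; data that is 
unchanged since the previous run is skipped entirely.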
 
 6. A job scheduler for periodic backups
 
 Cron?
Yes, similar.
 
 7. Noninvasive backup solution that does not require service interruption
 during backup window
 
 By service, are you referring to the tenant's applications running on the 
 instance? Or are you referring to something else?

A service similar to either nova or cinder. We don't want to run any agents 
inside tenant VMs.
 
 Also, one thing that is really good to expose/debate/discuss early on in the 
 project's incubation is the RESTful API that the project would expose. I'd be 
 really interested to see this.

Absolutely. We took a first stab at the list of RESTful APIs that Raksha needs 
to support in our wiki at http://wiki.openstack.org/wiki/raksha.
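
To give a flavor of the intended style, below is a purely hypothetical example 
of a tenant-scoped backup job request. The endpoint, resource name, and payload 
are invented for this discussion and are not taken from the wiki:

import json
import requests

def create_backup_job(endpoint, tenant_id, token, instance_id):
    # Hypothetical resource and payload, for illustration only.
    url = '%s/v1/%s/backupjobs' % (endpoint, tenant_id)
    body = {'backupjob': {'instance_id': instance_id,
                          'name': 'nightly',
                          'schedule': '0 2 * * *'}}
    resp = requests.post(url,
                         headers={'X-Auth-Token': token,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()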
 
 Finally, would be good to include in the wiki page some discussion about any 
 interaction with Trove (DBaaS), especially since Trove's API already 
 implements a backups/ resource [1].
 
That is a good point. We will include that in the wiki.

 Best,
 -jay
 
 [1] 
 https://github.com/openstack/database-api/blob/master/openstack-database-api/src/markdown/database-api-v1.md#backups
 
 You will find the rationale behind the need for Raksha in OpenStack in
 its Wiki. The wiki also has the preliminary design and the API
 description. Some of the Raksha functionality may overlap with Nova and
 Cinder projects, and as a community let's work together to coordinate the
 features among these projects. We would like to seek out early feedback
 so we can address as many issues as we can in the first code drop. We
 are hoping to enlist the OpenStack community help in making Raksha a
 part of OpenStack.
 
 Raksha’s project resources:
 
 Wiki: https://wiki.openstack.org/wiki/Raksha
 
 Launchpad: https://launchpad.net/raksha
 
 Github: https://github.com/DPaaS-Raksha/Raksha (We will upload a
 prototype code in few days)
 
 If you want to talk to us, send an email to
 openstack-...@lists.launchpad.net with [raksha] in the subject or use
 #openstack-raksha irc channel.
 
 Best Regards,
 
 Murali Balcha
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why does nova.network.neutronv2.get_client(context, admin=True) drop auth_token?

2013-08-28 Thread Yongsheng Gong
For admin operations, we must use an admin token. In general, the token from
the API context does not have the admin role.

I think this BP can help:
https://blueprints.launchpad.net/keystone/+spec/reuse-token


On Thu, Aug 29, 2013 at 8:12 AM, Roman Verchikov rverchi...@mirantis.com wrote:

 Hi stackers!

 Sorry for the stupid question, but why does
 nova.network.neutronv2.get_client() [1] drop auth_token for admin? Is it
 really necessary to make another check for username/password when trying to
 get a list of ports or floating IPs?..

 When keystone is configured with an LDAP backend, this leads to a bunch of LDAP
 requests, which tend to be quite slow. Plus, those LDAP requests could have
 been simply skipped when keystone is configured with token caching enabled.

 Thanks,
 Roman

 [1]
 https://github.com/openstack/nova/blob/master/nova/network/neutronv2/__init__.py#L68
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MultiStrOpt opts can be problematic...

2013-08-28 Thread Matt Riedemann
Ha!  My team totally ran into the same issue.  I was hoping that Padraig's 
crudini would make my dreams come true, but I don't think it handles 
multi-string options.

https://github.com/pixelb/crudini 

So +1 to getting rid of those.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Dan Prince dpri...@redhat.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   08/28/2013 12:54 PM
Subject:    [openstack-dev] MultiStrOpt opts can be problematic...



So I recently ran into a fun config issue in trying to configure Nova to 
work w/ Ceilometer using Puppet:

  https://bugs.launchpad.net/puppet-ceilometer/+bug/1217867

Today, what you need to do to make Nova work with ceilometer is add this 
to your nova.conf file:

 notification_driver=nova.openstack.common.notifier.rpc_notifier
 notification_driver=ceilometer.compute.nova_notifier

As it turns out, multi-line config entries aren't very fun to deal with in 
the config management world. The Puppet nova_config provider doesn't (yet) 
have a good way to support them. The core of the issue is that they pose all 
sorts of problems in knowing whether a given tool should modify the 
existing config values.
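
For anyone who hasn't looked at the option itself, this is roughly how it is 
declared (a minimal sketch assuming the current oslo.config API; the default 
and help text here are made up):

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.MultiStrOpt('notification_driver',
                    default=[],
                    help='Driver(s) that handle sending notifications'),
])

# With the two nova.conf lines above, CONF.notification_driver
# evaluates to a list containing both values. It is exactly this
# repeated-key form that generic ini tooling cannot model.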


In the short term we can look into doing one of these in Puppet land:

 - Using a conf.d directory for config (would require a change to the 
nova-compute init script to use --config-dir)
 - Stringing together various resources in Puppet to make it work 
(file_line, augeas, etc.)

Long term, though, I'm thinking: what if MultiStrOpts were to go away? They 
seem to be more trouble than they are worth...

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack]ASK: Summit talk on OpenStack on OpenStack/ OpenStack as a Service

2013-08-28 Thread Sriram Subramanian
Dear Stackers,

I came across a proposed talk related to OpenStack on OpenStack / OpenStack
as a Service. I couldn't recall whether there were two different talks or the
same one. I am interested in learning more - could the speakers ping me offline
please?

-- 
Thanks,
-Sriram
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance revert to fix image deletes on F19

2013-08-28 Thread Matt Riedemann
Dan,

I saw you abandoned the patch because:

Using Sqlalchemy 0.8.2 or downgrading to 0.7.10 seems to fix this issue.

Given the glance requirement on SQLAlchemy:

https://github.com/openstack/glance/blob/master/requirements.txt#L9 

Why were you at those levels to begin with?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Dan Prince dpri...@redhat.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   08/26/2013 07:21 AM
Subject:    [openstack-dev] Glance revert to fix image deletes on F19



Hi all,

I'd like to highlight this Glance revert for an issue I started seeing on 
Fedora 19 last week.

  https://review.openstack.org/#/c/43542/

A revert like this should be a very safe thing to do so long as we do it 
quickly. Especially given this is a performance improvement...

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MultiStrOpt opts can be problematic...

2013-08-28 Thread Dean Troyer
[/me grabs popcorn and pulls up a chair to watch...]

On Wed, Aug 28, 2013 at 7:40 PM, Matt Riedemann mrie...@us.ibm.com wrote:

 Ha!  My team totally ran into the same issue.  I was hoping that Padraig's
 crudini would make my dreams come true, but I don't think it handles
 multi-string options.

 https://github.com/pixelb/crudini

 So +1 to getting rid of those.


Ditto

When we added support for these to DevStack it was for exactly those
options Dan mentioned, and that is still the only place they are used...

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why does nova.network.neutronv2.get_client(context, admin=True) drop auth_token?

2013-08-28 Thread Dolph Mathews
On Wed, Aug 28, 2013 at 7:22 PM, Yongsheng Gong gong...@unitedstack.com wrote:

 For admin operations, we must use an admin token. In general, the token from
 the API context does not have the admin role.


So... because the authenticated user making the API request *may not* have
admin access, you're dropping that authorization in favor of using
CONF.neutron_admin_username, etc, to escalate the available privileges?
Yikes.



 I think this BP can help:
 https://blueprints.launchpad.net/keystone/+spec/reuse-token


I don't see how?




 On Thu, Aug 29, 2013 at 8:12 AM, Roman Verchikov 
 rverchi...@mirantis.comwrote:

 Hi stackers!

 Sorry for the stupid question, but why does
 nova.network.neutronv2.get_client() [1] drop auth_token for admin? Is it
 really necessary to make another check for username/password when trying to
 get a list of ports or floating IPs?..

 When keystone is configured with an LDAP backend, this leads to a bunch of
 LDAP requests, which tend to be quite slow. Plus, those LDAP requests could
 have been simply skipped when keystone is configured with token caching
 enabled.

 Thanks,
 Roman

 [1]
 https://github.com/openstack/nova/blob/master/nova/network/neutronv2/__init__.py#L68
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] cluster scaling on the 0.2 branch

2013-08-28 Thread Jon Maron
Hi,

  I am trying to backport the HDP scaling implementation to the 0.2 branch and 
have run into a number of differences.  At this point I am trying to figure out 
whether what I am observing is intended or a symptom of a bug.

  For a case in which I am adding one instance to an existing node group, as 
well as an additional node group with one instance, I am seeing the following 
arguments being passed to the scale_cluster method of the plugin:

- A cluster object that contains the following set of node groups:

[savanna.db.models.NodeGroup[object at 10d8bdd90] 
{created=datetime.datetime(2013, 8, 28, 21, 50, 5, 208003), 
updated=datetime.datetime(2013, 8, 28, 21, 50, 5, 208007), 
id=u'd6fadb7a-367b-41ed-989c-af40af2d3e3d', name=u'master', flavor_id=u'3', 
image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}, 
savanna.db.models.NodeGroup[object at 10d8bd950] 
{created=datetime.datetime(2013, 8, 28, 21, 50, 5, 210962), 
updated=datetime.datetime(2013, 8, 28, 22, 5, 1, 728402), 
id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', name=u'slave', flavor_id=u'3', 
image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
count=2, cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}, 
savanna.db.models.NodeGroup[object at 10d897f90] 
{created=datetime.datetime(2013, 8, 28, 22, 4, 59, 871379), 
updated=datetime.datetime(2013, 8, 28, 22, 4, 59, 871388), 
id=u'880e1b17-f4e4-456d-8421-31bf8ef1fb65', name=u'slave2', flavor_id=u'1', 
image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
count=1, cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'd67da924-792b-4558-a5cb-cb97bba4107f'}]
 
  So it appears that the cluster is already configured with the three node 
groups (two original, one new) and the associated counts.

- The list of instances.  However, whereas the master branch was passing me two 
instances (one instance representing the addition to the existing group, one 
representing the new instance associated with the added node group), in the 0.2 
branch I am only seeing one instance being passed (the one instance being added 
to the existing node group):

(Pdb) p instances
[savanna.db.models.Instance[object at 10d8bf050] 
{created=datetime.datetime(2013, 8, 28, 22, 5, 1, 725343), 
updated=datetime.datetime(2013, 8, 28, 22, 5, 47, 286665), extra=None, 
node_group_id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', 
instance_id=u'377694a2-a589-479b-860f-f1541d249624', 
instance_name=u'scale-slave-002', internal_ip=u'192.168.32.4', 
management_ip=u'172.18.3.5', volumes=[]}]
(Pdb) p len(instances)
1

  I am not certain why I am not getting a listing of instances representing the 
instances being added to the cluster as I do in the master branch.  Is this 
intended?  How do I obtain the instance reference for the instance being added 
to the new node group?
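
In case it helps anyone reproduce this, here is a small diagnostic sketch I can 
run from the plugin. It assumes only the attributes visible in the dumps above 
(NodeGroup.id/.name/.count and Instance.node_group_id) and that the cluster 
exposes its groups as cluster.node_groups:

from collections import defaultdict

def report_scale_coverage(cluster, instances):
    # Group the instances handed to scale_cluster() by node group id
    # and compare against each node group's declared count.
    grouped = defaultdict(list)
    for inst in instances:
        grouped[inst.node_group_id].append(inst)
    for ng in cluster.node_groups:
        print('%s: count=%d, instances passed=%d'
              % (ng.name, ng.count, len(grouped.get(ng.id, []))))

On master this reports a passed instance for both the grown group and the new 
group; on 0.2 the new group shows zero, which matches the discrepancy above.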

-- Jon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why does nova.network.neutronv2.get_client(context, admin=True) drop auth_token?

2013-08-28 Thread Morgan Fainberg
On Wed, Aug 28, 2013 at 5:22 PM, Yongsheng Gong gong...@unitedstack.com wrote:

 For admin operations, we must use an admin token. In general, the token from
 the API context does not have the admin role.


If this functionality is supposed to be available to non-admin users,
wouldn't it be easier to grant them access directly (maybe via RBAC) instead
of escalating permissions?  I'll admit not knowing why this needs escalation,
but it stands out as an odd approach in my mind.


 I think this BP can help:
 https://blueprints.launchpad.net/keystone/+spec/reuse-token


This isn't likely what you are looking for.  It would still require lookups
to the backend for a number of reasons (not listed, as I don't think it is
relevant for this conversation).
--
Morgan Fainberg

IRC: morganfainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] what's in scope of Ceilometer

2013-08-28 Thread Gordon Chung
so we're in the process of selling Ceilometer to product teams so that 
they'll adopt it and we'll get more funding :).  one item that comes up 
from product teams is 'what will Ceilometer be able to do and where does 
the product take over and add value?'

the first question is: Ceilometer currently does metering/alarming/maybe a 
few other things... will it go beyond that? specifically: capacity 
planning, optimization, dashboard (i assume this falls under 
horizon/ceilometer plugin work), analytics. 
they're pretty broad items so i would think they would probably end up 
being separate projects?

another question is what metrics we will capture.  some of the product 
teams we have collect metrics on datacenter memory/cpu utilization, 
cluster cpu/memory/vm, and a bunch of other clustered stuff.
i'm a nova-idiot, but is this info possible to retrieve? is the consensus 
that Ceilometer will collect anything and everything the other projects 
allow for?

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com
phone: 905.413.5072
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Christopher Yeoh
On Wed, 28 Aug 2013 09:58:48 -0400
Joe Gordon joe.gord...@gmail.com wrote:
 
 On a related note, I really like when the developer adds a gerrit
 comment saying why the revision was needed; that makes my life as a
 reviewer easier.

+1 - I try to remember to do this, and from a reviewer's point of view it
is especially useful when there has been a rebase involved.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Resource URL support for more than two levels

2013-08-28 Thread balaji patnala
Hi,

When compared to the Nova URL implementation, it is observed that the Neutron
URL support cannot be used for more than TWO levels.

Applications which want to plug in may be restricted by this.

We want to add support for more than TWO levels of URL by making the required
changes in the core Neutron files.

Any comments/interest in this?
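
To make the shape of the request concrete, here is a small illustrative sketch 
(a hypothetical helper, not Neutron code) of the URL nesting involved:

def nested_url(*pairs):
    # pairs: (collection, resource_id) tuples; resource_id may be None
    # for a collection-level URL.
    parts = []
    for collection, resource_id in pairs:
        parts.append(collection)
        if resource_id is not None:
            parts.append(resource_id)
    return '/' + '/'.join(parts)

# Two levels, which is what Neutron supports today:
#   nested_url(('lb/pools', 'pool_1'), ('members', 'member_1'))
#   -> /lb/pools/pool_1/members/member_1
# Three levels, which is what we are proposing:
#   nested_url(('lb/pools', 'pool_1'), ('members', 'member_1'),
#              ('xyz', 'xyz_1'))
#   -> /lb/pools/pool_1/members/member_1/xyz/xyz_1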

Regards,
Balaji.P





On Tue, Aug 27, 2013 at 5:04 PM, B Veera-B37207 b37...@freescale.com wrote:

  Hi,

 The current infrastructure provided in Quantum [Grizzly], while building
 Quantum API resource URL using the base function ‘base.create_resource()’
 and RESOURCE_ATTRIBUTE_MAP/SUB_RESOURCE_ATTRIBUTE_MAP, supports only two
 level URI. 

 Example: 

 GET  /lb/pools/pool_id/members/member_id

 Some applications may need more than two levels of URL support. Example:
 GET  /lb/pools/pool_id/members/member_id/xyz/xyz_id

 If anybody is interested in this, we want to contribute for this as BP and
 make it upstream.

 Regards,

 Veera.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-28 Thread Christopher Yeoh
On Wed, 28 Aug 2013 15:56:33 +
Joshua Harlow harlo...@yahoo-inc.com wrote:

 Shrinking that rotation granularity would be reasonable too. Rotating
 once every 2 weeks or some other time period still seems useful to me.
 

I wonder if the quality of reviewing would drop if someone was doing it
all day long though. IIRC the link that Robert pointed to in another
thread seemed to indicate that the ability for someone to pick up bugs
reduces significantly if they are doing code reviews continuously.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-28 Thread Sam Alba
On Wed, Aug 28, 2013 at 9:12 AM, Sam Alba sam.a...@gmail.com wrote:
 Thanks a lot everyone for the nice feedback. I am going to work hard
 to get all those new comments addressed to be able to re-submit a new
 patchset today or tomorrow (the latter).

 On Wed, Aug 28, 2013 at 7:02 AM, Russell Bryant rbry...@redhat.com wrote:
 On 08/28/2013 05:18 AM, Daniel P. Berrange wrote:
 On Wed, Aug 28, 2013 at 06:00:50PM +1000, Michael Still wrote:
 On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
 Hi all,

 We've been working hard during the last couple of weeks with some
 people. Brian Waldon helped a lot designing the Glance integration and
 driver testing. Dean Troyer helped a lot on bringing Docker support in
 Devstack[1]. On top of that, we got several feedback on the Nova code
 review which definitely helped to improve the code.

 The blueprint[2] explains what Docker brings to Nova and how to use it.

 I have to say that this blueprint is a fantastic example of how we
 should be writing design documents. It addressed almost all of my
 questions about the integration.

 Yes, Sam (and any of the other Docker guys involved) have been great at
 responding to reviewers' requests to expand their design document. The
 latest update has really helped in understanding how this driver works
 in the context of openstack from an architectural and functional POV.

 They've been great in responding to my requests, as well.  The biggest
 thing was that I wanted to see devstack support so that it's easily
 testable, both by developers and by CI.  They delivered.

 So, in general, I'm good with this going in.  It's just a matter of
 getting the code review completed in the next week before feature
 freeze.  I'm going to try to help with it this week.


If someone wants to take another look at
https://review.openstack.org/#/c/32960/, we answered/fixed all
previous comments.


-- 
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev