Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Alessandro Pilotti


On Oct 16, 2013, at 05:48, Dan Smith d...@danplanet.com wrote:

The last thing that OpenStack needs ANY more help with is velocity. I
mean, let's be serious - we land WAY more patches in a day than is
even close to sane.

Thanks for saying this -- it doesn't get said enough. I find it totally
amazing that we're merging 34 changes in a day (yesterday) which is like
170 per work week (just on nova). More amazing is that we're talking
about how to make it faster all the time. It's definitely the fastest
moving extremely complex thing I can recall working on.


Dan, nobody is questioning how good the work you guys are doing is.
You're doing an amazing job, that's unquestionable.

The problem is that the review team, as it is structured today, simply does
not scale with the review load (which is quite funny in a project that is all
about scalability :-)).

Here's a quick analysis of the 90-day Nova review stats published by
Russell: http://russellbryant.net/openstack-stats/nova-reviewers-90.txt

Roughly 2/3 of the reviews are done by 20 people, with the top 10 getting close
to 50%.
Let's say we provide, out of our sub-team, one additional Nova core dev who
performs in the top 10, which averages 521 reviews, i.e. 4.6% of the total.
This would reduce our stale review queue time by roughly 5%, which is quite far
from a practical improvement over the current mess IMO.

The picture changes if you have this additional resource do reviews mostly on
our sub-project code, but at that point I don't see the difference from having
somebody with +2 rights on just the driver sub-tree.

This example applies to any sub-project of course, not only the Hyper-V driver.


(I hope that the table formatting will come out decently in the ML email, if 
not please find the data here: http://paste.openstack.org/show/48539/)

Reviewer       Core   Reviews   % of total   Cumulative
russellb       yes        888        7.84%
garyk                     856        7.55%
jogo           yes        475        4.19%
mikalstill     yes        450        3.97%
danms          yes        447        3.94%       23.55%  (top 5)
ndipanov       yes        432        3.81%
klmitch        yes        429        3.79%
cbehrens       yes        360        3.18%
johngarbutt    yes        351        3.10%
cyeoh-0        yes        327        2.89%       44.25%  (top 10)
markmc         yes        304        2.68%
alaski         yes        289        2.55%
mriedem                   270        2.38%
cerberus       yes        266        2.35%
dripton                   261        2.30%
berrange       yes        251        2.21%
jhesketh                  250        2.21%
philip-day                250        2.21%
xuhj                      237        2.09%
belliott       yes        212        1.87%       67.10%  (top 20)
guohliu                   201        1.77%
boris-42                  170        1.50%
sdague         yes        164        1.45%
p-draigbrady   yes        130        1.15%
vishvananda    yes        123        1.09%
tracyajones               112        0.99%
JayLau                    109        0.96%
hartsocks                 108        0.95%
arosen                    106        0.94%
dims-v                    101        0.89%       78.79%  (top 30)
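
As a quick back-of-the-envelope check of the shares quoted above, a short Python sketch over this table excerpt (the per-reviewer counts are taken from the table; the grand total is approximated from russellb's 7.84% share, so the result lands close to, though not exactly on, the 521-review / 4.6% top-10 average quoted earlier):

    # Reviews per reviewer, in table order (top 20 rows shown).
    reviews = [888, 856, 475, 450, 447, 432, 429, 360, 351, 327,
               304, 289, 270, 266, 261, 251, 250, 250, 237, 212]

    total = 888 / 0.0784          # ~11,300 reviews in the 90-day window
    top10 = sum(reviews[:10])
    top20 = sum(reviews[:20])

    print("top 10 share:   %.1f%%" % (100.0 * top10 / total))
    print("top 10 average: %.0f reviews (%.1f%% of the total each)"
          % (top10 / 10.0, 100.0 * top10 / 10.0 / total))
    print("top 20 share:   %.1f%%" % (100.0 * top20 / total))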




We MUST continue to be vigilant in getting people to care about more
than their specific part, or else this big complex mess is going to come
crashing down around us.

I totally agree.

--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-16 Thread Gary Kotton
Hi,
I agree with Phil. This has been on the agenda of the scheduling meetings
for over a month now.
Thanks
Gary

On 10/15/13 2:40 PM, Day, Phil philip@hp.com wrote:

Hi Alex,

My understanding is that the 17th is the deadline and that Russell needs
to be planning the sessions from that point onwards.  If we delay in
giving him our suggestions until the 22nd I think it would be too late.
 We've had weeks if not months now of discussing possible scheduler
sessions, I really don't see why we can't deliver a recommendation on how
best to fit into the 3 committed slots on or before the 17th.

Phil

On Mon, Oct 14, 2013 at 10:56 AM, Alex Glikson glik...@il.ibm.com wrote:
 IMO, the three themes make sense, but I would suggest waiting until
 the submission deadline and discussing at the following IRC meeting on the
22nd.
 Maybe there will be more relevant proposals to consider.

 Regards,
 Alex

 P.S. I plan to submit a proposal regarding scheduling policies, and
 maybe one more related to theme #1 below



 From: Day, Phil philip@hp.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Date: 14/10/2013 06:50 PM
 Subject: Re: [openstack-dev] Scheduler meeting and Icehouse Summit
 



 Hi Folks,

 In the weekly scheduler meeting we've been trying to pull together a
 consolidated list of Summit sessions so that we can find logical
 groupings and make a more structured set of sessions for the limited
 time available at the summit.

 https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions

 With the deadline for sessions being this Thursday 17th, tomorrow's IRC
 meeting is the last chance to decide which sessions we want to combine /
 prioritize. Russell has indicated that a starting assumption of three
 scheduler sessions is reasonable, with any extras depending on what
 else is submitted.

 I've matched the list on the Etherpad to submitted sessions below,
 and added links to any other proposed sessions that look like they are
 related.


 1) Instance Group Model and API
    Session Proposal: http://summit.openstack.org/cfp/details/190

 2) Smart Resource Placement
    Session Proposal: http://summit.openstack.org/cfp/details/33
    Possibly related sessions: Resource optimization service for nova
    (http://summit.openstack.org/cfp/details/201)

 3) Heat and Scheduling and Software, Oh My!
    Session Proposal: http://summit.openstack.org/cfp/details/113

 4) Generic Scheduler Metrics and Ceilometer
    Session Proposal: http://summit.openstack.org/cfp/details/218
    Possibly related sessions: Making Ceilometer and Nova play nice
    (http://summit.openstack.org/cfp/details/73)

 5) Image Properties and Host Capabilities
    Session Proposal: NONE

 6) Scheduler Performance
    Session Proposal: NONE
    Possibly related sessions: Rethinking Scheduler Design
    (http://summit.openstack.org/cfp/details/34)

 7) Scheduling Across Services
    Session Proposal: NONE

 8) Private Clouds
    Session Proposal: http://summit.openstack.org/cfp/details/228

 9) Multiple Scheduler Policies
    Session Proposal: NONE


 The proposal from last week's meeting was to use the three slots for:
 - Instance Group Model and API   (1)
 - Smart Resource Placement (2)
 - Performance (6)

 However, at the moment there doesn't seem to be a session proposed to
 cover the performance work ?

 It also seems to me that the Group Model and Smart Placement are
 pretty closely linked along with (3) (which says it wants to combine 1
 & 2 into the same topic), so if we only have three slots available then
 these look like logical candidates for consolidating into a single
 session. That would free up a session to cover the generic metrics (4)
 and Ceilometer - where a lot of work in Havana stalled because we
 couldn't get a consensus on the way forward. The third slot would be
 kept for performance - which, based on the lively debate in the
 scheduler meetings, I'm assuming will still be submitted as a session.
 Private Clouds isn't really a scheduler topic, so I suggest it takes
 its chances as a general session. Hence my revised proposal for the
 three slots is:

  i) Group Scheduling / Smart Placement / Heat and Scheduling  (1),
 (2), (3) & (7)
 - How do you schedule something more complex than a
 single VM?

 ii) Generalized scheduling metrics / Ceilometer integration (4)
 - How do we extend the set of resources a scheduler
 can use to make its decisions?
 - How do we make this work with / be compatible with
 Ceilometer?

 iii) Scheduler Performance (6)

 In that way we will at least give airtime to all of the topics. If
a 4th
 scheduler slot becomes available 

Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Alessandro Pilotti


On Oct 16, 2013, at 08:45 , Vishvananda Ishaya vishvana...@gmail.com
 wrote:

 Hi Sean,
 
 I'm going to top post because my response is general. I totally agree that we 
 need people that understand the code base and we should encourage new people 
 to be cross-functional. I guess my main issue is with how we get there. I 
 believe in encouragement over punishment. In my mind giving people autonomy 
 and control encourages them to contribute more. 
 

I couldn't agree more. By having more autonomy we would definitely be able to
add more contributors to the team and people would feel more productive.
Nothing frustrates a developer more than having her/his code,
meaning days or weeks of passionate hard work, sitting for long periods in a 
review queue and not knowing if it will be accepted or not.

While this most probably won't affect the spirit of seasoned developers used
to large-project politics and bureaucracy, it will IMO definitely kill the
occasional contributor's effort or a junior developer's morale.

 In my opinion giving the driver developers control over their own code will 
 lead to higher quality drivers. Yes, we risk integration issues, lack of test 
 coverage, and buggy implementations, but it is my opinion that the increased 
 velocity that the developers will enjoy will mean faster bug fixes and more 
 opportunity to improve the drivers.
 

Here's my opinion on the driver quality side, which comes from personal
experience and from what I observed in other drivers' code, taking our Hyper-V
driver development history as an example:

*1st phase (final days of Folsom)*

We started our contribution around July 2012 by patching the code that used to
be in the tree before being kicked out in Essex (extremely buggy and basically
a collection of anti-patterns).
Since the above work was done in 2 weeks, the Folsom code obviously fits all
the categories that Vish points out, but that does not mean that we were not
aware of it:

 integration issues, lack of test coverage, and buggy implementations



*2nd phase (Grizzly)*

We heavily refactored the driver, getting rid almost entirely of the pre-Essex
code, and added the missing unit tests and new features to be on par with the
other main drivers.
With Grizzly we had a very low bug rate, excellent reviews from the users and,
in general, code that stands out for its quality.


*3rd phase (Havana)*

Lots of new features and some minor refactoring. Sadly this process almost
faltered due to the interaction and review issues with the Nova team that led
to the present discussions.
Some of the few bug fixes haven't even been reviewed in time for the Havana
cut; we'll be forced to tell distribution maintainers and users to cherry-pick
them from master or from our fork.


*4th phase (Icehouse, planned)*

The CI gate is the main focus, plus blueprints already implemented for Havana
that didn't make it, and a few more new features.


During the process (especially during the first 2 cycles) we learned a lot
about OpenStack and the Gerrit review system, thanks in particular to people in
the community who helped us get up to speed quickly with all the details.
I personally have to thank mainly Vish and Dan Smith for that. A separate
project would simplify those steps for a new driver today, as it would go
through stackforge incubation.

Why do I report all these historical project details here? Because the messy
Folsom code would never have passed today's review criteria, but if you look at
how things came out in the end, we got an excellent product aligned with
OpenStack's standards, lots of happy users and a fast-expanding community.

Drivers are IMO not part of the core of Nova, but completely separated and 
decoupled entities, which IMO should be treated that way. As a consequence, we 
frankly don't strictly feel as part of Nova, although some of us have a pretty 
strong understanding of how all the Nova pieces work.

Obliging driver (or other decoupled sub-component) developers to learn the 
entire Nova project before being able to contribute would just kill their 
effort before the start, resulting in a poorer ecosystem.

My suggestion is to have separate projects for each driver, a versioned Nova 
driver interface contract and separate teams for each driver (completely 
independent from Nova), with new drivers going through an incubation period on 
stackforge like any other new OpenStack project. 


 I also think lowering the amount of code that nova-core has to keep an eye on 
 will improve the review velocity of the rest of the code as well.
 
 Vish
 
 On Oct 15, 2013, at 4:36 PM, Sean Dague s...@dague.net wrote:
 
 On 10/15/2013 04:54 PM, Vishvananda Ishaya wrote:
 Hi Everyone,
 
 I've been following this conversation and weighing the different sides. 
 This is a tricky issue but I think it is important to decouple further and 
 extend our circle of trust.
 
 When nova started it was very easy to do feature 

Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread Łukasz Jernaś
On Wed, Oct 16, 2013 at 12:40 AM, Mathew R Odden mrod...@us.ibm.com wrote:
 As requested in the last Oslo meeting, I created a blueprint for further
 discussion on the i18n work in Icehouse here:

 https://blueprints.launchpad.net/oslo/+spec/i18n-messages

 There were some ideas from the meeting that I'm sure I wasn't quite fully
 understanding, so please take a look and let me know if there is any
 feedback.

Hi,

If I may include my 0.02€.

I'm still trying to wrap my head around the need for translating API
messages and log messages, as IMHO it adds a lot more problems for app
developers and log analysis tools; e.g. a log analysis tool would be
usable only for the locale it was developed for and would break with
_every_ update of the translations. As a translator myself, I don't want
to have to check every system in existence to see whether it uses my
messages for some sort of analysis, and I often change the strings to a
more proper form in my language as reviews and feedback on the
translation come in - even though the original English string doesn't
change at all.
I feel that translating mostly computer-facing stuff is just crying
out for bugs and weird issues popping up for users. APIs should be
readable by humans, but translating them is a bit too far in my opinion.
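
A tiny illustration of the log-analysis point (the log lines and the translation here are made up for the example, not taken from any real project):

    import re

    # The same hypothetical event, once in English and once translated.
    english = "ERROR nova.compute.manager Instance failed to spawn"
    german = "ERROR nova.compute.manager Instanz konnte nicht gestartet werden"

    # A log-analysis rule written against the English message ...
    pattern = re.compile(r"Instance failed to spawn")

    for line in (english, german):
        status = "matched" if pattern.search(line) else "missed"
        print("%s -> %s" % (status, line))
    # ... stops matching as soon as the message is translated, and breaks
    # again every time the translation itself is revised.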

If I get it right, the Message objects are supposed to move stuff
around internally in a C/en locale, but we will still end up dropping
translated messages to computers if they don't explicitly specify the
locale which the request should use...

Regards,
-- 
Łukasz [DeeJay1] Jernaś

P.S. Of course I may be wrong with all of that ;)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Reviews: tweaking priorities, and continual-deployment approvals

2013-10-16 Thread mar...@redhat.com
On 16/10/13 03:22, Robert Collins wrote:
 Hi, during the TripleO meeting today we had two distinct discussions
 about reviews.
 
 Firstly, our stats have been slipping:
 http://russellbryant.net/openstack-stats/tripleo-openreviews.html
 
 
 Stats since the last revision without -1 or -2 (ignoring jenkins):
 
 Average wait time: 2 days, 16 hours, 18 minutes
 1st quartile wait time: 0 days, 11 hours, 1 minutes
 Median wait time: 1 days, 9 hours, 37 minutes
 3rd quartile wait time: 5 days, 1 hours, 50 minutes
 
 
 Longest waiting reviews (based on oldest rev without nack, ignoring jenkins):
 
 7 days, 16 hours, 40 minutes https://review.openstack.org/50010 (Fix a
 couple of default config values)
 7 days, 4 hours, 21 minutes https://review.openstack.org/50199
 (Utilizie pypi-mirror from tripleo-cd)
 6 days, 2 hours, 28 minutes https://review.openstack.org/50431 (Make
 pypi-mirror more secure and robust)
 6 days, 1 hours, 36 minutes https://review.openstack.org/50750 (Remove
 obsolete redhat-eventlet.patch)
 5 days, 1 hours, 50 minutes https://review.openstack.org/51032
 (Updated from global requirements)
 
 This is holding everyone up, so we want to fix it. When we discussed
 it we found that there were two distinct issues:
  A - not enough cross-project reviews
  B - folk working on the kanban TripleO Continuous deployment stuff
 had backed off on reviews - and they are among the most prolific
 reviewers.
 
 A: Cross project reviews are super important: even if you are only
 really interested in (say) os-*-config, it's hard to think about
 things in context unless you're also up to date with changing code
 (and the design of code) in the rest of TripleO. *It doesn't matter*
 if you aren't confident enough to do a +2 - the only way you get that
 confidence is by reviewing and reading code so you can come up to
 speed, and the only way we increase our team bandwidth is through folk
 doing that in a consistent fashion.
 
 So please, whether your focus is Python APIs, UI, or system plumbing
 in the heart of diskimage-builder, please take the time to review
 systematically across all the projects:
 https://wiki.openstack.org/wiki/TripleO#Review_team
 
 B: While the progress we're making on delivering a production cloud is
 hugely cool, we need to keep our other responsibilities in check -
 https://wiki.openstack.org/wiki/TripleO#Team_responsibilities - is a
 new section I've added based on the meeting. Even folk working on the
 pointy end of the continuous delivery story need to keep pulling on
 the common responsibilities. We said in the meeting that we might
 triage it as follows:
  - review reviews for firedrills first. (Critical bugs, things
 breaking the CD cloud)
  - review reviews for the CD cloud
  - then all reviews for the program
 with a goal of driving them all to 0: if we're on top of things, that
 should never be a burden. If we run out of time, we'll have unblocked
 critical things first, unblocked folk working on the pointy edge
 second - bottlenecks are important to unblock. We'll review how this
 looks next week.
 


I can at least promise :) to become a more useful member of the team
over time; to be honest there's a lot of _new_ going on very quickly and
even the workflow is new to me (gerrit, blueprints, bugs, etc). The
discussion about triage on yesterday's call (and your email) definitely
makes it a more immediately accessible task for me to look at and the
clarification around +1/-1 also helps,

thanks, marios





 # The second thing
 
 The second issue was raised during the retrospective (which will be up
 at https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP1and2Retrospective
 a little later today). With a production environment, we want to ensure
 that only released code is running on it - running something from a
 pending review is something to avoid. But, the only way we've been
 able to effectively pull things together has been to run ahead of
 reviews :(. A big chunk of that is due to a lack of active +2
 reviewers collaborating with the CD cloud folk - we would get a -core
 putting a patch up, and a +2, but no second +2. We decided in the
 retrospective to try permitting -core to +2 their own patch if it's
 straightforward and part of the current CD story [or a firedrill]. We
 set an explicit 'but be sure you tested this first' criteria on that :
 so folk might try it locally, or even monkey patch it onto the cloud
 for one run to check it really works [until we have gating on the CD
 story/ies].
 
 Cheers,
 Rob
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Havana RC3 available

2013-10-16 Thread Thierry Carrez
Hi everyone,

Two critical issues were discovered in Cinder RC2 testing, including one
that affected the ability to upgrade from a Grizzly setup. We fixed
those issues and published a new Havana release candidate for OpenStack
Block Storage (Cinder).

You can find the RC3 tarball and the links to fixed bugs at:

https://launchpad.net/cinder/havana/havana-rc3

At this point it is very unlikely that we will release another RC for
Cinder, unless another last-minute release-critical regression is found.
This RC3 should therefore be formally included in the common OpenStack
2013.2 final release tomorrow. Please give this tarball a round of
last-minute sanity checks.

Alternatively, you can grab the code at:
https://github.com/openstack/cinder/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/cinder/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to get VXLAN Endpoint IP without agent

2013-10-16 Thread Mathieu Rohon
Hi,

Can you clarify your question? Without any l2-agent on your
compute host, your VMs won't be able to communicate.

regards

On Tue, Oct 15, 2013 at 12:45 PM, B Veera-B37207 b37...@freescale.com wrote:
 Hi,



 The VXLAN endpoint IP is configured in
 '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' as 'local_ip'.

 When the Open vSwitch agent starts, the above local IP is populated in the
 neutron database.



 Is there any way to get the local_ip of a compute node without any agent running?
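
 (A minimal sketch of one way to read that value straight from the plugin
 config file on the compute node, with no agent involved; it assumes local_ip
 is set in the ini file named above, and looks the section up rather than
 hard-coding its name:)

    try:
        import configparser                      # Python 3
    except ImportError:
        import ConfigParser as configparser      # Python 2

    CONF_FILE = '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'

    parser = configparser.ConfigParser()
    parser.read(CONF_FILE)

    # Find whichever section carries the local_ip option.
    local_ip = None
    for section in parser.sections():
        if parser.has_option(section, 'local_ip'):
            local_ip = parser.get(section, 'local_ip')
            break

    print(local_ip)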



 Thanks in advance.



 Regards,

 Veera.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Havana RC3 available

2013-10-16 Thread Thierry Carrez
Hi everyone,

One issue was discovered in Keystone RC2 testing, which prevented Heat's
usage of trusts from working. We decided to fix this specific issue
pre-release and published a new Havana release candidate for OpenStack
Identity (Keystone).

You can find the RC3 tarball and a link to the fixed bug at:

https://launchpad.net/keystone/havana/havana-rc3

At this point it is very unlikely that we will release another RC for
Keystone, unless a last-minute release-critical regression is found.
This RC3 should therefore be formally included in the common OpenStack
2013.2 final release tomorrow. Please give this tarball a round of
last-minute sanity checks.

Alternatively, you can grab the code at:
https://github.com/openstack/keystone/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/keystone/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Introducing Show my network state utility

2013-10-16 Thread Giuliano
Hi all!
I already posted this message in the general mailing list, but I write it
here too.
I've made a simple network utility and I'd like to share it with the
community!
It's called Show my network state and it's a graphical network topology
visualizer for a single host.
I made it to simplify the network management of OpenStack nodes. All those
OVS bridges, veth pairs and patch ports were simply a mess to read from the
command line, so I decided to build a visual dashboard to show them all.
The code is BSD licensed and hosted on GitHub.


Please let me know if you find it useful!
I'd also like to receive some feedback, errors, etc.

The Website is the following:
https://sites.google.com/site/showmynetworkstate/

Best regards,
Giuliano
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Steven Hardy
On Tue, Oct 15, 2013 at 09:21:12PM -0400, Mike Spreitzer wrote:
 Steve Baker sba...@redhat.com wrote on 10/15/2013 06:48:53 PM:
 
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org, 
  Date: 10/15/2013 06:51 PM
  Subject: [openstack-dev] [Heat] HOT Software configuration proposal
  
  I've just written some proposals to address Heat's HOT software 
  configuration needs, and I'd like to use this thread to get some 
 feedback:
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
 
 In that proposal, each component can use a different configuration 
 management tool.
 
  
 https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config
 
 
 In this proposal, I get the idea that it is intended that each Compute 
 instance run only one configuration management tool.  At least, most of 
 the text discusses the support (e.g., the idea that each CM tool supplies 
 userdata to bootstrap itself) in terms appropriate for a single CM tool 
 per instance; also, there is no discussion of combining userdata from 
 several CM tools.

IMO it makes no sense to use more than one CM tool on a particular
instance, apart from the case already mentioned by stevebaker where
cloud-init is used to bootstrap some other tool.

From my discussions with folks so far, they want:
- Something simple and declarative at the template interface
- A way to reuse data and knowledge from existing CM tools
  (Puppet/Chef/...) in a clean and non-complex way

 I agree with the separation of concerns issues that have been raised.  I 
 think all this software config stuff can be handled by a pre-processor 
 that takes an extended template in and outputs a plain template that can 
 be consumed by today's heat engine (no extension to the heat engine 
 necessary).

I think this is the exact opposite of the direction we should be headed.

IMO we should be abstracting the software configuration complexity behind a
Heat resource interface, not pushing it up to a pre-processor (which
implies some horribly complex interfaces at the heat template level)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Notifications from non-local exchanges

2013-10-16 Thread Julien Danjou
On Tue, Oct 15 2013, Sandy Walsh wrote:

 Hmm, I think I see your point. All the rabbit endpoints are determined
 by these switches:
 https://github.com/openstack/nova/blob/master/etc/nova/nova.conf.sample#L1532-L1592

 We will need a way in CM to pull from multiple rabbits.

This is a known limitation of the current Oslo RPC implementation. I
already raised it a while back when oslo.messaging was discussed. I
didn't check whether it was solved in oslo.messaging unfortunately, but
I hope it'll be something doable.

  I took another look at the Ceilometer config options...rabbit_hosts
 takes multiple hosts (i.e. rabbit.glance.hpcloud.net:, 
 rabbit.ceil.hpcloud.net:) 
 but it's not clear whether that's for publishing, collection, or both?  The 
 impl_kombu
 module does cycle through that list to create the connection pool, but it's 
 not
 clear to me how it all comes together in the plugin instantiation...

That's for fail-over etc. This is the same as in every OpenStack
project, as it comes from Oslo.
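
To illustrate the difference: the rabbit_hosts list gives fail-over (one live
connection at a time), whereas pulling notifications from several independent
brokers needs one connection per broker. A rough kombu sketch, with made-up
broker URLs and a made-up queue binding:

    import socket
    from kombu import Connection, Exchange, Queue

    # Hypothetical per-service brokers.
    BROKERS = [
        'amqp://guest:guest@rabbit.glance.example.net:5672//',
        'amqp://guest:guest@rabbit.ceil.example.net:5672//',
    ]

    exchange = Exchange('nova', type='topic', durable=False)
    queue = Queue('notifications.info', exchange,
                  routing_key='notifications.info', durable=False)

    def on_message(body, message):
        print('notification: %s' % body)
        message.ack()

    # One connection (and consume loop) per broker, unlike the single
    # fail-over connection built from rabbit_hosts.
    for url in BROKERS:
        with Connection(url) as conn:
            with conn.Consumer(queue, callbacks=[on_message]):
                try:
                    conn.drain_events(timeout=1)
                except socket.timeout:
                    pass  # nothing pending on this broker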

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How does the libvirt domain XML get created?

2013-10-16 Thread Daniel P. Berrange
On Tue, Oct 15, 2013 at 11:07:38PM -0500, Clark Laughlin wrote:
 
 I can see in config.py where VNC gets added (the graphics element),
 but I can't find any place where a video element gets added.  In
 fact, I've grepped the entire nova tree for cirrus or video and
 can only find it here:

It is added automatically by libvirt when an app provides a graphics
element but no explicit video element.
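
A small sketch of what that looks like from the application side (the XML
below is a deliberately minimal, hypothetical fragment, not Nova's actual
output):

    import xml.etree.ElementTree as ET

    # A guest definition that supplies a <graphics> element but no <video>.
    domain = ET.Element('domain', type='kvm')
    ET.SubElement(domain, 'name').text = 'example'
    devices = ET.SubElement(domain, 'devices')
    ET.SubElement(devices, 'graphics', type='vnc', autoport='yes')

    print(ET.tostring(domain))
    # When XML like this is passed to libvirt (defineXML/createXML), libvirt
    # itself fills in a default <video> device -- historically cirrus for
    # x86 QEMU/KVM guests -- which is why grepping the Nova tree for
    # "video" or "cirrus" finds nothing.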

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Alessandro Pilotti

On Oct 16, 2013, at 11:19, Robert Collins robe...@robertcollins.net wrote:

On 16 October 2013 20:14, Alessandro Pilotti apilo...@cloudbasesolutions.com wrote:


Drivers are IMO not part of the core of Nova, but completely separated and 
decoupled entities, which IMO should be treated that way. As a consequence, we 
frankly don't strictly feel as part of Nova, although some of us have a pretty 
strong understanding of how all the Nova pieces work.

I don't have a particular view on whether they *should be* separate
decoupled entities, but today I repeatedly hear concerns about the
impact of treating 'internal APIs' as stable things. That's relevant
because *if* nova drivers are to be separate decoupled entities, the
APIs they use - and expose - have to be treated as stable things with
graceful evolution, backwards compatibility etc. Doing anything else
will lead to deployment issues and race conditions in the gate.

Obliging driver (or other decoupled sub-component) developers to learn the 
entire Nova project before being able to contribute would just kill their 
effort before the start, resulting in a poorer ecosystem.

There are lots of different aspects of contribution: reviews (as
anybody), reviews (as core), code, bug assessment, design input,
translations, documentation etc. Nobody has said that you have to
learn everything before you contribute.

The /only item/ in that list that requires wide knowledge of the code
base is reviews (as core).

The difference between reviews (as anybody) and reviews (as core) is
that core is responsible for identifying things like duplicated
functionality, design and architectural issues - and for only
approving a patch when they are comfortable it doesn't introduce such
issues.

Core review is a bottleneck. When optimising systems with a bottleneck
there are only three things you can do to make the system work better:
- avoid the bottleneck
- increase capacity of the bottleneck
- make the bottleneck more efficient at the work it's given

Anything else will just end up backing up work behind the bottleneck
and make /no/ difference at all.
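
To put rough numbers on that bottleneck, using figures from elsewhere in the
thread (~170 patches landing per work week on nova, and at least 2 core
reviews per patch, closer to 3 once re-reviews are counted):

    patches_per_week = 170                 # Dan's estimate for nova
    for reviews_per_patch in (2, 3):
        needed = patches_per_week * reviews_per_patch
        print("%d core reviews/week at %d core reviews per patch"
              % (needed, reviews_per_patch))
    # Any real improvement has to change one of those two numbers, or
    # grow the pool of people able to provide the reviews.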

Your proposal (partition the code & reviewers by core/drivers) is an
'avoid the bottleneck' approach. It will remove some fraction of
reviews from the bottleneck - those that are per driver - at the cost
of losing /all/ of the feedback, architecture and design input that
the drivers currently benefit from, *and* removing from the nova core
any insight into the issues drivers are experiencing (because they
won't be reviewing driver code). Unless you have 'in-tree' and
'out-of-tree' drivers, and then one has to ask 'why are some drivers
out of tree'? The only answer for that so far is 'review latency is
hurting drivers'... but review latency is hurting *everyone*.

To increase bottleneck capacity we could ask -core reviewers to spend
more time reviewing. However that doesn't work because we're human -
we need time to do other things than think hard about other people's
code. There is an upper bound on effective time spent reviewing by
reviewers. Or we can increase capacity of the bottleneck by training
up more -core reviewers. That's pretty obvious :). However, training up
more reviewers requires more reviewers - so the cost here is that
someone needs to volunteer that time.

The bottleneck - core reviewers - can get through more reviews when
the reviews are easy to do. From prior conversations here is my list:
- keep the change small. Lots of LOC == a hard review, which consumes
-core review time
- get the review reviewed by non-core as soon as you put it up - this
will catch many of the issues -core would, and reduce re-work
- follow the style guides
- make your commit message really clear - there's a style guide for these too!

It seems to me that one can order the choices by their costs:
- provide patches that are easier to review [basically no cost: the
rules are already in place]
- train more -core reviewers [needs volunteer time]
- split the drivers out [lose coherency on tightly coupled things,
have to stabilise more interfaces, lose experienced input into driver
code]

And by their effectiveness [this is more subjective:)]
- train more -core reviewers [essentially linear, very easy to predict]
- provide patches that are easier to review [many patches are good
already, has a low upper bound on effectiveness]
- split the drivers out [won't help *at all* with changes required in
core to support a driver feature]

Finally, there is a key thing to note: as the number of contributors to the
project scales, patch volume scales. As patch volume scales, the pressure on
the bottleneck increases: we *have* to scale the -core review team [or
partition the code bases into two that will **both have a solid
reviewer community**].

Remember that every patch added requires *at minimum* 2 core reviews
[and I suspect in reality 3 core reviews - one to catch issues, then a
re-review, then a 

Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Thierry Carrez
Sean Dague wrote:
 The Linux kernel process works for a couple of reasons...
 
 1) the subsystem maintainers have known each other for a solid decade
 (i.e. 3x the lifespan of the OpenStack project), over a history of 10
 years, of people doing the right things, you build trust in their judgment.
 
 *no one* in the Linux tree was given trust first, under the hope that it
 would work out. They had to earn it, hard, by doing community work, and
 not just playing in their corner of the world.
 
 2) This
 http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is
 completely acceptable behavior. So when someone has bad code, they are
 flamed to within an inch of their life, repeatedly, until they never
 ever do that again. This is actually a time saving measure in code
 review. It's a lot faster to just call people idiots than to help them
 with line by line improvements in their code, 10, 20, 30, or 40
 iterations in gerrit.
 
 We, as a community have decided, I think rightly, that #2 really isn't
 in our culture. But you can't start cherry picking parts of the Linux
 kernel community without considering how all the parts work together.
 The good and the bad are part of why the whole system works.

This is an extremely important point in that discussion.

The Linux kernel model is built on a pyramidal model where Linus, in a
PTL+Release Manager role, has the final ability to refuse whole sections
of the code just because he doesn't like it.

Over two decades, Linus built a solid trust relationship with most
subsystem maintainers, so that he doesn't have to review every single
patch for sanity. In those areas he has a set of people who consistently
proved they would apply the same standards as he does. But for other
less-trusted areas the pyramidal model is still working.

I don't see how we could apply that to OpenStack as the trust
relationships are far from being that advanced (think: not old enough),
and I don't think we want to replicate the personified, pyramidal merge
model to handle the less-trusted relationships in the mean time.

You don't really want to develop the hyper-V driver in a private
subsystem branch all cycle, then at the very end have it rejected from
release by an empowered Russell or Thierry just because we think it's
not tested enough or we don't like the color it's been painted. This is
how the Linux kernel model works with untrusted subsystems -- by
subjecting your work to a final BDFL right to kill it at release time.

The other two alternatives are to accept the delays and work within Nova
(slowly building the trust that will give you more autonomy), or ship it
as a separate add-on that does not come with nova-core's signature on it.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] A new blueprint for Nova-scheduler: Policy-based Scheduler

2013-10-16 Thread Khanh-Toan Tran
Dear all,

I've registered a new blueprint for nova-scheduler. The purpose of the 
blueprint is to propose a new scheduler that is based on policy:

   https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler

With the current Filter_Scheduler, an admin cannot change his placement policy
without restarting nova-scheduler. Neither can he define a local policy for a
group of resources (say, an aggregate) or for a particular client. Thus we
propose this scheduler to provide the admin with the capability of
defining/changing his placement policy at runtime. The placement policy can be
global (concerning all resources), local (concerning a group of resources), or
tenant-specific.

Please don't hesitate to contact us for discussion, all your comments are 
welcomed!

Best regards,

Khanh-Toan TRAN
Cloudwatt
Email: khanh-toan.tran[at]cloudwatt.com
892 Rue Yves Kermen
92100 BOULOGNE-BILLANCOURT
FRANCE

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] 'ordereddict' requirement

2013-10-16 Thread stuart.mclaren

All,

There was a plan to use pypi's 'ordereddict' in Icehouse, to
replace how we're currently providing that functionality.

However, there are no ordereddict packages for Debian/Ubuntu
and there are no plans to provide them. (See Thomas Goirand's comment
here: https://review.openstack.org/#/c/48475/3/requirements.txt)

I think this means that it makes sense to stick with our current solution until
python 2.6 support is dropped. On that basis I've uploaded a change
to requirements to drop ordereddict: https://review.openstack.org/#/c/52053/
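
For context, the kind of compatibility shim in question is typically just an
import fallback (a sketch of the usual pattern, not the exact Glance code):

    # Prefer the stdlib class (Python >= 2.7); fall back to the pypi
    # 'ordereddict' backport only on Python 2.6.
    try:
        from collections import OrderedDict
    except ImportError:
        from ordereddict import OrderedDict

    d = OrderedDict()
    d['first'] = 1
    d['second'] = 2
    print(list(d.keys()))    # insertion order preserved: ['first', 'second']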

Please jump in with '-1's/'+1's as appropriate.

Thanks,

-Stuart

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Alessandro Pilotti

On Oct 16, 2013, at 13:19, Thierry Carrez thie...@openstack.org wrote:

Sean Dague wrote:
The Linux kernel process works for a couple of reasons...

1) the subsystem maintainers have known each other for a solid decade
(i.e. 3x the lifespan of the OpenStack project), over a history of 10
years, of people doing the right things, you build trust in their judgment.

*no one* in the Linux tree was given trust first, under the hope that it
would work out. They had to earn it, hard, by doing community work, and
not just playing in their corner of the world.

2) This
http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is
completely acceptable behavior. So when someone has bad code, they are
flamed to within an inch of their life, repeatedly, until they never
ever do that again. This is actually a time saving measure in code
review. It's a lot faster to just call people idiots than to help them
with line by line improvements in their code, 10, 20, 30, or 40
iterations in gerrit.

We, as a community have decided, I think rightly, that #2 really isn't
in our culture. But you can't start cherry picking parts of the Linux
kernel community without considering how all the parts work together.
The good and the bad are part of why the whole system works.

This is an extremely important point in that discussion.

The Linux kernel model is built on a pyramidal model where Linus, in a
PTL+Release Manager role, has the final ability to refuse whole sections
of the code just because he doesn't like it.

Over two decades, Linus built a solid trust relationship with most
subsystem maintainers, so that he doesn't have to review every single
patch for sanity. In those areas he has a set of people who consistently
proved they would apply the same standards as he does. But for other
less-trusted areas the pyramidal model is still working.

I don't see how we could apply that to OpenStack as the trust
relationships are far from being that advanced (think: not old enough),
and I don't think we want to replicate the personified, pyramidal merge
model to handle the less-trusted relationships in the mean time.


Younger projects at the bottom of the pyramid, especially kernel modules that 
we could consider equivalent to drivers, IMO cannot be based on such a long trust 
relationship due to their age.
As an example, well, the Hyper-V linux kernel LIS modules fit pretty well :-)

You don't really want to develop the hyper-V driver in a private
subsystem branch all cycle, then at the very end have it rejected from
release by an empowered Russell or Thierry just because we think it's
not tested enough or we don't like the color it's been painted. This is
how the Linux kernel model works with untrusted subsystems -- by
subjecting your work to a final BDFL right to kill it at release time.

The other two alternatives are to accept the delays and work within Nova
(slowly building the trust that will give you more autonomy), or ship it
as a separate add-on that does not come with nova-core's signature on it.


I never asked for a nova signature on it. My only requirement is that the 
project would be part of OpenStack and not an external project, even if this 
means passing 2 releases in incubation on stackforge as long as it can become 
part of the OpenStack core group of projects afterwards (if it meets the 
required OpenStack criteria of course).  
https://wiki.openstack.org/wiki/Governance/NewProjects

--
Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Daniel P. Berrange
Alessandro, please fix your email program so that it does not send
HTML email to the list, and correctly quotes text you are replying
to with '> '. Your reply comes out looking like this, which makes it
impossible to see who wrote what:

On Wed, Oct 16, 2013 at 10:42:45AM +, Alessandro Pilotti wrote:
 
 On Oct 16, 2013, at 13:19 , Thierry Carrez 
 thie...@openstack.orgmailto:thie...@openstack.org
  wrote:
 
 Sean Dague wrote:
 The Linux kernel process works for a couple of reasons...
 
 1) the subsystem maintainers have known each other for a solid decade
 (i.e. 3x the lifespan of the OpenStack project), over a history of 10
 years, of people doing the right things, you build trust in their judgment.
 
 *no one* in the Linux tree was given trust first, under the hope that it
 would work out. They had to earn it, hard, by doing community work, and
 not just playing in their corner of the world.
 
 2) This
 http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is
 completely acceptable behavior. So when someone has bad code, they are
 flamed to within an inch of their life, repeatedly, until they never
 ever do that again. This is actually a time saving measure in code
 review. It's a lot faster to just call people idiots than to help them
 with line by line improvements in their code, 10, 20, 30, or 40
 iterations in gerrit.
 
 We, as a community have decided, I think rightly, that #2 really isn't
 in our culture. But you can't start cherry picking parts of the Linux
 kernel community without considering how all the parts work together.
 The good and the bad are part of why the whole system works.
 
 This is an extremely important point in that discussion.
 
 The Linux kernel model is built on a pyramidal model where Linus, in a
 PTL+Release Manager role, has the final ability to refuse whole sections
 of the code just because he doesn't like it.
 
 Over two decades, Linus built a solid trust relationship with most
 subsystem maintainers, so that he doesn't have to review every single
 patch for sanity. In those areas he has a set of people who consistently
 proved they would apply the same standards as he does. But for other
 less-trusted areas the pyramidal model is still working.
 
 I don't see how we could apply that to OpenStack as the trust
 relationships are far from being that advanced (think: not old enough),
 and I don't think we want to replicate the personified, pyramidal merge
 model to handle the less-trusted relationships in the mean time.
 
 
 Younger projects at the bottom of the pyramid, especially kernel modules that 
 we could consider equivalent to drivers, IMO cannot be based on such a long 
 trust relationship due to their age.
 As an example, well, the Hyper-V linux kernel LIS modules fit pretty well :-)
 
 You don't really want to develop the hyper-V driver in a private
 subsystem branch all cycle, then at the very end have it rejected from
 release by an empowered Russell or Thierry just because we think it's
 not tested enough or we don't like the color it's been painted. This is
 how the Linux kernel model works with untrusted subsystems -- by
 subjecting your work to a final BDFL right to kill it at release time.
 
 The other two alternatives are to accept the delays and work within Nova
 (slowly building the trust that will give you more autonomy), or ship it
 as a separate add-on that does not come with nova-core's signature on it.
 
 
 I never asked for a nova signature on it. My only requirement is that the 
 project would be part of OpenStack and not an external project, even if this 
 means passing 2 releases in incubation on stackforge as long as it can become 
 part of the OpenStack core group of projects afterwards (if it meets the 
 required OpenStack criteria of course).  
 https://wiki.openstack.org/wiki/Governance/NewProjects
 
 --
 Thierry Carrez (ttx)


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Thierry Carrez
Alessandro Pilotti wrote:
 On Oct 16, 2013, at 13:19 , Thierry Carrez thie...@openstack.org
 The other two alternatives are to accept the delays and work within Nova
 (slowly building the trust that will give you more autonomy), or ship it
 as a separate add-on that does not come with nova-core's signature on it.
 
 I never asked for a nova signature on it. My only requirerement is that
 the project would be part of OpenStack and not an external project, even
 if this means passing 2 releases in incubation on stackforge as long as
 it can become part of the OpenStack core group of projects afterwards
 (if it meets the required OpenStack criteria of course).
  https://wiki.openstack.org/wiki/Governance/NewProjects

That's a possible outcome of the second alternative I described above.
The separate add-on could apply to the incubation track and potentially
be made a part of the integrated release.

My rant was in answer to Vish's "adopt something more similar to the
Linux model when dealing with subsystems" suggestion, where the
autonomous subsystem is still made part of Nova in the end, and
therefore carries nova-core's signature.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Alessandro Pilotti
Sorry about this guys; my OS X Mail client had no issues
displaying the proper indentation, so I never noticed it. Darn.

I did a test with Daniel via a private email to avoid spamming
the list for nothing. Hope it worked out here as well.

Thanks for the heads up.


On Oct 16, 2013, at 13:47 , Daniel P. Berrange berra...@redhat.com
 wrote:

 Alessandro, please fix your email program so that it does not send
 HTML email to the list, and correctly quotes text you are replying
 to with '> '. Your reply comes out looking like this, which makes it
 impossible to see who wrote what:
 
 On Wed, Oct 16, 2013 at 10:42:45AM +, Alessandro Pilotti wrote:
 
 On Oct 16, 2013, at 13:19 , Thierry Carrez 
 thie...@openstack.orgmailto:thie...@openstack.org
 wrote:
 
 Sean Dague wrote:
 The Linux kernel process works for a couple of reasons...
 
 1) the subsystem maintainers have known each other for a solid decade
 (i.e. 3x the lifespan of the OpenStack project), over a history of 10
 years, of people doing the right things, you build trust in their judgment.
 
 *no one* in the Linux tree was given trust first, under the hope that it
 would work out. They had to earn it, hard, by doing community work, and
 not just playing in their corner of the world.
 
 2) This
 http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is
 completely acceptable behavior. So when someone has bad code, they are
 flamed to within an inch of their life, repeatedly, until they never
 ever do that again. This is actually a time saving measure in code
 review. It's a lot faster to just call people idiots than to help them
 with line by line improvements in their code, 10, 20, 30, or 40
 iterations in gerrit.
 
 We, as a community have decided, I think rightly, that #2 really isn't
 in our culture. But you can't start cherry picking parts of the Linux
 kernel community without considering how all the parts work together.
 The good and the bad are part of why the whole system works.
 
 This is an extremely important point in that discussion.
 
 The Linux kernel model is built on a pyramidal model where Linus, in a
 PTL+Release Manager role, has the final ability to refuse whole sections
 of the code just because he doesn't like it.
 
 Over two decades, Linus built a solid trust relationship with most
 subsystem maintainers, so that he doesn't have to review every single
 patch for sanity. In those areas he has a set of people who consistently
 proved they would apply the same standards as he does. But for other
 less-trusted areas the pyramidal model is still working.
 
 I don't see how we could apply that to OpenStack as the trust
 relationships are far from being that advanced (think: not old enough),
 and I don't think we want to replicate the personified, pyramidal merge
 model to handle the less-trusted relationships in the mean time.
 
 
 Younger projects at the bottom of the pyramid, especially kernel modules 
 that we could consider equivalent to drivers, IMO cannot be based on such a 
 long trust relationship due to their age.
 As an example, well, the Hyper-V linux kernel LIS modules fit pretty well :-)
 
 You don't really want to develop the hyper-V driver in a private
 subsystem branch all cycle, then at the very end have it rejected from
 release by an empowered Russell or Thierry just because we think it's
 not tested enough or we don't like the color it's been painted. This is
 how the Linux kernel model works with untrusted subsystems -- by
 subjecting your work to a final BDFL right to kill it at release time.
 
 The other two alternatives are to accept the delays and work within Nova
 (slowly building the trust that will give you more autonomy), or ship it
 as a separate add-on that does not come with nova-core's signature on it.
 
 
 I never asked for a nova signature on it. My only requirement is that the 
 project would be part of OpenStack and not an external project, even if this 
 means passing 2 releases in incubation on stackforge as long as it can 
 become part of the OpenStack core group of projects afterwards (if it meets 
 the required OpenStack criteria of course).  
 https://wiki.openstack.org/wiki/Governance/NewProjects
 
 --
 Thierry Carrez (ttx)
 
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] A new blueprint for Nova-scheduler: Policy-based Scheduler

2013-10-16 Thread Alex Glikson
This sounds very similar to 
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers 

We worked on it in Havana, learned a lot from the feedback during the review 
cycle, and hopefully will finalize the details at the summit and will be 
able to continue & finish the implementation in Icehouse. It would be great 
to collaborate.

Regards,
Alex





From:   Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
To: openstack-dev@lists.openstack.org, 
Date:   16/10/2013 01:42 PM
Subject: [openstack-dev] [nova][scheduler] A new blueprint for 
Nova-scheduler: Policy-based Scheduler



Dear all,

I've registered a new blueprint for nova-scheduler. The purpose of the 
blueprint is to propose a new scheduler that is based on policy:

   https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler

With the current Filter_Scheduler, an admin cannot change his placement
policy without restarting nova-scheduler. Neither can he define a local
policy for a group of resources (say, an aggregate) or for a particular
client. Thus we propose this scheduler to provide the admin with the
capability of defining/changing his placement policy at runtime. The
placement policy can be global (concerning all resources), local
(concerning a group of resources), or tenant-specific.

Please don't hesitate to contact us for discussion, all your comments are 
welcomed!

Best regards,

Khanh-Toan TRAN
Cloudwatt
Email: khanh-toan.tran[at]cloudwatt.com
892 Rue Yves Kermen
92100 BOULOGNE-BILLANCOURT
FRANCE



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Christopher Yeoh
On Wed, Oct 16, 2013 at 6:49 PM, Robert Collins
robe...@robertcollins.net wrote:

 On 16 October 2013 20:14, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 

  Drivers are IMO not part of the core of Nova, but completely separated
 and decoupled entities, which IMO should be treated that way. As a
 consequence, we frankly don't strictly feel as part of Nova, although some
 of us have a pretty strong understanding of how all the Nova pieces work.

 I don't have a particular view on whether they *should be* separate
 decoupled entities, but today I repeatedly hear concerns about the
 impact of treating 'internal APIs' as stable things. That's relevant
 because *if* nova drivers are to be separate decoupled entities, the
 APIs they use - and expose - have to be treated as stable things with
 graceful evolution, backwards compatibility etc. Doing anything else
 will lead to deployment issues and race conditions in the gate.


+1 - I think we really want to have a strong preference for a stable API if
we start separating parts out (and this has been the case in the past from
what I can see). Otherwise we either end up with lots of pain in making
infrastructure changes or asymmetric gating, which is to be avoided wherever
possible.


 And by their effectiveness [this is more subjective:)]
  - train more -core reviewers [essentially linear, very easy to predict]
  - provide patches that are easier to review [many patches are good
 already, has a low upper bound on effectiveness]
  - split the drivers out [won't help *at all* with changes required in
 core to support a driver feature]


I'd like to add to that: better tools (which will help both core and
non-core reviewers). For example, rebase hell was mentioned in this thread. I
was in that a fair bit with the Nova v3 API changes where I'd have a long
series of dependent patches which would get fairly even review attention.
This sometimes had the unfortunate result that many in the series would end
up with a single +2. Not enough to merge, and the +2's would  get lost in
the inevitable rebase. Now perhaps as reviewers we should probably know
better to follow the dependency chain on reviews to review the changesets
with the least dependencies first, but we're only human and we don't always
remember to do that. So perhaps it'd be nice if gerrit or some other tool
showed changesets to review as a tree rather than a list. We might get more
changesets merged with the same number of reviews if the tools encouraged
the most efficient behaviour.
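
As a rough illustration of that "review the least-dependent changes first" 
idea, a tiny sketch that orders a patch series by dependency depth; the 
change-id mapping is invented and does not reflect Gerrit's actual data model 
or API:

    # Illustrative only: order a dependent patch series so the changes with
    # the fewest unreviewed ancestors surface first.

    def review_order(depends_on):
        """depends_on maps change-id -> parent change-id (None for the root)."""
        def depth(change):
            d = 0
            while depends_on.get(change):
                change = depends_on[change]
                d += 1
            return d
        return sorted(depends_on, key=depth)

    series = {'I001': None, 'I002': 'I001', 'I003': 'I002', 'I004': 'I001'}
    print(review_order(series))   # e.g. ['I001', 'I002', 'I004', 'I003']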

Another example is when you review a lot of patches the gerrit dashboard
doesn't seem to show all of the patches that you have reviewed. And I find
I get rather overwhelmed with the volume of email from gerrit with updates
of patches I've reviewed and so I find its not a great source of working
out what to review next. I'm sure I'm guilty of reviewing some patches and
then not getting back to them for a while because I've effectively lost
track of them (which is where an irc ping is appreciated). Perhaps better
tools could help here?

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Christopher Yeoh
On Wed, Oct 16, 2013 at 8:49 PM, Thierry Carrez thie...@openstack.org wrote:

 Sean Dague wrote:
  The Linux kernel process works for a couple of reasons...
 
  1) the subsystem maintainers have known each other for a solid decade
  (i.e. 3x the lifespan of the OpenStack project), over a history of 10
  years, of people doing the right things, you build trust in their
 judgment.
 
  *no one* in the Linux tree was given trust first, under the hope that it
  would work out. They had to earn it, hard, by doing community work, and
  not just playing in their corner of the world.
 
  2) This
  http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is
  completely acceptable behavior. So when someone has bad code, they are
  flamed to within an inch of their life, repeatedly, until they never
  ever do that again. This is actually a time saving measure in code
  review. It's a lot faster to just call people idiots than to help them
  with line by line improvements in their code, 10, 20, 30, or 40
  iterations in gerrit.
 
  We, as a community have decided, I think rightly, that #2 really isn't
  in our culture. But you can't start cherry picking parts of the Linux
  kernel community without considering how all the parts work together.
  The good and the bad are part of why the whole system works.

 This is an extremely important point in that discussion.

 The Linux kernel model is built on a pyramidal model where Linus, in a
 PTL+Release Manager role, has the final ability to refuse whole sections
 of the code just because he doesn't like it.

 Over two decades, Linus built a solid trust relationship with most
 subsystem maintainers, so that he doesn't have to review every single
 patch for sanity. In those areas he has a set of people who consistently
 proved they would apply the same standards as he does. But for other
 less-trusted areas the pyramidal model is still working.

 I don't see how we could apply that to OpenStack as the trust
 relationships are far from being that advanced (think: not old enough),
 and I don't think we want to replicate the personified, pyramidal merge
 model to handle the less-trusted relationships in the mean time.


The other thing to note is that it isn't all sunshine and roses in linux
kernel development either. IMO the bar for trusted subsystem maintainers is
much much higher in linux kernel development than for core reviewer status
for openstack projects. Also patches can take a long time to get review
attention on linux-kernel with long gaps between feedback depending on how
busy the maintainer is. With gerrit I think we're actually very good at
keeping track of patches and they are much less likely to get completely
lost. We certainly have much better stats on how responsive we are to
proposed patches.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Sean Dague

On 10/16/2013 01:19 AM, Alessandro Pilotti wrote:
snip


Sean, you got called out in the meeting not because you asked to put a
reference link to the specs which was perfectly reasonable, but because
after we did what you asked for in a timely manner, you didn't bother to
review the patch again until asked to please review it 6 days later!!!

This is a perfect example about why we need autonomy. We cannot leave a
patch starving in the review queue for a critical bug like that one!!


I -1ed the patch, you caught me on IRC and argued with me that the code 
didn't need to change. You had my undivided attention there for 30 
minutes on this patch, but used the time to argue against change. So I 
moved on to other things. Should I have gotten back around to my Nova 
review queue sooner, sure. However once you made the fix I no longer had 
a -1 on the patch, so I wasn't blocking it. And do I want to give up 30 
minutes of my time every time I try to review your patches because you'd 
rather argue than take feedback? Not really. I still do it. But I'll 
admit, a patch author that gives me less grief is a lot more fun to 
review. I'm only human in that regard.


14 days from bug filing to merge isn't starving - 
(https://bugs.launchpad.net/nova/+bug/1233853). If it's such a critical 
bug, how come it didn't expose until 4 weeks after feature freeze? If it 
was such a critical bug how did it get past your internal review process 
and land in tree in the first place? If it's such a critical bug why 
wasn't it brought up at the weekly *nova* meeting?


I really feel like you continue down the path of special pleading, 
without having used normal channels for things like this, which all 
exist. The nova meeting is a great place to highlight reviews you feel 
are critical that need eyes, and it happens every week on a regular 
schedule.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Zane Bitter

On 16/10/13 06:56, Mike Spreitzer wrote:

What is the difference between what today's heat engine does and a
workflow?  I am interested to hear what you experts think, I hope it
will be clarifying.  I presume the answers will touch on things like
error handling, state tracking, and updates.


(Disclaimer: I'm not an expert, I just work on this stuff ;)

First off, to be clear, it was my understanding from this thread that 
the original proposal to add workflow syntax to HOT is effectively dead. 
(If I'm mistaken, add a giant -1 from me.) Mirantis have since 
announced, I assume not coincidentally, that they will start 
implementing a workflow service (Mistral, based on the original 
Convection proposal from Keith Bray at the Havana summit) for OpenStack, 
backed by the taskflow library. So bringing workflows back in to this 
discussion is confusing the issue.


(FWIW I think that having a workflow service will be a great thing for 
other reasons, but I also hope that all of Stan's original example will 
be possible in Heat *without* resorting to requiring users to define an 
explicit workflow.)


It must be acknowledged that the Heat engine does run a workflow. The 
workflow part can in principle, and probably should, be delegated to the 
taskflow library, and I would be surprised if we did not eventually end 
up doing this (though I'm not looking forward to actually implementing it).


To answer your question, the key thing that Heat does is take in two 
declarative models and generate a workflow to transform one into the 
other. (The general case of this is a stack update, where the two models 
are defined in the previous and new templates. Stack create and delete 
are special cases where one or the other of the models is empty.)


Workflows don't belong in HOT because they are a one-off thing. You need 
a different one for every situation, and this is exactly why Heat exists 
- to infer the correct workflow to reify a model in any given situation.
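
As a toy illustration of that "two declarative models in, workflow out" idea 
(this is not Heat's actual update algorithm), treating each model as a plain 
mapping of resource name to definition:

    # Toy illustration only. Each model is a dict: resource name -> definition.

    def plan(old_model, new_model):
        actions = []
        for name in new_model:
            if name not in old_model:
                actions.append(('create', name))
            elif old_model[name] != new_model[name]:
                actions.append(('update', name))
        for name in old_model:
            if name not in new_model:
                actions.append(('delete', name))
        return actions

    # Create is just a diff against an empty model; delete is the reverse.
    print(plan({}, {'server': {'flavor': 'm1.small'}}))
    print(plan({'server': {'flavor': 'm1.small'}},
               {'server': {'flavor': 'm1.large'}, 'volume': {'size': 10}}))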


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] 'ordereddict' requirement

2013-10-16 Thread Monty Taylor


On 10/16/2013 07:40 AM, stuart.mcla...@hp.com wrote:
 All,
 
 There was a plan to use pypi's 'ordereddict' in Icehouse, to
 replace how we're currently providing that functionality.
 
 However, there are no ordereddict packages for Debian/Ubuntu
 and there are no plans to provide them. (See Thomas Goirand's comment
 here: https://review.openstack.org/#/c/48475/3/requirements.txt)

I don't think we need them.

Debian/Ubuntu are not packaging Icehouse for their 2.6-based releases
anyway, and 2.6 is the only platform on which you need to install
ordereddict.

We already have 2 such exclusions in pbr that trap things that don't
make sense to install on 2.7 but do on 2.6:

https://git.openstack.org/cgit/openstack-dev/pbr/tree/pbr/packaging.py#n52

adding a third would be less work than writing this email. :)

 I think this means that it makes sense to stick with our current
 solution until
 python 2.6 support is dropped. On that basis I've uploaded a change
 to requirements to drop ordereddict:
 https://review.openstack.org/#/c/52053/
 
 Please jump in with '-1's/'+1's as appropriate.

-1'd... I think moving forward with ordereddict on 2.6 and using
OrderedDict from collections on 2.7 is a fine plan.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Sean Dague

On 10/16/2013 01:45 AM, Vishvananda Ishaya wrote:

Hi Sean,

I'm going to top post because my response is general. I totally agree that we 
need people that understand the code base and we should encourage new people to 
be cross-functional. I guess my main issue is with how we get there. I believe 
in encouragment over punishment. In my mind giving people autonomy and control 
encourages them to contribute more.

In my opinion giving the driver developers control over their own code will 
lead to higher quality drivers. Yes, we risk integration issues, lack of test 
coverage, and buggy implementations, but it is my opinion that the increased 
velocity that the developers will enjoy will mean faster bug fixes and more 
opportunity to improve the drivers.


My experience reviewing a ton of driver code in grizzly makes me 
disagree 
(http://stackalytics.com/?release=grizzly&metric=marks&project_type=core&module=nova&company=&user_id=). 



Driver code tends to drift from the norm because the teams don't mix 
much with the rest of core. This makes it even harder to review their 
code because those teams aren't realizing what makes patches easy to 
review, by getting first hand experience reviewing lots of other 
people's code, and thinking to themselves "man, that was hard to wrap my 
head around, how would I do that better if it was my patch?"



I also think lowering the amount of code that nova-core has to keep an eye on 
will improve the review velocity of the rest of the code as well.


I think it would be at best a short term gain, completely offset by the 
fact that there are fewer eyes in the pool, and I don't think it would solve 
anything. If that were true the merge rate on smaller projects in 
OpenStack would far exceed Nova's, and the numbers don't support that. My 
experience on a bunch of smaller trees that I've got +2 on is that 
review starvation actually hits them much worse.


So, again, it's about perspective.

Can we do better on review turn around? sure.

Would it be better if -core team members were spending more time on 
reviews? yes.


Would it be better if everyone spent time on reviews? yes.

Will driver teams getting real CI results posted back help? definitely.

Will an updated Gerrit that lets us do better dashboards so we don't 
lose reviews help? yes.


But OpenStack still moves crazy fast, and making process changes with 
sweeping future implications is something that needs to not be done 
lightly. And there are lots of other things to be done to make this 
better, which all kinds of people can help with.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What validation feature is necessary for Nova v3 API

2013-10-16 Thread Christopher Yeoh
On Tue, Oct 15, 2013 at 5:44 PM, Kenichi Oomichi
oomi...@mxs.nes.nec.co.jp wrote:


 Hi,

 # I'm resending this because Gmail flagged my previous mail as spam.

 I'd like to know what validation features are really needed for the Nova v3 API,
 and I hope this mail will kick off some brainstorming about it.

  Introduction 
 I have submitted a blueprint nova-api-validation-fw.
 The purpose is comprehensive validation of API input parameters.
 32% of Nova v3 API parameters are not validated in any way [1], and that
 can cause an internal error if a client simply sends an invalid
 request. If an internal error happens, the error message is output to a
 log file and OpenStack operators have to research its cause. That is
 hard work for the operators.


We have tried to improve this for the V3 API but we still have a way to go.
I believe a validation framework like you have proposed would be very
useful - and clean up the extension code.


 In the Havana development cycle, I proposed the implementation code for the BP,
 but it was abandoned. The Nova web framework will move to Pecan/WSME, but my
 code depended on WSGI, so it would have merit in the short term but not
 in the long term.
 Now some Pecan/WSME sessions are proposed for the Hong Kong summit, so I feel
 this is a good chance to revisit the topic.


I proposed the Nova Pecan/WSME session for the summit, but I do have a few
reservations about whether the transition will be worth the pain I think
will be involved. So I don't think its by any means clear that Pecan/WSME
will be something we will do in Icehouse and your wsgi based implementation
could be what we want to go ahead with.


 For discussion, I have investigated all the validation methods of the current Nova v3
 API parameters. There are 79 API methods, and 49 methods use API parameters
 from a request body. In total, they have 148 API parameters. (details: [1])

 Necessary features, what I guess now, are the following:

  Basic Validation Feature 
 Through this investigation, it seems that we need some basic validation
 features such as:
 * Type validation
   str(name, ..), int(vcpus, ..), float(rxtx_factor), dict(metadata, ..),
   list(networks, ..), bool(conbine, ..), None(availability_zone)
 * String length validation
   1 - 255
 * Value range validation
   value = 0(rotation, ..), value  0(vcpus, ..),
   value = 1(os-multiple-create:min_count, os-multiple-create:max_count)
 * Data format validation
   * Pattern:
 uuid(volume_id, ..), boolean(on_shared_storage, ..),
 base64encoded(contents),
 ipv4(access_ip_v4, fixed_ip), ipv6(access_ip_v6)
   * Allowed list:
 'active' or 'error'(state), 'parent' or 'child'(cells.type),
 'MANUAL' or 'AUTO'(os-disk-config:disk_config), ...
   * Allowed string:
 not contain '!' and '.'(cells.name),
 contain [a-zA-Z0-9_.- ] only(flavor.name, flavor.id)
 * Mandatory validation
   * Required: server.name, flavor.name, ..
   * Optional: flavor.ephemeral, flavor.swap, ..


  Auxiliary Validation Feature 
 Some parameters have dependencies on other parameters.
 For example, name and/or availability_zone should be specified when updating an
 aggregate. There are only a few such cases, so the dependency validation
 feature would not be mandatory.

 The cases are the following:
 * Required if not specifying other:
   (update aggregate: name or availability_zone), (host: status or
 maintenance_mode),
   (server: os-block-device-mapping:block_device_mapping or image_ref)
 * Should not specify both:
   (interface_attachment: net_id and port_id),
   (server: fixed_ip and port)


These all sound useful.
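
As a rough illustration only (not the proposed framework itself), several of 
the basic checks listed above map naturally onto a JSON-Schema-style 
definition; the schema and the sample request bodies below are made up for 
this example:

    # Rough illustration only, using the jsonschema library.
    import jsonschema

    server_create = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            'vcpus': {'type': 'integer', 'minimum': 1},
            'access_ip_v4': {'type': 'string',
                             'pattern': r'^(\d{1,3}\.){3}\d{1,3}$'},
            'disk_config': {'enum': ['MANUAL', 'AUTO']},
        },
        'required': ['name'],
        'additionalProperties': False,
    }

    jsonschema.validate({'name': 'vm1', 'vcpus': 2}, server_create)  # passes

    try:
        jsonschema.validate({'name': '', 'vcpus': 0}, server_create)
    except jsonschema.ValidationError as exc:
        print('rejected: %s' % exc.message)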


  API Documentation Feature 
 WSME has a unique feature which generates API documentation from source
 code.
 The documentation (
 http://docs.openstack.org/developer/ceilometer/webapi/v2.html)
 contains:
 * Method, URL (GET /v2/resources/, etc)
 * Parameters
 * Return type
 * Parameter samples of both JSON and XML


Do you know if the production of JSON/XML samples and integration of them
into the api documentation
is all autogenerated via wsme?

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Consolidation for Manager and Resource classes

2013-10-16 Thread Endre Karlson
Has anyone looked into making an effort to consolidate the different
implementations of these classes?

Doing a short walk-through I see:

Manager
  * Has a typical kind of API (server, lb, network, subnet) which it
interacts with and returns instances of a result as a Resource
Resource
  * Represents an instance of an object.

# Nova
https://github.com/openstack/python-novaclient/

https://github.com/openstack/python-novaclient/blob/master/novaclient/base.py

# Neutron
https://github.com/openstack/python-neutronclient
N/A?

# Glance
https://github.com/openstack/python-glanceclient

https://github.com/openstack/python-glanceclient/blob/master/glanceclient/common/base.py

# Keystone
https://github.com/openstack/python-keystoneclient/

https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/base.py

# Cinder
https://github.com/openstack/python-cinderclient/

https://github.com/openstack/python-cinderclient/blob/master/cinderclient/base.py

# Ceilometer
https://github.com/openstack/python-ceilometerclient/

https://github.com/openstack/python-ceilometerclient/blob/master/ceilometerclient/common/base.py

# Heat
https://github.com/openstack/python-heatclient

https://github.com/openstack/python-heatclient/blob/master/heatclient/common/base.py

# Ironic
https://github.com/openstack/python-ironicclient

https://github.com/openstack/python-ironicclient/blob/master/ironicclient/common/base.py

# Tuskar
https://github.com/openstack/python-tuskarclient

https://github.com/openstack/python-tuskarclient/blob/master/tuskarclient/common/base.py

# Trove
https://github.com/openstack/python-troveclient

https://github.com/openstack/python-troveclient/blob/master/troveclient/base.py

# Marconi
https://github.com/openstack/python-marconiclient
N/A?

# Savanna
https://github.com/openstack/python-savannaclient

https://github.com/openstack/python-savannaclient/blob/master/savannaclient/api/base.py

# Manila
https://github.com/stackforge/python-manilaclient

https://github.com/stackforge/python-manilaclient/blob/master/manilaclient/base.py


They are all doing the same thing, so why not put them into a common place?
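
For illustration, the shared shape is roughly the following; the class and 
method names are invented for the example (the http_client is hypothetical) 
and are not a concrete proposal for a common library's API:

    # Illustrative sketch of the shape these client libraries share.

    class Resource(object):
        """One instance of an API object (a server, a volume, ...)."""
        def __init__(self, manager, info):
            self.manager = manager
            self._info = info
            for key, value in info.items():
                setattr(self, key, value)

    class Manager(object):
        """Talks to one API collection and returns Resource objects."""
        resource_class = Resource
        collection = None              # e.g. '/servers'

        def __init__(self, http_client):
            self.client = http_client

        def list(self):
            body = self.client.get(self.collection)
            key = self.collection.strip('/')
            return [self.resource_class(self, item) for item in body[key]]

        def get(self, resource_id):
            body = self.client.get('%s/%s' % (self.collection, resource_id))
            return self.resource_class(self, body)

        def update(self, resource_id, **kwargs):
            # The clients differ slightly here, e.g. Ironic uses PATCH where
            # most others use PUT, so this is one spot a shared implementation
            # would need to parameterise.
            return self.client.put('%s/%s' % (self.collection, resource_id),
                                   kwargs)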

Endre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Alessandro Pilotti

On Oct 16, 2013, at 15:16 , Sean Dague s...@dague.net
 wrote:

 On 10/16/2013 01:19 AM, Alessandro Pilotti wrote:
 snip
 
 Sean, you got called out in the meeting not because you asked to put a
 reference link to the specs which was perfectly reasonable, but because
 after we did what you asked for in a timely manner, you didn't bother to
 review the patch again until asked to please review it 6 days later!!!
 
 This is a perfect example about why we need autonomy. We cannot leave a
 patch starving in the review queue for a critical bug like that one!!
 
 I -1ed the patch, you caught me on IRC and argued with me that the code 
 didn't need to change. You had my undivided attention there for 30 minutes on 
 this patch, but used the time to argue against change. So I moved on to other 
 things. Should I have gotten back around to my Nova review queue sooner, 
 sure. However once you made the fix I no longer had a -1 on the patch, so I 
 wasn't blocking it. And do I want to give up 30 minutes of my time every time 
 I try to review your patches because you'd rather argue than take feedback? 
 Not really. I still do it. But I'll admit, a patch author that gives me less 
 grief is a lot more fun to review. I'm only human in that regard.
 

I beg for forgiveness for not obeying you on the spot and daring to discuss 
your -1, Master! ;-)

Jokes aside, this is actually bringing up another important point in the review 
system:

When somebody (especially a core reviewer) puts a -1 and a new patch is 
committed to address it, 
I noticed that other reviewers wait for the guy that put the -1 to say 
something before +1/+2'ing it. 

My feeling on this is that if somebody reviews a patch (positively or 
negatively) he/she should also
keep up with it (in a timely manner) until it is merged, or clearly state that 
there's no interest in reviewing it further.
This is especially true for core revs as other reviewers tend to be shy and 
avoid contradicting a core rev,
generating further delays. 

What do you guys think? 

Does it make sense to brainstorm constructively on a way to reduce the review 
lags? 
The review system itself is IMO already providing an excellent starting point, 
we just need to tweak it a bit. :-) 


 14 days from bug filing to merge isn't starving - 
 (https://bugs.launchpad.net/nova/+bug/1233853). If it's such a critical bug, 
 how come it didn't expose until 4 weeks after feature freeze? If it was such 
 a critical bug how did it get past your internal review process and land in 
 tree in the first place? If it's such a critical bug why wasn't it brought up 
 at the weekly *nova* meeting?
 

Because thanks to the gorgeous H3 phase we got all BPs merged together on the H3 
freeze deadline, 
and only afterwards did people have the opportunity to test that huge amount of code 
and report bugs?

14 days is IMO a preposterously long wait when you have a dedicated team and a 
fix ready, 
but hey, it's a matter of perspective I guess. 

 I really feel like you continue down the path of special pleading, without 
 having used normal channels for things like this, which all exist. The nova 
 meeting is a great place to highlight reviews you feel are critical that need 
 eyes, and it happens every week on a regular schedule.

#OpenStack-Nova, ML and triaging bugs aren't normal channels?
The way it is going, I'd have to bring every single bug and patch to the meeting! 


 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python-coverage: path change from /usr/bin/coverage to /usr/bin/python-coverage

2013-10-16 Thread Thomas Goirand
Hi there,

It appears that in Debian, python-coverage provides the wrapper in
/usr/bin/python-coverage. I tried to push the current maintainer to
provide /usr/bin/coverage, but he doesn't agree. He believes that
coverage is just too generic to be squatted by the python-coverage
package.

Robert Collins wrote that he sees it as OK-ish if all of the OpenStack
projects make it so that we could also use /usr/bin/python-coverage.
What is the view of others in the project? Could the path be checked,
and then used, so that it works in every case? Of course, the goal
would be to avoid patching by hand in debian/patches whenever
possible, because that is a major pain.

Your thoughts?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python-coverage: path change from /usr/bin/coverage to /usr/bin/python-coverage

2013-10-16 Thread Alex Gaynor
It seems to me the much easier solution is to just always install
coverage.py into a virtualenv, then we don't have to worry at all about
operating-system politics.

Alex


On Wed, Oct 16, 2013 at 6:05 AM, Thomas Goirand z...@debian.org wrote:

 Hi there,

 It appears that in Debian, python-coverage provides the wrapper in
 /usr/bin/python-coverage. I tried to push the current maintainer to
 provide /usr/bin/coverage, but he doesn't agree. He believes that
 coverage is just too generic to be squatted by the python-coverage
 package.

 Robert Colins wrote that he sees it ok-ish if all of the OpenStack
 projects makes it so that we could also use /usr/bin/python-coverage.
 What is the view of others in the project? Could the path be checked,
 and then used, so that it works in every cases? Of course, the goal
 would be to avoid by hand patching in debian/patches whenever
 possible, because this is a major pain.

 Your thoughts?

 Cheers,

 Thomas Goirand (zigo)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Zane Bitter

On 16/10/13 00:48, Steve Baker wrote:

I've just written some proposals to address Heat's HOT software
configuration needs, and I'd like to use this thread to get some feedback:
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config


Wow, nice job, thanks for writing all of this up :)


Please read the proposals and reply to the list with any comments or
suggestions.


For me the crucial question is, how do we define the interface for 
synchronising and passing data from and to arbitrary applications 
running under an arbitrary configuration management system?


Compared to this, defining the actual format in which software 
applications are specified in HOT seems like a Simple Matter of 
Bikeshedding ;)


(BTW +1 for not having the relationships, hosted_on always reminded me 
uncomfortably of INTERCAL[1]. We already have DependsOn for resources 
though, and might well need it here too.)


I'm not a big fan of having Heat::Puppet, Heat::CloudInit, Heat::Ansible 
etc. component types insofar as they require your cloud provider to 
support your preferred configuration management system before you can 
use it. (In contrast, it's much easier to teach your configuration 
management system about Heat because you control it yourself, and 
configuration management systems are already designed for plugging in 
arbitrary applications.)


I'd love to be able to put this control in the user's hands by just 
using provider templates - i.e. you designate PuppetServer.yaml as the 
provider for an OS::Nova::Server in your template and it knows how to 
configure Puppet and handle the various components. We could make 
available a library of such provider templates, but users wouldn't be 
limited to only using those.


cheers,
Zane.


[1] https://en.wikipedia.org/wiki/COMEFROM

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Daniel P. Berrange
On Wed, Oct 16, 2013 at 12:59:26PM +, Alessandro Pilotti wrote:
 
 When somebody (especially a core reviewer) puts a -1 and a new patch is 
 committed to address it, 
 I noticed that other reviewers wait for the guy that put the -1 to say 
 something before +1/+2 it. 

I think that depends on the scope of the change the reviewer asked for. It is
normally easy for any other reviewer to identify whether the -1 was properly
addressed and as such there's no need to block on the original reviewer
adding +1. Any core reviewer should be capable of evaluating if a review
point was addressed. Only if the code change was fairly complicated and/or
controversial might it be worth blocking on the original reviewer. I tend to
take such a pragmatic approach when considering whether to wait for the original
reviewer to add a +1 or not.

 My feeling on this is that if somebody reviews a patch (positively or 
 negatively)
 he/she should also keep on with it (in a timely manner) until it is merged or
 clearly stating that there's no interest in reviewing it further. This is 
 especially
 true for core revs as other reviewers tend to be shy and avoid contradicting 
 a core
 rev, generating further delays. 

As above, I don't think we should block on waiting for original reviewers
to review followups, nor require them to, as it is an inefficient way
of working. Any core reviewer should be capable of reviewing any patch
at any stage of its life, unless it is a very controversial change. Forcing
reviewers to keep up with all versions of a patch will never work out in
practice whether we want it or not.

Non-core reviewers should be encouraged to speak up - by doing so it will
improve the quality of reviews and help us identify non-core reviewers who
are good candidates for promotion.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ml2] Canceling today's ML2 meeting

2013-10-16 Thread Kyle Mestery (kmestery)
Folks:

A few of the key people cannot make today's ML2 meeting, so I'm going to cancel 
it today. If you haven't filed your design summit sessions on ML2, please do so 
by tomorrow! We have a list of them on the ML2 meeting page here [1] which we 
collected over the past two weeks.

Also, Sukhdev has posted a review [2] of the updated Installation Guide which 
includes ML2 plugin information. Getting some eyes on that would be great as 
well!

Thanks, and we'll see everyone at next week's Neutron ML2 meeting!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/ML2
[2]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Consolidation for Manager and Resource classes

2013-10-16 Thread Lucas Alvares Gomes
+1 to consolidate.

They are all doing the same thing, so why not put them into a common place?


*almost* the same thing; there are some small differences, e.g.
Ironic uses PATCH for updates instead of PUT.

Cheers,
Lucas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ml2] Canceling today's ML2 meeting

2013-10-16 Thread Kyle Mestery (kmestery)
Whoops, hit send too quickly. The Url for [2] is: 
https://review.openstack.org/#/c/51992/

On Oct 16, 2013, at 8:19 AM, Kyle Mestery (kmestery) kmest...@cisco.com wrote:

 Folks:
 
 A few of the key people cannot make today's ML2 meeting, so I'm going to 
 cancel it today. If you haven't filed your design summit sessions on ML2, 
 please do so by tomorrow! We have a list of them on the ML2 meeting page here 
 [1] which we collected over the past two weeks.
 
 Also, Sukhdev has posted a review [2] of the updated Installation Guide which 
 includes ML2 plugin information. Getting some eyes on that would be great as 
 well!
 
 Thanks, and we'll see everyone at next week's Neutron ML2 meeting!
 Kyle
 
 [1] https://wiki.openstack.org/wiki/Meetings/ML2
 [2]
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Consolidation for Manager and Resource classes

2013-10-16 Thread Endre Karlson
I can see though that there is an apiclient thing in oslo-incubator; would
it be an idea to name this oslo.client instead of having to copy this in
like other oslo stuff?

Endre


2013/10/16 Lucas Alvares Gomes lucasago...@gmail.com

 +1 to consolidate.

 They are all doing the same thing, so why not put them into a common place?


 *almost* the same thing, there's some small differences, one e.g is that
 Ironic use PATH for the update instead of PUT.

 Cheers,
 Lucas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Mike Spreitzer
Steven Hardy sha...@redhat.com wrote on 10/16/2013 04:11:40 AM:
 ...
 IMO we should be abstracting the software configuration complexity 
behind a
 Heat resource interface, not pushing it up to a pre-processor (which
 implies some horribly complex interfaces at the heat template level)

I am not sure I follow.  Can you please elaborate on the horrible 
implication?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Dan Smith
 +1 - I think we really want to have a strong preference for a stable 
 api if we start separating parts out

So, as someone who is about to break the driver API all to hell over the
next six months (er, I mean, make some significant changes), I can tell
you that making it stable is the best way to kill velocity right now. We
are a young project with a lot of work yet to do. Making the driver API
stable at this point in the process, especially because just one driver
wants to be out of tree, is going to be a huge problem.

 Otherwise we either end up with lots of pain in making
 infrastructure changes or asymmetric gating which is to be avoided
 wherever possible.

AFAICT, this is pain that would be experienced by the out-of-tree driver
and pain which has been called out specifically by the authors as
better than the alternative.

Seriously, putting the brakes on the virt api right now because one
driver wants to be out of tree is a huge problem. I fully support the
hyper-v taking itself out-of-tree if it wants, but I don't think that
means we can or should eject the others and move to a stable virt api.
At least not anytime soon.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python-coverage: path change from /usr/bin/coverage to /usr/bin/python-coverage

2013-10-16 Thread Robert Collins
I also suggested using 'python -m coverage' which should work
everywhere without issue :)
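
A minimal illustration of driving it that way from a small wrapper script 
(the test-runner arguments are made up for the example):

    # Invoking coverage via 'python -m coverage' sidesteps the
    # /usr/bin/coverage vs /usr/bin/python-coverage naming issue.
    import subprocess
    import sys

    subprocess.check_call([sys.executable, '-m', 'coverage', 'run',
                           '-m', 'unittest', 'discover'])
    subprocess.check_call([sys.executable, '-m', 'coverage', 'report'])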

-Rob

On 17 October 2013 02:05, Thomas Goirand z...@debian.org wrote:
 Hi there,

 It appears that in Debian, python-coverage provides the wrapper in
 /usr/bin/python-coverage. I tried to push the current maintainer to
 provide /usr/bin/coverage, but he doesn't agree. He believes that
 coverage is just too generic to be squatted by the python-coverage
 package.

 Robert Colins wrote that he sees it ok-ish if all of the OpenStack
 projects makes it so that we could also use /usr/bin/python-coverage.
 What is the view of others in the project? Could the path be checked,
 and then used, so that it works in every cases? Of course, the goal
 would be to avoid by hand patching in debian/patches whenever
 possible, because this is a major pain.

 Your thoughts?

 Cheers,

 Thomas Goirand (zigo)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-16 Thread Oleg Gelbukh
Tim,

Regarding this discussion, now there is at least a plan in Heat to allow
management of VMs not launched by that service:
https://blueprints.launchpad.net/heat/+spec/adopt-stack

So hopefully in the future HARestarter will allow us to support medium
availability for all types of instances.
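
As a rough sketch of the kind of site-specific evacuation script discussed 
further down this thread (heavily simplified; it assumes a python-novaclient 
whose servers manager exposes list() and evacuate() with an on_shared_storage 
flag, so the exact calls should be checked against the client version actually 
deployed; host names and the Keystone URL are made up):

    from novaclient.v1_1 import client

    def evacuate_host(nova, failed_host, target_host, on_shared_storage=True):
        # Find every instance the scheduler placed on the failed host.
        instances = nova.servers.list(
            search_opts={'host': failed_host, 'all_tenants': 1})
        for server in instances:
            # Per-instance policy decisions (restart, leave down, ...) go here.
            nova.servers.evacuate(server, target_host, on_shared_storage)

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone.example.com:5000/v2.0/')
    evacuate_host(nova, failed_host='compute-07', target_host='compute-08')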

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Wed, Oct 9, 2013 at 3:28 PM, Tim Bell tim.b...@cern.ch wrote:

 Would the HARestarter approach work for VMs which were not launched by
 Heat ?

 We expect to have some applications driven by Heat but lots of others
 would not be (especially the more 'pet'-like traditional workloads).

 Tim

 From: Oleg Gelbukh [mailto:ogelb...@mirantis.com]
 Sent: 09 October 2013 13:01
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [nova] automatically evacuate instances on
 compute failure

 Hello,

 We have much interest in this discussion (with focus on second scenario
 outlined by Tim), and working on its design at the moment. Thanks to
 everyone for valuable insights in this thread.

 It looks like external orchestration daemon problem is partially solved
 already by Heat with HARestarter resource [1].

 Hypervisor failure detection is also more or less solved problem in Nova
 [2]. There are other candidates for that task as well, like Ceilometer's
 hardware agent [3] (still WIP to my knowledge).

 [1]
 https://github.com/openstack/heat/blob/stable/grizzly/heat/engine/resources/instance.py#L35
 [2]
 http://docs.openstack.org/developer/nova/api/nova.api.openstack.compute.contrib.hypervisors.html#module-nova.api.openstack.compute.contrib.hypervisors
 [3]
 https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs

 On Wed, Oct 9, 2013 at 9:26 AM, Tim Bell tim.b...@cern.ch wrote:
 I have proposed the summit design session for Hong Kong (
 http://summit.openstack.org/cfp/details/103) to discuss exactly these
 sort of points. We have the low level Nova commands but need a service to
 automate the process.

 I see two scenarios

 - A hardware intervention needs to be scheduled, please rebalance this
 workload elsewhere before it fails completely
 - A hypervisor has failed, please recover what you can using shared
 storage and give me a policy on what to do with the other VMs (restart,
 leave down till repair etc.)

 Most OpenStack production sites have some sort of script doing this sort
 of thing now. However, each one will be implementing the logic for
 migration differently so there is no agreed best practice approach.

 Tim

  -Original Message-
  From: Chris Friesen [mailto:chris.frie...@windriver.com]
  Sent: 09 October 2013 00:48
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova] automatically evacuate instances on
 compute failure
 
  On 10/08/2013 03:20 PM, Alex Glikson wrote:
   Seems that this can be broken into 3 incremental pieces. First, would
   be great if the ability to schedule a single 'evacuate' would be
   finally merged
   (_
 https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance_
 ).
 
  Agreed.
 
   Then, it would make sense to have the logic that evacuates an entire
   host
   (_
 https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host_
 ).
   The reasoning behind suggesting that this should not necessarily be in
   Nova is, perhaps, that it *can* be implemented outside Nova using the
   indvidual 'evacuate' API.
 
  This actually more-or-less exists already in the existing nova
 host-evacuate command.  One major issue with this however is that it
  requires the caller to specify whether all the instances are on shared
 or local storage, and so it can't handle a mix of local and shared
  storage for the instances.   If any of them boot off block storage for
  instance you need to move them first and then do the remaining ones as a
 group.
 
  It would be nice to embed the knowledge of whether or not an instance is
 on shared storage in the instance itself at creation time.  I
  envision specifying this in the config file for the compute manager
 along with the instance storage location, and the compute manager
  could set the field in the instance at creation time.
 
   Finally, it should be possible to close the loop and invoke the
   evacuation automatically as a result of a failure detection (not clear
   how exactly this would work, though). Hopefully we will have at least
   the first part merged soon (not sure if anyone is actively working on
   a rebase).
 
  My interpretation of the discussion so far is that the nova maintainers
 would prefer this to be driven by an outside orchestration daemon.
 
  Currently the only way a service is recognized to be down is if
 someone calls is_up() and it notices that the service hasn't sent an update
  in the last minute.  There's nothing in nova actively scanning for
 compute node failures, which is where the outside daemon comes in.
 
  Also, there is 

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Thomas Spatzier
Hi Steve,

thanks a lot for taking the effort to write all this down. I had a look at both
wiki pages and have some comments below. This is really from the top of my
head, and I guess I have to spend some more time thinking about it, but I
wanted to provide some feedback anyway.

On components vs. resources:
So the proposal says clearly that the resource concept is only used for
things that get accessed and managed via their APIs, i.e. services provided
by something external to Heat (nova, cinder, etc), while software is
different and therefore modeled as components, which is basically fine (and
I also suggested this in my initial proposal .. but was never quite sure).
Anyway, I think we also need some APIs to access software components (not
the actual installed software, but the provider managing it), so we can get
the state of a component, and probably also manage the state to do
meaningful orchestration. That would bring it close to the resource concept
again, or components (the providers) would have to provide some means for
getting access to state etc.

Why no processing of intrinsic functions in config block?
... that was actually a question that came up when I read this first, but
maybe it is resolved by some text further down in the wiki. But I wanted to
ask for clarification. I thought having intrinsic functions could be helpful
for passing parameters around, and also implying dependencies. A bit
further down some concept for parameter passing to the config providers is
introduced, and for filling the parameters, intrinsic functions can be
used. So do I get it right that this would enable the dependency building
and data passing?

Regarding pointer from a server's components section to components vs. a
hosted_on relationship:
The current proposal is in fact (or probably) isomorphic to the hosted_on
links from my earlier proposal. However, having pointers from the lower
layer (servers) to the upper layers (software) seems a bit odd to me. It
would be really nice to get clean decoupling of software and infrastructure
and not just the ability to copy and paste the components and then having
to define server resources specifically to point to components. The
ultimate goal would be to have app layer models and infrastructure models
(e.g. using the environments concept and provider resources) and some way
of binding app components to one or multiple servers per deployment (single
server in test, clustered in production).
Maybe some layer in between is necessary, because neither my earlier
hosted_on proposal nor the current proposal does that.

Why no depends_on (or just dependency) between components?
Ordering in components is ok, but I think it should be possible to express
dependencies between components across servers. Whether or not a
depends_on relationship is the right thing to express this, or just a
more simple dependency notation can be discussed, but I think we need
something. In my approach I tried to come up with one section
(relationship) that is the place for specifying all sorts of links,
dependency being one, just to come up with one extensible way of expressing
things.
Anyway, having the ability to manage dependencies by Heat seems necessary.
And I would not pass the ball completely to the other tools outside of
Heat. First of all, doing things in those other tools also gets complicated
(e.g. while chef is good on one server, doing synchronization across
servers can get ugly). And Heat has the ultimate knowledge about what
servers it created, their IP addresses etc, so it should lead
orchestration.

Regarding component_execution: async
Is this necessary? I think in any case, it should be possible to create
infrastructure resources (servers) in parallel. Then only the component
startup should be synchronized once the servers are up, and this should be
the default behavior. I think this actually relates to the dependency topic
above. BTW, even component startup inside servers can be done in parallel
unless components have dependencies on each other, so doing a component
startup in the strict order as given in a list in the template is probably
not necessary.

Regarding the wait condition example:
I get the idea and it surely would work, but I think it is still
un-intuitive and we should think about a more abstract declarative way for
expressing such use cases.

Regarding the native tool bootstrap config proposal:
I agree with other comments already made on this thread that the sheer
number of different config components seems too much. I guess for users it
will be hard to understand which one to use when, what combination makes
sense, in which order they have to be combined etc. Especially, when things
are getting combined, my gut feeling is that the likelihood of templates
breaking whenever some of the underlying implementation changes will
increase.

Steve Baker sba...@redhat.com wrote on 16.10.2013 00:48:53:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 

Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Christopher Yeoh
On Thu, Oct 17, 2013 at 12:21 AM, Dan Smith d...@danplanet.com wrote:

  +1 - I think we really want to have a strong preference for a stable
  api if we start separating parts out

 So, as someone who is about to break the driver API all to hell over the
 next six months (er, I mean, make some significant changes), I can tell
 you that making it stable is the best way to kill velocity right now. We
 are a young project with a lot of work yet to do. Making the driver API
 stable at this point in the process, especially because just one driver
 wants to be out of tree, is going to be a huge problem.


Yes I agree. I just think if the internal API is not yet considered stable
it's a sign we should
not be splitting the dependent bits out.


  Otherwise we either end up with lots of pain in making
  infrastructure changes or asymmetric gating which is to be avoided
  wherever possible.

 AFAICT, this is pain that would be experienced by the out-of-tree driver
 and pain which has been called out specifically by the authors as
 better than the alternative.


Yes, most of the pain will be felt by the out of tree driver. There may be
a small amount
on the nova side due to a reduction in the amount of immediate feedback if
a change breaks
something unexpectedly in a driver which is no longer integrated.

If a driver really wants to be out of tree then thats up to them, but it
doesn't mean we should
encourage or endorse it if its worse for the project overall.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Daniel P. Berrange
On Wed, Oct 16, 2013 at 06:51:50AM -0700, Dan Smith wrote:
  +1 - I think we really want to have a strong preference for a stable 
  api if we start separating parts out
 
 So, as someone who is about to break the driver API all to hell over the
 next six months (er, I mean, make some significant changes), I can tell
 you that making it stable is the best way to kill velocity right now. We
 are a young project with a lot of work yet to do. Making the driver API
 stable at this point in the process, especially because just one driver
 wants to be out of tree, is going to be a huge problem.
 
  Otherwise we either end up with lots of pain in making
  infrastructure changes or asymmetric gating which is to be avoided
  wherever possible.
 
 AFAICT, this is pain that would be experienced by the out-of-tree driver
 and pain which has been called out specifically by the authors as
 better than the alternative.
 
 Seriously, putting the brakes on the virt api right now because one
 driver wants to be out of tree is a huge problem. I fully support the
 hyper-v taking itself out-of-tree if it wants, but I don't think that
 means we can or should eject the others and move to a stable virt api.
 At least not anytime soon.

Agreed, it is way too premature to talk about the internal virt
API being declared even remotely stable. Personally I'd say it
should remain liable-to-change for the lifetime of the project,
because the ability to arbitrarily refactor internals of an app
is very valuable for ongoing maintenance IME.

We should be optimizing for what is best for the majority who
are doing their work collaboratively in-tree for OpenStack, not
a minority who wish to go their own way out of tree.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Zane Bitter

On 16/10/13 15:58, Mike Spreitzer wrote:

Zane Bitter zbit...@redhat.com wrote on 10/16/2013 08:25:38 AM:

  To answer your question, the key thing that Heat does is take in two
  declarative models and generate a workflow to transform one into the
  other. (The general case of this is a stack update, where the two models
  are defined in the previous and new templates. Stack create and delete
  are special cases where one or the other of the models is empty.)
 
  Workflows don't belong in HOT because they are a one-off thing. You need
  a different one for every situation, and this is exactly why Heat exists
  - to infer the correct workflow to reify a model in any given situation.

Thanks for a great short sharp answer.  In that light, I see a concern.
  Once a workflow has been generated, the system has lost the ability to
adapt to changes in either model.  In a highly concurrent and dynamic
environment, that could be problematic.


I think you're referring to the fact if reality diverges from the model 
we have no way to bring it back in line (and even when doing an update, 
things can and usually will go wrong if Heat's idea of the existing 
template does not reflect reality any more). If so, then I agree that we 
are weak in this area. You're obviously aware of 
http://summit.openstack.org/cfp/details/95 so it is definitely on the radar.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Issue with MutliNode openstack installation with devstack

2013-10-16 Thread Vikash Kumar
Hi,

   I am trying to install OpenStack on multiple nodes with the help of devstack (
http://devstack.org/guides/multinode-lab.html).

   I got some issues:

My setup details: one controller and one compute node (both VMs).
  OS - Ubuntu 13.04
   Memory: 2G
   a. VMs are going into a paused state.

   I tried to launch VMs from Horizon, and all of them go into the Paused state.
The VMs are scheduled on the compute node.

   When the controller node comes up, nova-manage service list shows the
controller node as a compute host as well, because by default the nova-compute
service also comes up there.

After the compute node installation, nova-manage service list shows the
compute node as a compute node only and not the

   There is one nova error:

ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager.update_available_resource: Compute host oc-vm could not be found.
Traceback (most recent call last):

  File /opt/stack/nova/nova/openstack/common/rpc/common.py, line 420, in catch_client_exception
    return func(*args, **kwargs)

  File /opt/stack/nova/nova/conductor/manager.py, line 419, in service_get_all_by
    result = self.db.service_get_by_compute_host(context, host)

  File /opt/stack/nova/nova/db/api.py, line 140, in service_get_by_compute_host
    return IMPL.service_get_by_compute_host(context, host)

  File /opt/stack/nova/nova/db/sqlalchemy/api.py, line 107, in wrapper
    return f(*args, **kwargs)

  File /opt/stack/nova/nova/db/sqlalchemy/api.py, line 441, in service_get_by_compute_host
    raise exception.ComputeHostNotFound(host=host)

ComputeHostNotFound: Compute host oc-vm could not be found.
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File /opt/stack/nova/nova/openstack/common/periodic_task.py, line 180, in run_periodic_tasks
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task     task(self, context)
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File /opt/stack/nova/nova/compute/manager.py, line 4872, in update_available_resource
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context)
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File /opt/stack/nova/nova/compute/manager.py, line 4883, in _get_compute_nodes_in_db
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task     context, self.host)
2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File /opt/stack/nova/nova/condu

  b. g-api was flagging an issue.

ERROR glance.store.sheepdog [-] Error in store configuration: Unexpected error while running command.
Command: collie
Exit code: 127
Stdout: ''
Stderr: '/bin/sh: 1: collie: not found\n'

WARNING glance.store.base [-] Failed to configure store correctly: Store sheepdog could not be configured correctly. Reason: Error in store configuration: Unexpected error while running command.
Command: collie
Exit code: 127
Stdout: ''
Stderr: '/bin/sh: 1: collie: not found\n'
Disabling add method.

WARNING glance.store.base [-] Failed to configure store correctly: Store cinder could not be configured correctly. Reason: Cinder storage requires a context. Disabling add method.
I think this is a bug that has also been reported by other developers. I resolved
it by installing sheepdog explicitly on the compute node. After that
installation, I didn't see the error.


 My localrc files:

 Controller:
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=secret
HOST_IP=192.168.0.66
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=128
FLOATING_RANGE=192.168.0.22/24
MULTI_HOST=1
Q_PLUGIN=openvswitch
ENABLE_TENANT_TUNNELS=True
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service q-lbaas
DEST=/opt/stack
LOGFILE=stack.sh.log
RECLONE=yes
SCREEN_LOGDIR=/opt/stack/logs/screen
SYSLOG=True

*I have enabled GRE tunneling.*

  *Compute Node:*

ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=secret
HOST_IP=192.168.0.103
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=128
FLOATING_RANGE=192.168.0.22/24
MULTI_HOST=1
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.0.66
MYSQL_HOST=192.168.0.66
RABBIT_HOST=192.168.0.66
GLANCE_HOSTPORT=192.168.0.66:9292
Q_HOST=192.168.0.66
MATCHMAKER_REDIS_HOST=192.168.0.66
ENABLE_TENANT_TUNNELS=True
disable_service n-net
enable_service n-cpu rabbit q-agt neutron
Q_PLUGIN=openvswitch
DEST=/opt/stack
LOGFILE=stack.sh.log
RECLONE=yes

[openstack-dev] [Neutron] QoS API Extension update

2013-10-16 Thread Sean M. Collins
Hello,

Just a quick update on the QoS API Extension - I plan on attending the
summit in Hong Kong and have registered a summit proposal to discuss the
current work that has been done.

Currently I have two reviews under way:

API Extension & Database models:

https://review.openstack.org/#/c/28313/

Agent & OVS implementation:

https://review.openstack.org/#/c/45232/

Our current environment uses provider networking & VLANs, so I've been
targeting the work to what we currently are deploying inside of Comcast,
so the work on the OVS agent is a bit narrow - I haven't done any work
on the GRE side.

Both are currently WIP - I need better test coverage in places, handle
some edge cases (Agent restarts for example) and code quality
improvements.

If people would be so kind as to look over the Agent & OVS
implementation and give me some feedback, I would really appreciate it.

I plan to study up on the ML2 plugin and add support for the QoS
extension, so we can transition away from the OVS plugin when it is
deprecated.

-- 
Sean M. Collins


pgpMxoF9m8RkP.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Mike Spreitzer
Zane Bitter zbit...@redhat.com wrote on 10/16/2013 10:30:44 AM:

 On 16/10/13 15:58, Mike Spreitzer wrote:
 ...
  Thanks for a great short sharp answer.  In that light, I see a 
concern.
Once a workflow has been generated, the system has lost the ability 
to
  adapt to changes in either model.  In a highly concurrent and dynamic
  environment, that could be problematic.
 
 I think you're referring to the fact if reality diverges from the model 
 we have no way to bring it back in line (and even when doing an update, 
 things can and usually will go wrong if Heat's idea of the existing 
 template does not reflect reality any more). If so, then I agree that we 

 are weak in this area. You're obviously aware of 
 http://summit.openstack.org/cfp/details/95 so it is definitely on the 
radar.

Actually, I am thinking of both of the two models you mentioned.  We are 
only in the midst of implementing an even newer design (heat based), but 
for my group's old code we have a revised design in which the 
infrastructure orchestrator can react to being overtaken by later updates 
to the model we call target state (origin source is client) as well as 
concurrent updates to the model we call observed state (origin source is 
hardware/hypervisor).  I haven't yet decided what to recommend to the heat 
community, so I'm just mentioning the issue as a possible concern.

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [devstack] Re: Issue with MutliNode openstack installation with devstack

2013-10-16 Thread Vikash Kumar
On Wed, Oct 16, 2013 at 8:15 PM, Vikash Kumar 
vikash.ku...@oneconvergence.com wrote:

 Hi,

I am trying to install openstack on mutli node with help of devstack (
 http://devstack.org/guides/multinode-lab.html).

I got some issues:

 My setup details: One controller and One compute node (both VM).
   OS - Ubuntu 13.04
Memory: 2G
*a. VM's are going in paused state.*

I tried to launch VM from horizon, and all the VM's goes in*Paused
 *state. VM's are scheduled on *compute node.  *

When controller node come up, *nova-manage service list* shows the
 controller node as VM also. Reason by default nova compute services also
 come up.

 After compute node installation, *nova-manage service list *shows
 compute node as compute node only and not the

There is one nova -error:

* ERROR nova.openstack.common.periodic_task [-] Error during
 ComputeManager.update_available_resource: Compute host oc-vm could not be
 found.#012Traceback (most recent call last):#012#012  File
 /opt/stack/nova/nova/openstack/common/rpc/common.py, line 420, in
 catch_client_exception#012return func(*args, **kwargs)#012#012  File
 /opt/stack/nova/nova/conductor/manager.py, line 419, in
 service_get_all_by#012result =
 self.db.service_get_by_compute_host(context, host)#012#012  File
 /opt/stack/nova/nova/db/api.py, line 140, in
 service_get_by_compute_host#012return
 IMPL.service_get_by_compute_host(context, host)#012#012  File
 /opt/stack/nova/nova/db/sqlalchemy/api.py, line 107, in wrapper#012
 return f(*args, **kwargs)#012#012  File
 /opt/stack/nova/nova/db/sqlalchemy/api.py, line 441, in
 service_get_by_compute_host#012raise
 exception.ComputeHostNotFound(host=host)#012#012ComputeHostNotFound:
 Compute host oc-vm could not be found.#0122013-10-16 06:25:27.358 7143
 TRACE nova.openstack.common.periodic_task Traceback (most recent call
 last):#0122013-10-16 06:25:27.358 7143 TRACE
 nova.openstack.common.periodic_task   File
 /opt/stack/nova/nova/openstack/common/periodic_task.py, line 180, in
 run_periodic_tasks#0122013-10-16 06:25:27.358 7143 TRACE
 nova.openstack.common.periodic_task task(self, context)#0122013-10-16
 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File
 /opt/stack/nova/nova/compute/manager.py, line 4872, in
 update_available_resource#0122013-10-16 06:25:27.358 7143 TRACE
 nova.openstack.common.periodic_task compute_nodes_in_db =
 self._get_compute_nodes_in_db(context)#0122013-10-16 06:25:27.358 7143
 TRACE nova.openstack.common.periodic_task   File
 /opt/stack/nova/nova/compute/manager.py, line 4883, in
 _get_compute_nodes_in_db#0122013-10-16 06:25:27.358 7143 TRACE
 nova.openstack.common.periodic_task context, self.host)#0122013-10-16
 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File
 /opt/stack/nova/nova/condu*

   * b. g-api was flagging issue. *

 *ERROR glance.store.sheepdog [-] Error in store configuration:
 Unexpected error while running command.#012Command: collie#012Exit code:
 127#012Stdout: ''#012Stderr: '/bin/sh: 1: collie: not found\n'

 WARNING glance.store.base [-] Failed to configure store correctly:
 Store sheepdog could not be configured correctly. Reason: Error in store
 configuration: Unexpected error while running command.#012Command:
 collie#012Exit code: 127#012Stdout: ''#012Stderr: '/bin/sh: 1: collie: not
 found\n' Disabling add method.

 WARNING glance.store.base [-] Failed to configure store correctly:
 Store cinder could not be configured correctly. Reason: Cinder storage
 requires a context. Disabling add method
 *
 I think this is a bug and also reported by other developers. I
 resolved it by installing *sheepdog *explicitly on compute node. After
 installation , i didn't saw that error.


  *My localrc file:

 *
 * Controller:*
 ADMIN_PASSWORD=secret
 MYSQL_PASSWORD=secret
 RABBIT_PASSWORD=secret
 SERVICE_PASSWORD=secret
 SERVICE_TOKEN=secret
 HOST_IP=192.168.0.66
 FLAT_INTERFACE=eth0
 FIXED_RANGE=10.0.0.0/24
 FIXED_NETWORK_SIZE=128
 FLOATING_RANGE=192.168.0.22/24
 MULTI_HOST=1
 Q_PLUGIN=openvswitch
 ENABLE_TENANT_TUNNELS=True
 disable_service n-net
 enable_service q-svc
 enable_service q-agt
 enable_service q-dhcp
 enable_service q-l3
 enable_service q-meta
 enable_service neutron
 enable_service q-lbaas
 DEST=/opt/stack
 LOGFILE=stack.sh.log
 RECLONE=yes
 SCREEN_LOGDIR=/opt/stack/logs/screen
 SYSLOG=True

 **I have enabled GRE tunneling.*

   *Compute Node:*

 ADMIN_PASSWORD=secret
 MYSQL_PASSWORD=secret
 RABBIT_PASSWORD=secret
 SERVICE_PASSWORD=secret
 SERVICE_TOKEN=secret
 HOST_IP=192.168.0.103
 FLAT_INTERFACE=eth0
 FIXED_RANGE=10.0.0.0/24
 FIXED_NETWORK_SIZE=128
 FLOATING_RANGE=192.168.0.22/24
 MULTI_HOST=1
 DATABASE_TYPE=mysql
 SERVICE_HOST=192.168.0.66
 MYSQL_HOST=192.168.0.66
 RABBIT_HOST=192.168.0.66
 GLANCE_HOSTPORT=192.168.0.66:9292
 Q_HOST=192.168.0.66
 

Re: [openstack-dev] How does the libvirt domain XML get created?

2013-10-16 Thread Clark Laughlin
Ok - thank you, that helps.

- Clark

On Oct 16, 2013, at 3:21 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Tue, Oct 15, 2013 at 11:07:38PM -0500, Clark Laughlin wrote:
 
 I can see in config.py where VNC gets added (the graphics element),
 but I can't find any place where a video element gets added.  In
 fact, I've grepped the entire nova tree for cirrus or video and
 can only find it here:
 
 It is added automatically by libvirt when an app provides a graphics
 element but no explicit video element.
 
 Daniel
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] QoS API Extension update

2013-10-16 Thread Alan Kavanagh
Hi Sean

Just an FYI, we are also planning a QoS API Extension Blueprint for the 
Icehouse Design Summit. Will hopefully submit that really soon. Perhaps we can 
look at combining both of them and discuss this in Hong Kong as I have looked 
over your BP and I can see some benefit in combining them both.

BR
Alan

-Original Message-
From: Sean M. Collins [mailto:s...@coreitpro.com] 
Sent: October-16-13 10:55 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] QoS API Extension update

Hello,

Just a quick update on the QoS API Extension - I plan on attending the summit 
in Hong Kong and have registered a summit proposal to discuss the current work 
that has been done.

Currently I have two reviews under way:

API Extension & Database models:

https://review.openstack.org/#/c/28313/

Agent & OVS implementation:

https://review.openstack.org/#/c/45232/

Our current environment uses provider networking & VLANs, so I've been 
targeting the work to what we currently are deploying inside of Comcast, so the 
work on the OVS agent is a bit narrow - I haven't done any work on the GRE side.

Both are currently WIP - I need better test coverage in places, handle some 
edge cases (Agent restarts for example) and code quality improvements.

If people would be so kind as to look over the Agent & OVS implementation and 
give me some feedback, I would really appreciate it.

I plan to study up on the ML2 plugin and add support for the QoS extension, so 
we can transition away from the OVS plugin when it is deprecated.

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread John S Warren
With regard to
https://blueprints.launchpad.net/oslo/+spec/i18n-messages

I think it would be helpful to define the high-level design
alternatives, before delving into technical details such as
magic.  To that end, here are the alternatives as I see them:

1. Attempt as much as possible to isolate the complexity of lazy
   translation to the shared oslo code.  This would entail keeping the
   current approach of an alternative _ method implementation, but
   altering it so that it produces objects that mimic as much as
   possible six.text_type objects (all existing behaviors supported,
   i.e. only new behaviors added), so that developers who are not
   concerned with internationalization can continue to work with objects
   created by this method as they have in the past, blissfully unaware
   that anything has changed.  All indications are that this approach
   would entail the need to extend the six.text_type (unicode in py27)
   class and would bring magic into play.
2. Limit the complexity of the shared oslo code and explicitly shift
   some of it to the consuming projects.  This would entail abandoning
   the approach of replacing the _ method, and instead creating in
   olso a very simple class or set of classes to handle
   internationalization concerns.  In all cases where lazy translation
   needs to occur in the consuming projects, existing use of the _
   method would need to be replaced with explicit use of the shared oslo
   logic, so developers are aware that they are dealing with special,
   translatable objects that do not behave the way six.text_type
   objects do.  This would likely be extremely disruptive to the
   consuming projects and it would likely not be possible to implement
   a simple switch to enable or disable lazy translation.
3. Limit the complexity of the shared olso code and shift some of it to
   the consuming projects, but still keep the approach of a replacement
   _ method implementation.  This would mean that it is not clear to
   developers who are not fully versed in the lazy translation efforts
   that they are not dealing with six.text_type objects when working with
   the objects returned by the _ method.  This is the approach that has
   been used so far and one consequence was a last-minute need to disable
   lazy translation, because of key differences in how six.text_type
   instances behave compared to other types.  Problems caused by
   these differences are likely to surface again, especially when
   passing these objects to external libraries.

Hope that's useful.
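
As a rough illustration of option 1, a Message type that extends six.text_type
might look something like the sketch below. It is only a sketch under stated
assumptions: the 'nova' translation domain default, the translate() helper and
the gettext wiring are simplified placeholders, not the actual oslo
implementation.

    import gettext

    import six


    class Message(six.text_type):
        """A unicode subclass that remembers its msgid for later translation.

        Because the object *is* a six.text_type, code that treats it as an
        ordinary string keeps working; consumers that care (logging, API
        layers) can call translate() explicitly.
        """

        def __new__(cls, msgid, domain='nova'):
            # The immediate value is the untranslated English text.
            msg = super(Message, cls).__new__(cls, msgid)
            msg.msgid = msgid
            msg.domain = domain
            msg.params = None
            return msg

        def __mod__(self, params):
            # The immediate value becomes the substituted English text, but
            # the raw msgid and params are kept so the substitution can be
            # redone against a translated template later.
            result = Message(six.text_type(self.msgid) % params, self.domain)
            result.msgid = self.msgid
            result.params = params
            return result

        def translate(self, locale=None):
            lang = gettext.translation(self.domain,
                                       languages=[locale] if locale else None,
                                       fallback=True)
            translated = (lang.ugettext(self.msgid)
                          if hasattr(lang, 'ugettext')
                          else lang.gettext(self.msgid))
            if self.params is not None:
                return translated % self.params
            return translated


    def _(msgid):
        """Replacement gettext hook returning lazy Message objects (sketch)."""
        return Message(msgid)

The known hard parts (pickling, interaction with the json module, encoding on
output) are exactly the issues raised elsewhere in this thread and are not
addressed by this sketch.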

John Warren___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread David Ripton

On 10/16/2013 08:59 AM, Alessandro Pilotti wrote:


When somebody (especially a core reviewer) puts a -1 and a new patch is 
committed to address it,
I noticed that other reviewers wait for the guy that put the -1 to say 
something before +1/+2 it.

My feeling on this is that if somebody reviews a patch (positively or 
negatively) he/she should also
keep on with it (in a timely manner) until it is merged, or clearly state that 
there's no interest in reviewing it further.
This is especially true for core revs as other reviewers tend to be shy and 
avoid contradicting a core rev,
generating further delays.

What do you guys think?


Yeah, it's no fun when someone gives you a -1 then goes away.

But the people who do a lot of reviews do a lot of reviews, so they 
can't be immediately responsive to every change to every patch they've 
reviewed, or they'd never be able to do anything else.


The fundamental problem is that the ratio of patches to reviewers, and 
especially patches to core reviewers, is too high.  We either need 
people to submit fewer patches or do more reviewing.


I'm tempted to submit a patch to next-review to give priority to patches 
from authors who do a lot of reviews.  That would provide an incentive 
for everyone to review more.
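
A minimal sketch of what such a prioritisation could look like. It is purely
illustrative: the inputs (a list of open changes and a map of recent review
counts per author) are assumed here and are not next-review's actual data model.

    # Rank open changes so that authors who review a lot get their own
    # patches surfaced first.
    def prioritize(open_changes, reviews_by_author):
        """Sort changes by the review count of their author, descending."""
        return sorted(open_changes,
                      key=lambda change: reviews_by_author.get(change['owner'], 0),
                      reverse=True)

    # Example:
    #   changes = [{'id': 'I123', 'owner': 'alice'}, {'id': 'I456', 'owner': 'bob'}]
    #   counts = {'alice': 40, 'bob': 250}
    #   prioritize(changes, counts)   # bob's patch comes first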


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Havana RC4 available !

2013-10-16 Thread Thierry Carrez
What fun would it be without late RCs...

We discovered that Keystone RC3 was still using lazy translations mode,
which could trigger errors in specific locales. Since all the other
projects in 2013.2 disabled this mode, we decided to fix this
pre-release and published a new Havana release candidate for OpenStack
Identity (Keystone).

You can find the RC4 tarball and a link to the fixed bug at:

https://launchpad.net/keystone/havana/havana-rc4

This RC4 should be formally included in the common OpenStack 2013.2
final release tomorrow. Please give this tarball a round of last-minute
sanity checks.

Alternatively, you can grab the code at:
https://github.com/openstack/keystone/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/keystone/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] QoS API Extension update

2013-10-16 Thread Sean M. Collins
On Wed, Oct 16, 2013 at 03:45:29PM +, Alan Kavanagh wrote:
 Will hopefully submit that really soon. Perhaps we can look at combining both 
 of them and discuss this in Hong Kong as I have looked over your BP and I can 
 see some benefit in combining them both.

Hi Alan,

That sounds great - the objective of my BP was to try and make a QoS API
extension that was flexible enough that everyone could make their own
implementation. At this point, this is accomplished through storing
key/value pairs that are linked back to a QoS object, via the policies
attribute (which maps to the qos_policies table), which stores
implementation-specific behavior/configuration.
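
For readers who have not opened the reviews, the described schema boils down to
something like the following sketch. The table and column names here are
assumptions based on the description above, not necessarily what the patch
under review actually uses.

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()


    class QoS(Base):
        """A QoS object that a port or network can reference (sketch)."""
        __tablename__ = 'qoses'

        id = sa.Column(sa.String(36), primary_key=True)
        tenant_id = sa.Column(sa.String(255))
        description = sa.Column(sa.String(255))
        # Implementation-specific behaviour lives in free-form key/value pairs.
        policies = relationship('QoSPolicy', backref='qos',
                                cascade='all, delete-orphan')


    class QoSPolicy(Base):
        """One key/value pair of implementation-specific configuration."""
        __tablename__ = 'qos_policies'

        id = sa.Column(sa.String(36), primary_key=True)
        qos_id = sa.Column(sa.String(36), sa.ForeignKey('qoses.id'),
                           nullable=False)
        key = sa.Column(sa.String(255), nullable=False)
        value = sa.Column(sa.String(255))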

There is also a wiki page, that has some useful links:

https://wiki.openstack.org/wiki/Neutron/QoS


-- 
Sean M. Collins


pgpFaQTxkBTIG.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] design summit session proposal deadline

2013-10-16 Thread Dolph Mathews
I'll be finalizing the design summit schedule [1] for keystone
following the weekly meeting [2] on Tuesday, October 22nd 18:00 UTC.
Please have your proposals submitted before then.

So far I think everyone has done a GREAT job self-organizing the
proposed sessions to avoid overlap, but we currently have two more
proposals than we do slots. During the meeting, we'll review which
sessions should be split, combined or cut.

Lastly, if you have comments on a particular session regarding scope
or scheduling, *please* take advantage of the new comments section at
the bottom of each session proposal. Such feedback is highly
appreciated!

[1]: http://summit.openstack.org/cfp/topic/10
[2]: https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

Thanks!

-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Tuskar UI - Resource Class Creation Wireframes - updated

2013-10-16 Thread Jaromir Coufal

Hey folks,

I am sending an updated version of wireframes for Resource Class 
Creation. Thanks everybody for your feedback, I tried to cover most of 
your concerns and I am sending updated version for your reviews. If you 
have any concerns, I am happy to discuss it with you.


http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf

Thanks
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar UI - Resource Class Creation Wireframes - updated

2013-10-16 Thread Jay Pipes

On 10/16/2013 12:31 PM, Jaromir Coufal wrote:

Hey folks,

I am sending an updated version of wireframes for Resource Class
Creation. Thanks everybody for your feedback, I tried to cover most of
your concerns and I am sending updated version for your reviews. If you
have any concerns, I am happy to discuss it with you.

http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf


Hi Jarda, wireframes look really nice :)

One tiny suggestion:

Assistant (proper halving of resources)

If I understand correctly, I think a better layout/UI/wording might be 
to have a radio button instead of a checkbox, and have the two options be:


Assisted (automatically ensure resources add up to 100%) and
Manual

This would correspond to the already-familiar diskdruid-like 
partitioning helper.


Thoughts?
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit session proposal deadline

2013-10-16 Thread Dolph Mathews
On Wed, Oct 16, 2013 at 11:23 AM, Dolph Mathews dolph.math...@gmail.com wrote:
 I'll be finalizing the design summit schedule [1] for keystone
 following the weekly meeting [2] on Tuesday, October 22nd 18:00 UTC.
 Please have your proposals submitted before then.

 So far I think everyone has done a GREAT job self-organizing the
 proposed sessions to avoid overlap, but we currently have two more
 proposals than we do slots. During the meeting, we'll review which
 sessions should be split, combined or cut.

 Lastly, if you have comments on a particular session regarding scope
 or scheduling, *please* take advantage of the new comments section at
 the bottom of each session proposal. Such feedback is highly
 appreciated!

 [1]: http://summit.openstack.org/cfp/topic/10

Apparently this link returns a 403, however it's just a filtered list
of proposals by topic, so you can sort the main page to the same
effect (thanks for the heads up, gyee and morganfainberg!):

  http://summit.openstack.org/

 [2]: https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

 Thanks!

 -Dolph



-- 

-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit session proposal deadline

2013-10-16 Thread Adam Young

On 10/16/2013 12:23 PM, Dolph Mathews wrote:

I'll be finalizing the design summit schedule [1] for keystone
following the weekly meeting [2] on Tuesday, October 22nd 18:00 UTC.
Please have your proposals submitted before then.

So far I think everyone has done a GREAT job self-organizing the
proposed sessions to avoid overlap, but we currently have two more
proposals than we do slots. During the meeting, we'll review which
sessions should be split, combined or cut.

Lastly, if you have comments on a particular session regarding scope
or scheduling, *please* take advantage of the new comments section at
the bottom of each session proposal. Such feedback is highly
appreciated!

[1]: http://summit.openstack.org/cfp/topic/10
[2]: https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

Thanks!

-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Some suggestions:

V3 API Domain scoped tokens, and Henry Nash's purification of 
Assignments proposal are both dealing with the scoping and binding of 
authorization decisions.



Internal API stabilization and Extensions are both about code 
management, and can be combined, I think.


Auditing is going to be bigger than just Keystone, as it happens based 
on Policy enforcement.  I suspect that this session should be where we 
discuss the Keystone side of Policy.


Token Revocation and the client and auth_token middleware are all 
related topics.


We discussed Quota storage in Keystone last summit.  We have pretty good 
progress on the blueprint.  Do we really need to discuss this again, or 
do we just need to implement it?


The HTML talk should probably pull in members from the Horizon team.  I 
would almost want to merge it with 
http://summit.openstack.org/cfp/details/3 UX and Future Direction of 
OpenStack Dashboard  or
http://summit.openstack.org/cfp/details/161 Separate Horizon and 
OpenStack Dashboard  as we can discuss how we will split responsibility 
for managing administration and customization.  If they have an open 
slot, we might be able to move this to a Horizon talk.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit session proposal deadline

2013-10-16 Thread Tim Bell

A partial slot to discuss how to achieve the vision in 
https://wiki.openstack.org/wiki/DomainQuotaManagementAndEnforcement would be 
useful.

The goal is to avoid duplicate effort that could instead be focused on a single theme.
 With the various collaborations between HP, CERN and BARC along with Mirantis' 
work in this area, we could benefit from community discussion on the way 
forward.

A full summit session may not be required but an agreement on the roadmap would 
avoid waste.

Tim


 
 We discussed Quota storage in Keystone last summit.  We have pretty good 
 progress on the blueprint.  Do we really need to discuss this
 again, or do we just need to implement
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread Mathew R Odden

Łukasz Jernaś deej...@srem.org wrote on 10/16/2013 02:26:28 AM:

 I'm still trying to wrap my head around the need for translating API
 messages and log messages as IMHO it adds a lot more problems for app
 developers  and log analysis tools, eg. a log analysis tool would be
 usable only for the locale it was developed for and break with _every_
 update of the translations. As a translator myself I don't really want
 to check every system in existence if it uses my messages to do some
 sort analysis and often change the strings to more proper form in my
 language as reviews and feedback on the translation comes in - even
 though the original English string doesn't change at all.
 I feel that translating mostly computer facing stuff is just crying
 for bugs, weird issues popping up for users, API-s should be readable
 by humans, but translating them is a bit too far in my opinion.

The original reason I got involved with the i18n stuff in OpenStack was
because we needed to be able to debug issues on a system running a
non-English locale, and therefore spitting out logs of translated messages.
This makes debugging extremely difficult for two reasons: the inline
messages in the code are in English, and if the person doing the debugging doesn't
understand the language the logs are in, it is obviously harder to find
issues on the system.

 If I get it right the Message objects are supposed to move stuff
 around internally in a C/en locale, but we will still end up dropping
 translated messages to computers if they don't explicitly specify the
 locale which the request should use...

Messages are designed to wrap the original inline English string, so
that at a later time, such as outputting to a log handler or at the
API layer when outputting an HTTP response, we can translate the
internal string to a requested locale we may know at that time.
The way the _() gettext function works is to translate everything
immediately to the system locale.

This leads to the two problems Messages is supposed to address:
 1. localizable messages coming out of the API will be
translated to the system locale (which doesn't really make sense IMO)
 2. log records are translated to the system locale as well, making
debugging issues that much more difficult on a non-English locale

These could also be addressed by removing the original gettext
immediate translation of log/API messages.

Removing translation from the OpenStack server projects actually
makes more sense to me, because, as you pointed out, they are services
with APIs that are meant to be utilized by other programs, not users
directly. Also, there is the matter of log messages being translated,
which makes auditing and debugging difficult, which is the primary use
of logging facilities IMHO.

I have been told several times that we can't remove the i18n functionality
from the projects because some users want it still. I would be interested
in hearing more from the users and developers that would like to keep
the functionality around.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread Ben Nemec
 

On 2013-10-16 12:24, John S Warren wrote: 

 Just to clarify, the reason I believe it is important to lay out 
 high-level design alternatives and their implications is because it 
 will help in making decisions about how the Message class is to be 
 changed. In other words, the requirements for a class's behavior 
 might be drastically different, depending on whether it is a 
 replacement for an existing type (in which case all the possible 
 use-cases of the existing vs. new type need to be taken into 
 account) or it is going to be used in a new context, where adherence 
 to existing behaviors is not a factor. 
 
 My hope was that the design alternatives might be discussed, 
 possibly other alternatives proposed, and finally an alternative 
 chosen before proceeding with discussions about implementation 
 details.

Hi John, 

Have you seen the discussion from the last Oslo meeting?
http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.log.txt


I think a lot of these issues came up there. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What validation feature is necessary for Nova v3 API

2013-10-16 Thread Doug Hellmann
On Wed, Oct 16, 2013 at 1:19 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.comwrote:

 Hi Chris,

 Thank you for your response.

 2013/10/16 Christopher Yeoh cbky...@gmail.com

 On Tue, Oct 15, 2013 at 5:44 PM, Kenichi Oomichi 
 oomi...@mxs.nes.nec.co.jp wrote:


 Hi,

 # I resend this because gmail distinguished my previous mail as spam
 one..

 I'd like to know what validation feature is really needed for Nova v3
 API,
 and I hope this mail will be a kick-off of brain-storming for it.

  Introduction 
 I have submitted a blueprint nova-api-validation-fw.
 The purpose is comprehensive validation of API input parameters.
  32% of Nova v3 API parameters are not validated in any way [1], and that
  fact can cause an internal error if a client simply sends an invalid
  request. If an internal error happens, the error message is output to a
  log file and OpenStack operators have to investigate its cause. That is
  hard work for the operators.


 We have tried to improve this for the V3 API but we still have a way to
 go. I believe a validation framework like you have proposed would be very
 useful - and cleanup the extension code.


 I'm really glad about this comment :-)


  In Havana development cycle, I proposed the implementation code of the BP
 but it was abandoned. Nova web framework will move to Pecan/WSME, but my
 code depended on WSGI. So the code would have merits in short term, but
 not
 in long term.
 Now some Pecan/WSME sessions are proposed for Hong-Kong summit, so I feel
 this situation is a good chance for this topic.


 I proposed the Nova Pecan/WSME session for the summit, but I do have a
 few reservations about whether the transition will be worth the pain I
 think will be involved. So I don't think its by any means clear that
 Pecan/WSME will be something we will do in Icehouse and your wsgi based
 implementation could be what we want to go ahead with.


 For discussing, I have investigated all validation ways of current Nova
 v3
 API parameters. There are 79 API methods, and 49 methods use API
 parameters
 of a request body. Totally, they have 148 API parameters. (details: [1])

 Necessary features, what I guess now, are the following:

  Basic Validation Feature 
 Through this investigation, it seems that we need some basic validation
 features such as:
 * Type validation
   str(name, ..), int(vcpus, ..), float(rxtx_factor), dict(metadata, ..),
   list(networks, ..), bool(conbine, ..), None(availability_zone)
 * String length validation
   1 - 255
 * Value range validation
   value = 0(rotation, ..), value  0(vcpus, ..),
   value = 1(os-multiple-create:min_count, os-multiple-create:max_count)
 * Data format validation
   * Pattern:
 uuid(volume_id, ..), boolean(on_shared_storage, ..),
 base64encoded(contents),
 ipv4(access_ip_v4, fixed_ip), ipv6(access_ip_v6)
   * Allowed list:
 'active' or 'error'(state), 'parent' or 'child'(cells.type),
 'MANUAL' or 'AUTO'(os-disk-config:disk_config), ...
   * Allowed string:
 not contain '!' and '.'(cells.name),
 contain [a-zA-Z0-9_.- ] only(flavor.name, flavor.id)
 * Mandatory validation
   * Required: server.name, flavor.name, ..
   * Optional: flavor.ephemeral, flavor.swap, ..


  Auxiliary Validation Feature 
  Some parameters have a dependency on another parameter.
  For example, name and/or availability_zone should be specified when
  updating an
  aggregate. There are only a few such cases, so the dependency
  validation
  feature would not be mandatory.

 The cases are the following:
 * Required if not specifying other:
   (update aggregate: name or availability_zone), (host: status or
 maintenance_mode),
   (server: os-block-device-mapping:block_device_mapping or image_ref)
 * Should not specify both:
   (interface_attachment: net_id and port_id),
   (server: fixed_ip and port)


 These all sound useful.
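
As an aside, the kinds of checks listed above (types, string lengths, value
ranges, format patterns, required keys) can be expressed declaratively. Below is
a minimal sketch using the jsonschema library; the schema contents, the regex
and the decorator are illustrative assumptions only, not the blueprint's
proposed framework.

    import functools

    import jsonschema

    # Example schema covering several of the listed checks: types, string
    # length, a value range, a format pattern and a required key.
    server_create_schema = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            'image_ref': {'type': 'string',
                          'pattern': '^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$'},
            'min_count': {'type': 'integer', 'minimum': 1},
        },
        'required': ['name'],
        'additionalProperties': False,
    }


    def validated(schema):
        """Decorator validating the request body before the API method runs."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, body, *args, **kwargs):
                try:
                    jsonschema.validate(body, schema)
                except jsonschema.ValidationError as exc:
                    # Re-raise schema violations as a simple, readable error
                    # rather than letting them surface as an internal error.
                    raise ValueError('Invalid input: %s' % exc.message)
                return func(self, body, *args, **kwargs)
            return wrapper
        return decorator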


  API Documentation Feature 
 WSME has a unique feature which generates API documentations from source
 code.
 The documentations(
 http://docs.openstack.org/developer/ceilometer/webapi/v2.html)
 contains:
 * Method, URL (GET /v2/resources/, etc)
 * Parameters
 * Reterun type
 * Parameter samples of both JSON and XML


 Do you know if the production of JSON/XML samples and integration of them
 into the api documentation
 is all autogenerated via wsme?


 I'm not very familiar with this feature.
 but Ceilometer's document(
 http://docs.openstack.org/developer/ceilometer/webapi/v2.html) would be
 generated from
 https://github.com/openstack/ceilometer/blob/master/ceilometer/api/controllers/v2.py#L891
  etc.
 API samples also would be autogenerated from sample() method.
 I hope some experts will help us about this feature.


The ceilometer developer docs are processed with sphinx, and the sample
objects are turned from Python objects to XML or JSON and output in the
HTML by sphinxcontrib-pecanwsme (naming packages is hard :-). There is also
some work going on to generate the samples in the format that the doc team
uses for the 

Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread Doug Hellmann
On Wed, Oct 16, 2013 at 2:04 PM, Mathew R Odden mrod...@us.ibm.com wrote:

 Łukasz Jernaś deej...@srem.org wrote on 10/16/2013 02:26:28 AM:


  I'm still trying to wrap my head around the need for translating API
  messages and log messages as IMHO it adds a lot more problems for app
  developers  and log analysis tools, eg. a log analysis tool would be
  usable only for the locale it was developed for and break with _every_
  update of the translations. As a translator myself I don't really want
  to check every system in existence if it uses my messages to do some
  sort analysis and often change the strings to more proper form in my
  language as reviews and feedback on the translation comes in - even
  though the original English string doesn't change at all.
  I feel that translating mostly computer facing stuff is just crying
  for bugs, weird issues popping up for users, API-s should be readable
  by humans, but translating them is a bit too far in my opinion.


 The original reason I got involved with the i18n stuff in OpenStack was
 because we had a need to be able to debug issues on system running a
 non-English locale, and therefore spitting out logs of translated messages.
 This makes it extremely difficult to debug for two reasons, the inline
 messages in code are English, and if the person doing the debugging doesn't
 understand the language the logs are in, it is obviously harder to find
 issues on the system.


  If I get it right the Message objects are supposed to move stuff
  around internally in a C/en locale, but we will still end up dropping
  translated messages to computers if the don't explicitly specify the
  locale which the request should use...

 Messages are designed to wrap the original inline English string, so
 that at a later time, such as outputting to a log handler or at the
 API layer when outputting an HTTP response, we can translate the
 internal string to a requested locale we may know at that time.
 The way the _() gettext function works is to translate everything
 immediately to the system locale.

 This leads to the two problems Messages is supposed to address:
  1. localizable messages coming out of the API will be
 translated to the system locale (which doesn't really make sense IMO)
  2. log records are translated to the system locale as well, making
 debugging issues that much more difficult on a non-English locale

 These could also be addressed by removing the original gettext
 immediate translation of log/API messages.

 Removing translation from the OpenStack server projects actually
 makes more sense to me, because, as you pointed out, they are services
 with APIs that are meant to utilized by other programs, not users
 directly. Also, there is the matter of log messages being translated,
 which makes auditing and debugging difficult, which is the primary use
 of logging facilities IMHO.

 I have been told several times that we can't remove the i18n functionality
 from the projects because some users want it still. I would be interested
 in hearing more from the users and developers that would like to keep
 the functionality around.


Error messages from the server processes are presented to the user through
the command line interface and through Horizon, so they do need to be
translated.

Log message translation is meant to help deployers, rather than end-users.
Not all deployers are going to have strong English reading skills, so
having messages translated into their native language makes it easier to
administer OpenStack.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-16 Thread Russell Bryant
On 10/16/2013 06:59 AM, Thierry Carrez wrote:
 Alessandro Pilotti wrote:
 On Oct 16, 2013, at 13:19 , Thierry Carrez thie...@openstack.org
 The other two alternatives are to accept the delays and work within Nova
 (slowly building the trust that will give you more autonomy), or ship it
 as a separate add-on that does not come with nova-core's signature on it.

 I never asked for a nova signature on it. My only requirerement is that
 the project would be part of OpenStack and not an external project, even
 if this means passing 2 releases in incubation on stackforge as long as
 it can become part of the OpenStack core group of projects afterwards
 (if it meets the required OpenStack criteria of course).
  https://wiki.openstack.org/wiki/Governance/NewProjects
 
 That's a possible outcome of the second alternative I described above.
 The separate add-on could apply to the incubation track and potentially
 be made a part of the integrated release.

Yep, it's certainly a possible outcome.

You could ask the soon to be elected TC to give an opinion.  But
honestly, if I am on the TC, I would vote against it.  It doesn't make
any sense for Nova to include a bunch of drivers, but one driver be
separate but still an official project.

I think drivers need to be treated equally in this regard, and I think
the majority consensus is that it's best overall to have them in the tree.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2013-10-16 06:16:33 -0700:
 On 16/10/13 00:48, Steve Baker wrote:
  I've just written some proposals to address Heat's HOT software
  configuration needs, and I'd like to use this thread to get some feedback:
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
  https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config
 
 Wow, nice job, thanks for writing all of this up :)
 
  Please read the proposals and reply to the list with any comments or
  suggestions.
 
 For me the crucial question is, how do we define the interface for 
 synchronising and passing data from and to arbitrary applications 
 running under an arbitrary configuration management system?
 
 Compared to this, defining the actual format in which software 
 applications are specified in HOT seems like a Simple Matter of 
 Bikeshedding ;)
 

Agreed. This is one area where juju excels (making cross-node message
passing simple). So perhaps we should take a look at what works from the
juju model and copy it.

 (BTW +1 for not having the relationships, hosted_on always reminded me 
 uncomfortably of INTERCAL[1]. We already have DependsOn for resources 
 though, and might well need it here too.)


Also agreed. The way the new proposal has it, it feels more like
composing a workflow.

 I'm not a big fan of having Heat::Puppet, Heat::CloudInit, Heat::Ansible 
 c. component types insofar as they require your cloud provider to 
 support your preferred configuration management system before you can 
 use it. (In contrast, it's much easier to teach your configuration 
 management system about Heat because you control it yourself, and 
 configuration management systems are already designed for plugging in 
 arbitrary applications.)
 

Also agree. Having Heat know too much about anything that has many
implementations is a bit of a layer violation. Heat has an interface,
and we should make it really easy for the popular tools to consume
said interface. I could see us having a separate set of shims, like
heat-puppet, and heat-salt, that are helpers for those systems to consume
data from Heat more smoothly.

 I'd love to be able to put this control in the user's hands by just 
 using provider templates - i.e. you designate PuppetServer.yaml as the 
 provider for an OS::Nova::Server in your template and it knows how to 
 configure Puppet and handle the various components. We could make 
 available a library of such provider templates, but users wouldn't be 
 limited to only using those.
 

This I don't think I understand well enough to pass judgement on. My
understanding of providers is that they are meant to make templates more
portable between clouds that have different capabilities. Mapping that
onto different CM systems feels like a stretch and presents a few
questions for me. How do I have two OS::Nova::Server's using different
CM systems when I decide to deploy something that uses salt instead of
chef?

As long as components can be composed of other components, then it seems
to me that you would just want the CM system to be a component. If you
find yourself including the same set of components constantly.. just
make a bigger component out of them.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread Doug Hellmann
On Wed, Oct 16, 2013 at 11:55 AM, John S Warren jswar...@us.ibm.com wrote:

 With regard to
 https://blueprints.launchpad.net/oslo/+spec/i18n-messages

 I think it would be helpful to define the high-level design
 alternatives, before delving into technical details such as
 magic.  To that end, here are the alternatives as I see them:

 1. Attempt as much as possible to isolate the complexity of lazy
translation to the shared oslo code.  This would entail keeping the
current approach of an alternative _ method implementation, but
altering it so that it produces objects that mimic as much as
possible six.text_type objects (all existing behaviors supported,
i.e. only new behaviors added), so that developers who are not
concerned with internationalization can continue to work with objects
created by this method as they have in the past, blissfully unaware
that anything has changed.  All indications are that this approach
would entail the need to extend the six.text_type (unicode in py27)
class and would bring magic into play.
 2. Limit the complexity of the shared oslo code and explicitly shift
some of it to the consuming projects.  This would entail abandoning
the approach of replacing the _ method, and instead creating in
olso a very simple class or set of classes to handle
internationalization concerns.  In all cases where lazy translation
needs to occur in the consuming projects, existing use of the _
method would need to be replaced with explicit use of the shared oslo
logic, so developers are aware that they are dealing with special,
translatable objects that do not behave the way six.text_type
objects do.  This would likely be extremely disruptive to the
consuming projects and it would likely not be possible to implement
a simple switch to enable or disable lazy translation.
 3. Limit the complexity of the shared olso code and shift some of it to
the consuming projects, but still keep the approach of a replacement
_ method implementation.  This would mean that it is not clear to
developers who are not fully versed in the lazy translation efforts
that they are not dealing with six.text_type objects when working with
the objects returned by the _ method.  This is the approach that has
been used so far and one consequence was a last-minute need to disable
lazy translation, because of key differences in how six.text_type
instances behave compared to other types.  Problems caused by
these differences are likely to surface again, especially when
passing these objects to external libraries.


This doesn't quite match my understanding.

IIUC, approach 1 was taken during havana and the resulting class did not
behave enough like a string to work everywhere (specifically, with logging
for locales that did not use UTF-8 or ASCII encoding), so the feature was
disabled at the last minute.

Option 3 is closer to the new plan for Icehouse, which is to have _()
return a Message, allow Message to work in a few contexts like a string (so
that, for example, log calls and exceptions can be left alone, even if they
use % to combine a translated string with arguments), but then have the
logging and API code explicitly handle the translation of Message instances
so we can always pass unicode objects outside of OpenStack code (to logging
or to web frameworks). Since the logging code is part of Oslo and the API
code can be, this seemed to provide isolation while removing most of the
magic.
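
A minimal sketch of that translate-at-the-edges idea: log calls keep passing
lazy Message objects around untranslated, and a handler resolves them to the
operator's locale only when a record is emitted. It assumes a Message type
exposing msgid and translate(locale), and it is not oslo's actual logging code.

    import logging


    class TranslatingHandler(logging.Handler):
        """Resolve lazily-translated Message objects only at emit time."""

        def __init__(self, target, locale=None):
            super(TranslatingHandler, self).__init__()
            self.target = target        # any ordinary logging.Handler
            self.locale = locale        # operator-chosen locale for log output

        def _resolve(self, obj):
            # Anything carrying both msgid and translate() is treated as a
            # lazy Message; plain strings pass through untouched.
            if hasattr(obj, 'msgid') and hasattr(obj, 'translate'):
                return obj.translate(self.locale)
            return obj

        def emit(self, record):
            record.msg = self._resolve(record.msg)
            if isinstance(record.args, dict):
                record.args = dict((k, self._resolve(v))
                                   for k, v in record.args.items())
            elif record.args:
                record.args = tuple(self._resolve(a) for a in record.args)
            self.target.emit(record)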

Doug



 Hope that's useful.

 John Warren
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread John S Warren
Ben Nemec wrote:

 Have you seen the discussion from the last Oslo meeting?
 
http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.log.txt

Unfortunately, I missed that meeting.  I don't want to dwell on this
too much, but I would like add some thoughts that may not have
been considered.

So I guess the choice there was option #3.  For what it's worth,
getting lazy translation to work in Glance was a bit messier, because
_() is being used in creating objects that are converted into json
and this fails because the json library does not know how to handle
Message objects.  It's also possible that the output of _() ends up
being used for other things as well.

I don't quite understand why one wouldn't go with option #1, thereby
avoiding any issues with Message objects being used where unicode
objects are normally used.  The code may be more complicated, but the
complications are limited to one location (oslo) and it avoids other
potential issues later on by people assuming _() produces
unicode objects--this stuff is not fun to debug even if you're aware
of how _() has been tweaked.  Option #3 seems more
like a tactical choice, not a strategic one.  Why not get it right in
one place, rather than possibly having to make several accommodations
in the consuming projects or elsewhere in oslo?  Keep in mind that the
logging fix is not necessary if Message extends six.text_type. Also
keep in mind that the messiest part of Message is the __mod__ method
and the translation mechanism, i.e. getting rid of the __unicode__ and
__str__ methods does not reduce the complexity of the Message class
significantly, yet it makes consuming it more difficult.  A lot of the
messiness of making Message behave like a string type goes away
if it extends such a type.

Hope that's useful.

John Warren___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread John S Warren
Doug Hellmann doug.hellm...@dreamhost.com wrote on 10/16/2013 03:11:12 
PM:


 
 This doesn't quite match my understanding.
 
 IIUC, approach 1 was taken during havana and the resulting class did
 not behave enough like a string to work everywhere (specifically, 
 with logging for locales that did not use UTF-8 or ASCII encoding), 
 so the feature was disabled at the last minute.

Approach 1 includes extending the built-in text type (e.g. unicode),
which is not what was done in Havana, and is an alternative way of
addressing the logging issue.  In addition to fixing the logging
issue, extending the built-in text type would eliminate the need to
override a lot of the standard string-manipulation methods that are
being overridden in the current Message implementation.  I'm not sure
if that's what the term magic referred to in the meeting
discussion, but it's something that bothers me about the status quo.

Thanks for your reply,

John Warren___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Lakshminaraya Renganarayana


Clint Byrum cl...@fewbar.com wrote on 10/16/2013 03:02:13 PM:


 Excerpts from Zane Bitter's message of 2013-10-16 06:16:33 -0700:

 
  For me the crucial question is, how do we define the interface for
  synchronising and passing data from and to arbitrary applications
  running under an arbitrary configuration management system?
 
  Compared to this, defining the actual format in which software
  applications are specified in HOT seems like a Simple Matter of
  Bikeshedding ;)
 

 Agreed. This is one area where juju excels (making cross-node message
 passing simple). So perhaps we should take a look at what works from the
 juju model and copy it.

Actually, this is exactly the point:

how do we define the interface for synchronising and passing data
from and to arbitrary applications running under an arbitrary
configuration management system?

which I was addressing in my message/proposal a couple of days back on the
mailing list :-) Glad to see that echoed again. I am proposing that
Heat should have a higher (than current wait-conditions/signals) level
abstraction for synchronization and data exchange. I do not mind it
being message passing as in JuJu. Based on our experience I am proposing
a zookeeper style global data space with blocking-reads, and non-blocking
writes.
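
Purely to illustrate the semantics being proposed (non-blocking writes, reads
that block until a key is published), a sketch against ZooKeeper via the kazoo
client might look as follows. The paths, host and class shape are illustrative
assumptions, not part of the Heat proposal.

    import threading

    from kazoo.client import KazooClient


    class DataSpace(object):
        """Blocking-read / non-blocking-write global data space (sketch)."""

        def __init__(self, hosts='127.0.0.1:2181', root='/heat/config'):
            self.root = root
            self.zk = KazooClient(hosts=hosts)
            self.zk.start()

        def write(self, key, value):
            # Non-blocking from the writer's point of view: create or update
            # the node and return immediately.
            path = '%s/%s' % (self.root, key)
            self.zk.ensure_path(path)
            self.zk.set(path, value.encode('utf-8'))

        def read(self, key, timeout=None):
            # Block until some other component has published the key.
            path = '%s/%s' % (self.root, key)
            created = threading.Event()

            def _watch(event):
                created.set()

            if self.zk.exists(path, watch=_watch) is None:
                if not created.wait(timeout):
                    raise RuntimeError('timed out waiting for %s' % key)
            data, _stat = self.zk.get(path)
            return data.decode('utf-8')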


Thanks,
LN
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting

2013-10-16 Thread Sumit Naiksatam
Hi All,

We had the FWaaS IRC meeting today, please check the logs if you could not
attend:

http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-16-18.01.log.html

We will have the next one same day/time (Wednesday 18:00 UTC/11 AM PDT)
next week, hope you can join.

Thanks,
~Sumit.


On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam
sumitnaiksa...@gmail.comwrote:

 Hi All,

 For the next of phase of FWaaS development we will be considering a number
 of features. I am proposing an IRC meeting on Oct 16th Wednesday 18:00 UTC
 (11 AM PDT) to discuss this.

 The etherpad for the summit session proposal is here:
 https://etherpad.openstack.org/p/icehouse-neutron-fwaas

 and has a high level list of features under consideration.

 Thanks,
 ~Sumit.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-16 Thread John S Warren
John S Warren/Raleigh/IBM@IBMUS wrote on 10/16/2013 03:38:05 PM:

 From: John S Warren/Raleigh/IBM@IBMUS
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
 Date: 10/16/2013 03:42 PM
 Subject: Re: [openstack-dev] [oslo] i18n Message improvements
 
 Doug Hellmann doug.hellm...@dreamhost.com wrote on 10/16/2013 03:11:12 
PM:
 
 
  
  This doesn't quite match my understanding. 
  
  IIUC, approach 1 was taken during havana and the resulting class did
  not behave enough like a string to work everywhere (specifically, 
  with logging for locales that did not use UTF-8 or ASCII encoding), 
  so the feature was disabled at the last minute. 
 
 Approach 1 includes extending the built-in text type (e.g. unicode), 
 which is not what was done in Havana, and is an alternative way of 
 addressing the logging issue.  In addition to fixing the logging 
 issue, extending the built-in text type would eliminate the need to 
 override a lot of the standard string-manipulation methods that are 
 being overridden in the current Message implementation.  I'm not sure 
 if that's what the term magic referred to in the meeting 
 discussion, but it's something that bothers me about the status quo. 
 
 Thanks for your reply, 
 
 John Warren___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I should have mentioned that approach 1 is implemented here:
https://review.openstack.org/#/c/46553 and has been tested in Glance
and Nova, so it is known to solve the logging problem that caused
lazy translation to be shut off.  I'm not claiming it couldn't do
with some polishing, but it does work.

Thanks,

John Warren___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-16 Thread Irena Berezovsky
Hi,
One of the next steps for PCI pass-through that I would like to discuss is
support for PCI pass-through vNICs.
While nova takes care of PCI pass-through device resource management and VIF
settings, neutron should manage their networking configuration.
I would like to register a summit proposal to discuss support for PCI
pass-through networking.
I am not sure what the right topic would be for discussing PCI pass-through
networking, since it involves both nova and neutron.
There is already a session registered by Yongli on the nova topic to discuss the
PCI pass-through next steps.
I think PCI pass-through networking is quite a big topic and worth a
separate discussion.
Are there other people who are interested in discussing it and sharing their
thoughts and experience?

Regards,
Irena

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Voting in the Technical Committee election in now open

2013-10-16 Thread Thierry Carrez
Thierry Carrez wrote:
 TC elections are underway and will remain open for you to cast your vote
 until at least 23:59 UTC on Thursday, October 17.

Reminder: voting closes in less than 27 hours.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Steve Baker
On 10/17/2013 02:16 AM, Zane Bitter wrote:
 On 16/10/13 00:48, Steve Baker wrote:
 I've just written some proposals to address Heat's HOT software
 configuration needs, and I'd like to use this thread to get some
 feedback:
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
 https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config


 Wow, nice job, thanks for writing all of this up :)

 Please read the proposals and reply to the list with any comments or
 suggestions.

 For me the crucial question is, how do we define the interface for
 synchronising and passing data from and to arbitrary applications
 running under an arbitrary configuration management system?

Agreed, but I wanted to remove that from the scope of these blueprints,
with the hope that what is done here will be an enabler for a new
sync/messaging mechanism - or at least not make it harder.


 I'm not a big fan of having Heat::Puppet, Heat::CloudInit,
 Heat::Ansible &c. component types insofar as they require your cloud
 provider to support your preferred configuration management system
 before you can use it. (In contrast, it's much easier to teach your
 configuration management system about Heat because you control it
 yourself, and configuration management systems are already designed
 for plugging in arbitrary applications.)

 I'd love to be able to put this control in the user's hands by just
 using provider templates - i.e. you designate PuppetServer.yaml as the
 provider for an OS::Nova::Server in your template and it knows how to
 configure Puppet and handle the various components. We could make
 available a library of such provider templates, but users wouldn't be
 limited to only using those.


I agree you've identified a problem worth solving. The only thing a
component type does in heat-engine is build cloud-init chunks. I'll have
a think about how this can be implemented as something resembling
component providers.
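
As a rough sketch of that cloud-init step (build_userdata and the component
tuple format below are assumptions for illustration, not heat-engine's actual
internals), packing per-component snippets into a multipart user-data blob
could look like:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# cloud-init picks a handler for each part based on its MIME subtype.
PART_SUBTYPES = {
    "cloud-config": "cloud-config",
    "script": "x-shellscript",
}


def build_userdata(components):
    """components: list of (name, kind, body) tuples in execution order."""
    outer = MIMEMultipart()
    for name, kind, body in components:
        part = MIMEText(body, _subtype=PART_SUBTYPES[kind])
        part.add_header("Content-Disposition",
                        'attachment; filename="%s"' % name)
        outer.attach(part)
    return outer.as_string()


userdata = build_userdata([
    ("install_pkgs", "cloud-config", "packages:\n - wordpress\n"),
    ("configure", "script", "#!/bin/bash\necho configuring...\n"),
])
print(userdata[:200])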

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python-coverage: path change from /usr/bin/coverage to /usr/bin/python-coverage

2013-10-16 Thread Pádraig Brady
On 10/16/2013 02:05 PM, Thomas Goirand wrote:
 Hi there,
 
 It appears that in Debian, python-coverage provides the wrapper in
 /usr/bin/python-coverage. I tried to push the current maintainer to
 provide /usr/bin/coverage, but he doesn't agree. He believes that
 coverage is just too generic to be squatted by the python-coverage
 package.
 
 Robert Collins wrote that he sees it as ok-ish if all of the OpenStack
 projects make it so that we could also use /usr/bin/python-coverage.
 What is the view of others in the project? Could the path be checked,
 and then used, so that it works in every case? Of course, the goal
 would be to avoid patching by hand in debian/patches whenever
 possible, because this is a major pain.
 
 Your thoughts?

I agree, coverage is too generic.
But I also see /usr/bin/coverage is used in Red Hat land too :(
I've logged a request for /usr/bin/python-coverage to be
used and to have coverage as a symlink for compat, but to
be phased out over a certain period:
https://bugzilla.redhat.com/1020046

thanks,
Pádraig.
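
For what it's worth, the "check the path, then use it" idea raised above
could be as small as the following sketch; find_coverage_binary is a
hypothetical helper, not an existing OpenStack utility:

import os


def find_coverage_binary(candidates=("coverage", "python-coverage")):
    """Return the first coverage wrapper found on PATH, accepting either
    the generic name or the Debian python-coverage name."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        for name in candidates:
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.access(path, os.X_OK):
                return path
    return None


print(find_coverage_binary() or "no coverage wrapper found on PATH")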


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Havana RC3 available !!

2013-10-16 Thread Thierry Carrez
And... last but not least, the Horizon respin!

We discovered a critical bug in Horizon RC2, preventing operation of
booted-from-volume instances. We decided to fix this issue pre-release
and published a new Havana release candidate for OpenStack Dashboard
(Horizon).

You can find the RC3 tarball and a link to the fixed bug at:

https://launchpad.net/horizon/havana/havana-rc3

This RC3 should be formally included in the common OpenStack 2013.2
final release tomorrow (or is it later today?). Please give this
tarball a round of last-second sanity checks.

Alternatively, you can grab the code at:
https://github.com/openstack/horizon/tree/milestone-proposed

If you find a regression that could be considered release-critical,
it's probably a bit late to get it fixed at that point. Still, please
file it at https://bugs.launchpad.net/horizon/+filebug and tag
it *havana-rc-potential* so that it's properly documented in our release
notes as a known bug.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Steve Baker
On 10/17/2013 03:11 AM, Thomas Spatzier wrote:
 Hi Steve,

 thanks a lot taking the effort to write all this down. I had a look at both
 wiki pages and have some comments below. This is really from the top of my
 head, and I guess I have to spend some more time thinking about it, but I
 wanted to provide some feedback anyway.

 On components vs. resources:
 So the proposal says clearly that the resource concept is only used for
 things that get accessed and managed via their APIs, i.e. services provided
 by something external to Heat (nova, cinder, etc), while software is
 different and therefore modeled as components, which is basically fine (and
 I also suggested this in my initial proposal .. but was never quite sure).
 Anyway, I think we also need some APIs to access software components (not
 the actual installed software, but the provider managing it), so we can get
 the state of a component, and probably also manage the state to do
 meaningful orchestration. That would bring it close to the resource concept
 again, or components (the providers) would have to provide some means for
 getting access to state etc.
As I've mentioned, messaging/sync is out of scope for these blueprints
but it would be good to start exploring the options now.
 Why no processing of intrinsic functions in config block?
 ... that was actually a question that came up when I read this first, but
 maybe is resolved by some text further down in the wiki. But I wanted to
 ask for clarification. I thought having intrinsic functions could be helpful
 for passing parameters around, and also for implying dependencies. A bit
 further down some concept for parameter passing to the config providers is
 introduced, and for filling the parameters, intrinsic functions can be
 used. So do I get it right that this would enable the dependency building
 and data passing?
Some reasons for not allowing intrinsic functions in components:
- it allows component config to be purely represented in the syntax of
the CM tool without any reference to Heat constructs
- it allows the CM's native variable handling to be used instead of
bypassing it by doing config substitutions
- it simplifies the implementation of components in heat engine - no
resource or component dependencies, no function evaluation
 Regarding pointer from a server's components section to components vs. a
 hosted_on relationship:
 The current proposal is in fact (or probably) isomorphic to the hosted_on
 links from my earlier proposal. However, having pointers from the lower
 layer (servers) to the upper layers (software) seems a bit odd to me. It
 would be really nice to get clean decoupling of software and infrastructure
 and not just the ability to copy and paste the components and then having
 to define server resources specifically to point to components. The
 ultimate goal would be to have app layer models and infrastructure models
 (e.g. using the environments concept and provider resources) and some way
 of binding app components to one or multiple servers per deployment (single
 server in test, clustered in production).
 Maybe some layer in between is necessary, because neither my earlier
 hosted_on proposal nor the current proposal does that.
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config#Composability

To me, this addresses the separation of architecture from application
configuration. Components don't make any reference to the architecture
of the stack. Server resources only refer to components by name. I would
say that specifying what pieces of configuration run where is in the
domain of architecture. You could even choose component names which make
no reference to the specific application - making the OS::Nova::Server
resource definitions a more pure representation of architecture.
 Why no depends_on (or just dependency) between components?
 Ordering in components is ok, but I think it should be possible to express
 dependencies between components across servers. Whether or not a
 depends_on relationship is the right thing to express this, or just a
 more simple dependency notation can be discussed, but I think we need
 something. In my approach I tried to come up with one section
 (relationship) that is the place for specifying all sorts of links,
 dependency being one, just to come up with one extensible way of expressing
 things.
 Anyway, having the ability to manage dependencies by Heat seems necessary.
 And I would not pass the ball completely to the other tools outside of
 Heat. First of all, doing things in those other tools also gets complicated
 (e.g. while chef is good on one server, doing synchronization across
 servers can get ugly). And Heat has the ultimate knowledge about what
 servers it created, their IP addresses etc, so it should lead
 orchestration.
See the second depends_on bullet point here
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config#Why_not_relationships_hosted_on.2C_depends_on.2C_connects_to.3F
I'll paste the 

[openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-16 Thread Boris Pavlovic
Hi Stackers,


We are thrilled to present to you Rally, the benchmarking system for
OpenStack.


It is not a secret that we have performance & scaling issues and that
OpenStack won't scale out of the box. It is also well known that if you get
your super big DC (5k-15k servers) you are able to find & fix all OpenStack
issues in a few months (as Rackspace, BlueHost & others have proved). So
the problem with performance at scale is solvable.


The main blocker to fixing such issues in the community is that there is no simple
way to get relevant and repeatable “numbers” that represent OpenStack
performance at scale. It is not enough to tune an individual OpenStack
component, because its performance at scale is no guarantee that it will
not introduce a bottleneck somewhere else.


The correct approach to comprehensively test OpenStack scalability, in our
opinion, consists of the following four steps:

1)  Deploy OpenStack
2)  Create load by simultaneously making OpenStack API calls
3)  Collect performance and profile data
4)  Make data easy to consume by presenting it in a human-readable form


Rally is the system that implements all the steps above, plus it maintains
an extensible repository of standard performance tests. To use Rally, a
user has to specify where to deploy OpenStack, select the deployment mechanism
(DevStack, TripleO, Fuel, etc.), and the set of benchmarking tests to run.
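
To give a feel for what step 2 amounts to, here is a hedged sketch that
generates concurrent API load with a thread pool and the Havana-era
novaclient v1_1 interface (credentials and endpoint are placeholders);
Rally's real scenario runners add deployment handling, profiling and
reporting on top of this:

import time
from concurrent.futures import ThreadPoolExecutor

from novaclient.v1_1 import client  # assumption: Havana-era client module


def one_request(nova):
    start = time.time()
    nova.servers.list()          # the API call whose latency we measure
    return time.time() - start


def run_load(nova, concurrency=10, iterations=100):
    # Fire `iterations` calls through a pool of `concurrency` workers.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        durations = list(pool.map(lambda _: one_request(nova),
                                  range(iterations)))
    print("avg %.3fs, max %.3fs over %d calls"
          % (sum(durations) / len(durations), max(durations), iterations))


if __name__ == "__main__":
    nova = client.Client("admin", "secret", "demo",
                         "http://keystone:5000/v2.0")
    run_load(nova)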

For more details and how to use it, take a look at our wiki
https://wiki.openstack.org/wiki/Rally; it should already work out of the box.


Happy hunting!


Links:

1. Code: https://github.com/stackforge/rally

2. Wiki: https://wiki.openstack.org/wiki/Rally

3. Launchpad: https://launchpad.net/rally

4. Statistics:
http://stackalytics.com/?release=havana&project_type=All&module=rally

5. RoadMap: https://wiki.openstack.org/wiki/Rally/RoadMap


Best regards,
Boris Pavlovic
---
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Tenant's info from plugin/services

2013-10-16 Thread Ivar Lazzaro
Hello Everyone,

I was wondering if there's a "standard" way within Neutron to retrieve tenants' 
specific information (e.g. "name") from a plugin/service.
The call "context" already provides the tenant's UUID, but it may be useful 
to have some extra info in certain cases.

Thanks,
Ivar.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Tenant's info from plugin/services

2013-10-16 Thread Yongsheng Gong
I think this should be done in keystone. Maybe you need a CLI command:
keystone tenant-get
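
From Python (for example inside a plugin), the equivalent of that CLI call
could look like the sketch below, using the keystoneclient v2.0 API of that
era; the service credentials and auth_url are placeholders:

from keystoneclient.v2_0 import client as ksclient


def get_tenant_name(tenant_id):
    # Placeholder service credentials and endpoint.
    keystone = ksclient.Client(username="neutron",
                               password="secret",
                               tenant_name="service",
                               auth_url="http://keystone:35357/v2.0")
    return keystone.tenants.get(tenant_id).name


print(get_tenant_name("f81d4fae7dec11d0a76500a0c91e6bf6"))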


On Thu, Oct 17, 2013 at 6:55 AM, Ivar Lazzaro i...@embrane.com wrote:

  Hello Everyone,


 I was wondering if there’s a “standard” way within Neutron to retrieve
 tenants’ specific information (e.g. “name”) from a plugin/service.

 The call “context” already provides the tenant’s UUID, but that may be
 useful to have some extra info in certain cases.


 Thanks,

 Ivar.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-16 Thread Mike Wilson
I need to understand better what holistic scheduling means, but I agree
with you that this is not exactly what Boris has raised as an issue. I
don't have a rock-solid design for what I want to do, but at least the
objectives I want to achieve are that spinning up more schedulers improves
your response time and ability to schedule, perhaps at the cost of the
accuracy of the answer (just good enough) and the need to retry your
request against several scheduler threads. I will try to look for more
resources to understand holistic scheduling; a quick Google search takes
me to a bunch of EE and manufacturing engineering type papers. I'll do more
research on this.
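
To illustrate that "good enough, then retry" idea with a toy sketch (every
name below is made up for the example): each scheduler picks from a possibly
stale view of capacity and the claim is verified optimistically, retrying on
conflict.

import random

hosts = {"host1": 4, "host2": 2, "host3": 0}   # authoritative free slots


def pick_host(view):
    # Pick any host that looks free in the (possibly stale) view.
    candidates = [h for h, free in view.items() if free > 0]
    return random.choice(candidates) if candidates else None


def claim(host):
    """Consume a slot on the authoritative store; False if we raced."""
    if hosts[host] > 0:
        hosts[host] -= 1
        return True
    return False


def schedule(view, max_retries=3):
    for _ in range(max_retries):
        host = pick_host(view)
        if host is None:
            break
        if claim(host):
            return host
        view = dict(hosts)   # refresh the view and try again
    raise RuntimeError("no valid host after retries")


print(schedule(dict(hosts)))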

However, this does fit under performance for sure; it is not unrelated at
all. If there is a chance to incorporate this into a performance session, I
think this is where it belongs.

-Mike Wilson


On Mon, Oct 14, 2013 at 9:53 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Yes, Rethinking Scheduler Design
 http://summit.openstack.org/cfp/details/34 is not the same as the
 performance issue that Boris raised.  I think the former would be a natural
 consequence of moving to an optimization-based joint decision-making
 framework, because such a thing necessarily takes a good enough attitude.
  The issue Boris raised is more efficient tracking of the true state of
 resources, and I am interested in that issue too.  A holistic scheduler
 needs such tracking, in addition to the needs of the individual services.
  Having multiple consumers makes the issue more interesting :-)

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Tenant's info from plugin/services

2013-10-16 Thread Ravi Chunduru
As Yongsheng said, use keystone tenant-list. We overload the keystone tenant
with a lot more tenant-specific information as metadata and use it in other
OpenStack services.


On Wed, Oct 16, 2013 at 4:11 PM, Yongsheng Gong gong...@unitedstack.com wrote:

 I think this should be done in keystone. maybe you need a CLI command:
 keystone tenant-get


 On Thu, Oct 17, 2013 at 6:55 AM, Ivar Lazzaro i...@embrane.com wrote:

  Hello Everyone,


 I was wondering if there’s a “standard” way within Neutron to retrieve
 tenants’ specific information (e.g. “name”) from a plugin/service.

 The call “context” already provides the tenant’s UUID, but that may be
 useful to have some extra info in certain cases.


 Thanks,

 Ivar.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow meeting at 2000 UTC

2013-10-16 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is tomorrow, 
2013-10-17!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss ongoing status of the overall effort and any needed coordination.
- Declare 0.1 release and clap/pat on back!
- Continue discussion as for integration next steps for icehouse.
- Continue discussion as for HK summit speaker ideas.
-   https://etherpad.openstack.org/p/TaskflowHKIdeas (ongoing)
- If time provides discuss possible interest in a FSM pattern.
-   https://etherpad.openstack.org/p/CinderTaskFlowFSM (ongoing)
- If time provides discuss mistral and its use-cases.
-   https://wiki.openstack.org/wiki/Mistral (just announced!)
- Discuss about any other ideas, problems, open-reviews, issues, solutions, 
questions (and more!).

Any other topics are welcome :-)

See you all soon!

--
PS: A special thanks to all those that made 0.1 release possible!! :-)
--

Joshua Harlow

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] QoS API Extension update

2013-10-16 Thread Itsuro ODA
Hi,

I'd like to support linuxbridge QoS. 

I submitted the following BP:
https://blueprints.launchpad.net/neutron/+spec/ml2-qos-linuxbridge

I won't attend the summit but Toshihiro, my co-worker will attend
the summit and attend the QoS session.

Thanks.
Itsuro Oda

On Wed, 16 Oct 2013 17:21:36 +
Alan Kavanagh alan.kavan...@ericsson.com wrote:

 Cheers Sean
 
 I will take a look at the wiki and update accordingly. I took a look at your 
 BP, it's right along the lines of what I feel is also needed and what we are 
 planning to submit (being finalised as I write this email), though we are also 
 adding some additional QoS attributes to be supported based on OVS as one 
 source. I took a look at your API and the BP we are going to submit is very 
 much in line with and complementary to yours, hence why I think we can actually 
 combine them and do a joint pitch on this... at least that's my thinking on 
 it!
 
 Will send BP as soon as its finalised ;-)
 
 BR
 Alan
 
 -Original Message-
 From: Sean M. Collins [mailto:s...@coreitpro.com] 
 Sent: October-16-13 12:08 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Neutron] QoS API Extension update
 
 On Wed, Oct 16, 2013 at 03:45:29PM +, Alan Kavanagh wrote:
  Will hopefully submit that really soon. Perhaps we can look at combining 
  both of them and discuss this in Hong Kong as I have looked over your BP 
  and I can see some benefit in combining them both.
 
 Hi Alan,
 
 That sounds great - the objective of my BP was to try and make a QoS API 
 extension that was flexible enough that everyone could make their own 
 implementation. At this point, this is accomplished by storing key/value 
 pairs, linked back to a QoS object via the policies attribute (which 
 maps to the qos_policies table), that hold implementation-specific 
 behavior/configuration.
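 
 To make the shape of that concrete, here is a hedged sketch of such a request
 body; only the qos object and its free-form policies key/value bag come from
 the description above, the other attribute names are assumptions:
 
 # Only `qos` and the key/value `policies` bag follow the description above;
 # `type` and `description` are assumed attributes for illustration.
 qos_request = {
     "qos": {
         "type": "ratelimit",
         "description": "cap outbound traffic for bronze tenants",
         "policies": {                 # maps to the qos_policies table
             "kbps": "10240",          # implementation-specific keys,
             "burst_kb": "2048",       # interpreted by the backend driver
         },
     }
 }
 
 print(qos_request["qos"]["policies"])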
 
 There is also a wiki page that has some useful links:
 
 https://wiki.openstack.org/wiki/Neutron/QoS
 
 
 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Itsuro ODA o...@valinux.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-16 Thread Mike Spreitzer
Mike Wilson geekinu...@gmail.com wrote on 10/16/2013 07:13:17 PM:

 I need to understand better what holistic scheduling means, ...

By holistic I simply mean making a joint decision all at once about a 
bunch of related resources of a variety of types.  For example, making a 
joint decision about where to place a set of VMs and the Cinder volumes 
that will be attached to the VMs.
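
As a toy example of what "joint" buys you, score the VM host and the volume
backend together (e.g. preferring co-location) instead of deciding each one
in isolation; purely illustrative, all names made up:

from itertools import product

compute_hosts = {"rack1-node1": "rack1", "rack2-node1": "rack2"}
volume_backends = {"rack1-ceph": "rack1", "rack2-ceph": "rack2"}


def joint_cost(vm_host, vol_backend):
    # Cheaper when the VM and its volume land in the same rack.
    return 0 if compute_hosts[vm_host] == volume_backends[vol_backend] else 10


best = min(product(compute_hosts, volume_backends),
           key=lambda pair: joint_cost(*pair))
print("place VM on %s, volume on %s" % best)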

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit session proposal deadline

2013-10-16 Thread Morgan Fainberg
I agree that the extensions and internal API discussions can likely become one
session. I think the API side of that won't need to fill a whole session.

On Wednesday, October 16, 2013, Adam Young wrote:

 On 10/16/2013 12:23 PM, Dolph Mathews wrote:

 I'll be finalizing the design summit schedule [1] for keystone
 following the weekly meeting [2] on Tuesday, October 22nd 18:00 UTC.
 Please have your proposals submitted before then.

 So far I think everyone has done a GREAT job self-organizing the
 proposed sessions to avoid overlap, but we currently have two more
 proposals than we do slots. During the meeting, we'll review which
 sessions should be split, combined or cut.

 Lastly, if you have comments on a particular session regarding scope
 or scheduling, *please* take advantage of the new comments section at
 the bottom of each session proposal. Such feedback is highly
 appreciated!

  [1]: http://summit.openstack.org/cfp/topic/10
  [2]: https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

 Thanks!

 -Dolph

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Some suggestions:

  V3 API domain-scoped tokens and Henry Nash's purification of Assignments
  proposal are both dealing with the scoping and binding of authorization
  decisions.


 Internal API stabilization and Extensions are both about code management,
 and can be combined.  I think

 Auditing is going to be bigger than just Keystone, as it happens based on
 Policy enforcement.  I suspect that this session should be where we discuss
 the Keystone side of Policy.

 Token Revocation and the client and auth_token middleware are all related
 topics.

  We discussed Quota storage in Keystone last summit.  We have pretty good
  progress on the blueprint.  Do we really need to discuss this again, or do
  we just need to implement it?

 The HTML talk should probably pull in members from the Horizon team.  I
  would almost want to merge it with http://summit.openstack.org/cfp/details/3
  "UX and Future Direction of OpenStack Dashboard" or
  http://summit.openstack.org/cfp/details/161 "Separate Horizon and OpenStack
  Dashboard", as we can discuss how we will
  split responsibility for managing administration and customization.  If
  they have an open slot, we might be able to move this to a Horizon talk.

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev