Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-18 Thread Joe Gordon
On Thu, Jan 19, 2017 at 10:27 AM, Matt Riedemann  wrote:

> On 1/18/2017 4:53 AM, Jens Rosenboom wrote:
>
>> To me it looks like the times of 2G are long gone, Nova is using
>> almost 2G all by itself. And 8G may be getting tight if additional
>> stuff like Ceph is being added.
>>
>>
> I'm not really surprised at all about Nova being a memory hog with the
> versioned object stuff we have, which does its own nesting of objects.
>
> What tools do people use to profile the memory usage by the
> types of objects in memory while this is running?


objgraph and guppy/heapy

http://smira.ru/wp-content/uploads/2011/08/heapy.html

https://www.huyng.com/posts/python-performance-analysis

You can also use gc.get_objects() (
https://docs.python.org/2/library/gc.html#gc.get_objects) to get a list of
all objects in memory and go from there.
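
For example, a quick way to see which object types dominate the heap, using
only the stdlib (a minimal sketch):

    # Tally gc-tracked objects by type name to see what dominates.
    import gc
    from collections import Counter

    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    for name, n in counts.most_common(10):
        print("%8d  %s" % (n, name))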

Slots (https://docs.python.org/2/reference/datamodel.html#slots) are useful
for reducing the memory usage of objects.
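
A rough illustration of the difference (exact sizes vary by interpreter and
build):

    # A __slots__ class drops the per-instance __dict__, which is where
    # most of the per-object overhead lives.
    import sys

    class Plain(object):
        def __init__(self):
            self.a, self.b = 1, 2

    class Slotted(object):
        __slots__ = ('a', 'b')
        def __init__(self):
            self.a, self.b = 1, 2

    p, s = Plain(), Slotted()
    print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))  # instance + its dict
    print(sys.getsizeof(s))                              # no __dict__ at all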


>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my concerns

2015-09-11 Thread Joe Gordon
On Fri, Sep 11, 2015 at 2:30 PM, Jim Meyer  wrote:

> On Sep 11, 2015, at 12:45 PM, Shamail Tahir  wrote:
>
> On Fri, Sep 11, 2015 at 3:26 PM, Joshua Harlow 
> wrote:
>
>> Hi all,
>>
>> I was reading over the TC IRC logs for this week (my weekly reading) and
>> I just wanted to let my thoughts and comments be known on:
>>
>>
>> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>>
>> I feel it's very important to send a positive note for new/upcoming
>> projects and libraries... (and for everyone to remember that most projects
>> do start off with a small set of backers). So I just wanted to try to
>> ensure that we send a positive note with any tag like this that gets
>> created and applied and that we all (especially the TC) really really
>> considers the negative connotations of applying that tag to a project (it
>> may effectively ~kill~ that project).
>>
>> I would really appreciate that instead of just applying this tag (or
>> other similarly named tag to projects) that instead the TC try to actually
>> help out projects with those potential tags in the first place (say perhaps
>> by actively listing projects that may need more contributors from a variety
>> of companies on the openstack blog under say a 'HELP WANTED' page or
>> something). I'd much rather have that vs. any said tags, because the latter
>> actually tries to help projects, vs just stamping them with a 'you are bad,
>> figure out how to fix yourself, because you are not diverse' tag.
>>
>> I believe it is the TC job (in part) to help make the community better,
>> and not via tags like this that IMHO actually make it worse; I really hope
>> that folks on the TC can look back at their own projects they may have
>> created and ask how would their own project have turned out if they were
>> stamped with a similar tag…
>
>
> First, strongly agree:
>
> *Tags should be positive attributes or encouragement, not negative or
> discouraging. *I think they should also be as objectively true as
> possible. Which Monty Taylor said later[1] in the discussion and Jay Pipes
> reiterated[2].
>
> I agree with Josh and, furthermore, maybe a similar "warning" could be
> implicitly made by helping the community understand why the
> "diverse-affiliation" tag matters.  If we (through education on tags in
> general) stated that the reason diverse-affiliation matters, amongst other
> things, is because it shows that the project can potentially survive a
> single contributor changing their involvement, then wouldn't that achieve
> the same purpose of showing stability/mindshare/collaboration for projects
> with diverse-affiliation tag (versus those that don't have it) and make
> them more "preferred" in a sense?
>
>
> I think I agree with others, most notably Doug Hellman[3] in the TC
> discussion; we need a marker of the other end of the spectrum. The absence
> of information is only significant if you know what’s missing and its
> importance.
>
> Separately, I agree that more education around tags and their importance
> is needed.
>
> I understand the concern is that we want to highlight the need for
> diversity, and I believe that instead of “danger-not-diverse” we’d be
> better served by “increase-diversity” or “needs-diversity” as the other end
> of the spectrum from “diverse-affiliation.” And I’ll go rant on the review
> now[4]. =]
>

Thank you for actually providing a review of the patch. I will respond to
the feedback in gerrit.



>
> —j
>
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-378
> [2]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-422
> [3]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-330
> [4] https://review.openstack.org/#/c/218725/
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 9/4 state of the gate

2015-09-05 Thread Joe Gordon
On Fri, Sep 4, 2015 at 6:43 PM, Matt Riedemann 
wrote:

>
>
> On 9/4/2015 3:13 PM, Matt Riedemann wrote:
>
>> There are a few things blowing up in the last 24 hours so might as well
>> make people aware.
>>
>> 1. gate-tempest-dsvm-large-ops was failing at a decent rate:
>>
>> https://bugs.launchpad.net/nova/+bug/1491949
>>
>> Turns out devstack was changed to run multihost=true and that doesn't
>> work so well with the large-ops job that's creating hundreds of fake
>> instances on a single node.  We reverted the devstack change so things
>> should be good there now.
>>
>>
>> 2. gate-tempest-dsvm-cells was regressed because nova has an in-tree
>> blacklist regex of tests that don't work with cells and renaming some of
>> those in tempest broke the regex.
>>
>> https://bugs.launchpad.net/nova/+bug/1492255
>>
>> There is a patch in the gate but it's getting bounced on #3.  Long-term
>> we want to bring that blacklist regex down to 0 and instead use feature
>> toggles in Tempest for the cells job, we just aren't there yet.  Help
>> wanted...
>>
>>
>> 3. gate-tempest-dsvm-full-ceph is broken with glance-store 0.9.0:
>>
>> https://bugs.launchpad.net/glance-store/+bug/1492432
>>
>> It looks like the gate-tempest-dsvm-full-ceph-src-glance_store job was
>> not actually testing trunk glance_store code because of a problem in the
>> upper-constraints.txt file in the requirements repo - pip was capping
>> glance_store at 0.8.0 in the src job so we actually haven't been testing
>> latest glance-store.  dhellmann posted a fix:
>>
>> https://review.openstack.org/#/c/220648/
>>
>> But I'm assuming glance-store 0.9.0 is still busted. I've posted a
>> change which I think might be related:
>>
>> https://review.openstack.org/#/c/220646/
>>
>> If ^ fixes the issue we'll need to blacklist 0.9.0 from
>> global-requirements.
>>
>> --
>>
>> As always, it's fun to hit this stuff right before the weekend,
>> especially a long US holiday weekend. :)
>>
>>
> I haven't seen the elastic-recheck bot comment on any changes in awhile
> either so I'm wondering if that's not running.
>

Looks like there was a suspicious 4-day gap in elastic-recheck, but it
appears to be running again?

$ ./lastcomment.py
Checking name: Elastic Recheck
[0] 2015-09-06 01:12:40 (0:35:54 old) https://review.openstack.org/220386
'Reject the cell name include '!', '.' and '@' for Nova API'
[1] 2015-09-02 00:54:54 (4 days, 0:53:40 old)
https://review.openstack.org/218781 'Remove the unnecassary
volume_api.get(context, volume_id)'
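
The check above boils down to asking Gerrit when the bot last commented; a
rough sketch of such a query (not the actual lastcomment.py code), assuming
the requests library:

    # Ask Gerrit for changes recently reviewed by the bot; results come
    # back most-recently-updated first. Gerrit prefixes JSON responses
    # with ")]}'" to defeat XSSI, hence the split.
    import json
    import requests

    url = "https://review.openstack.org/changes/"
    params = {"q": 'reviewer:"Elastic Recheck"', "n": 5}
    body = requests.get(url, params=params).text
    for c in json.loads(body.split("\n", 1)[1]):
        print(c["updated"], c["_number"], c["subject"])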


>
> Also, here is another new(ish) gate bug I'm just seeing tonight (bumped a
> fix for #3 above):
>
> https://bugs.launchpad.net/keystonemiddleware/+bug/1492508
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Joe Gordon
On Aug 28, 2015 6:49 AM, Sean Dague s...@dague.net wrote:

 On 08/28/2015 09:32 AM, Alex Meade wrote:
  I don't know if this is really a big problem. IMO, even with
  microversions you shouldn't be implementing things that aren't backwards
  compatible within the major version. I thought the benefit of
  microversions is to know if a given feature exists within the major
  version you are using. I would consider a breaking change to be a major
  version bump. If we only do a microversion bump for a backwards
  incompatible change then we are just using microversions as major
versions.

 In the Nova case, Microversions aren't semver. They are content
 negotiation. Backwards incompatible only means something if time's arrow
 only flows in one direction. But when connecting to a bunch of random
 OpenStack clouds, there is no forced progression into the future.

 While each service is welcome to enforce more compatibility for the sake
 of their users, one should not assume that microversions are semver as a
 base case.

 I agree that 'latest' is basically only useful for testing. The

Sounds like we need to update the docs for this.

 python-novaclient code requires a microversion be specified on the API
 side, and on the CLI side negotiates to the highest version of the API
 that it understands which is supported on the server -

https://github.com/openstack/python-novaclient/blob/d27568eab50b10fc022719172bc15666f3cede0d/novaclient/__init__.py#L23

Considering how unclear these two points appear to be, are they clearly
documented somewhere, so that as more projects embrace microversions they
don't end up having the same discussion?
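
The negotiation described above amounts to picking the highest microversion
both sides understand; a minimal sketch (illustrative, not
python-novaclient's actual implementation):

    # Pick the highest microversion supported by both client and server,
    # or fail loudly if there is no overlap.
    CLIENT_MAX = (2, 11)  # highest microversion this client understands

    def negotiate(server_min, server_max):
        if server_min > CLIENT_MAX:
            raise RuntimeError("server requires a newer client")
        return min(CLIENT_MAX, server_max)

    print(negotiate((2, 1), (2, 6)))   # -> (2, 6)
    print(negotiate((2, 1), (2, 25)))  # -> (2, 11), the client's ceiling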


 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] CI for reliable live-migration

2015-08-26 Thread Joe Gordon
On Wed, Aug 26, 2015 at 8:18 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 8/26/2015 3:21 AM, Timofei Durakov wrote:

 Hello,

 Here is the situation: nova has a live-migration feature but doesn't have a
 ci job to cover it with functional tests, only
 gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
 block-migration only.
 The problem here is that live-migration can behave differently depending
 on how the instance was booted (volume-backed/ephemeral), how the environment
 is configured (a shared instance directory (NFS, for example), or RBD used
 to store ephemeral disks), or whether the user has none of that and is
 going to use the --block-migrate flag. To claim that we have reliable
 live-migration in nova, we should check it at least on envs with rbd or
 nfs, as these are more popular than envs without shared storage at all.
 Here are the steps for that:

  1. make gate-tempest-dsvm-multinode-full voting, as it looks OK for
 block-migration testing purposes;


When we are ready to make multinode voting we should remove the equivalent
single node job.



 If it's been stable for awhile then I'd be OK with making it voting on
 nova changes, I agree it's important to have at least *something* that
 gates on multi-node testing for nova since we seem to break this a few
 times per release.


Last I checked it isn't as stable as single node yet:
http://jogo.github.io/gate/multinode [0].  The data going into graphite is
a bit noisy so this may be a red herring, but at the very least it needs to
be investigated. When I was last looking into this there were at least two
known bugs:

https://bugs.launchpad.net/nova/+bug/1445569
https://bugs.launchpad.net/nova/+bug/1462305


[0]
http://graphite.openstack.org/graph/?from=-36hours&height=500&until=now&width=800&bgcolor=ff&fgcolor=00&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-full%27),%27orange%27)&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-multinode-full%27),%27brown%27)&title=Check%20Failure%20Rates%20(36%20hours)
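
Decoded, each target in that graph is a five-hour moving average of the
job's failure percentage:

    movingAverage(
      asPercent(
        stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,
        sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.{SUCCESS,FAILURE})),
      '5hours')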



  2. contribute to tempest to cover volume-backed instances live-migration;


 jogo has had a patch up for this for awhile:

 https://review.openstack.org/#/c/165233/

 Since he's not full time on openstack anymore, I assume some help there in
 picking up the change would be appreciated.


yes please



  3. make another job with rbd for storing ephemerals; it also requires
 changing tempest config;


 We already have a voting ceph job for nova - can we turn that into a
 multi-node testing job and run live migration with shared storage using
 that?


  4. make a job with nfs for ephemerals.


 Can't we use a multi-node ceph job (#3) for this?


 These steps should help us to improve the current situation with
 live-migration.

 --
 Timofey.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-26 Thread Joe Gordon
On Fri, Jun 26, 2015 at 7:39 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 On 06/26/2015 04:08 PM, Sean Dague wrote:

 On 06/26/2015 07:43 AM, Dmitry Tantsur wrote:

 On 06/26/2015 01:14 PM, Sean Dague wrote:

 On 06/16/2015 09:51 AM, Dmitry Tantsur wrote:

 On 06/16/2015 08:56 AM, Dmitry Tantsur wrote:

  To sum this long post up, I'm seeing that hiding new features based on
  microversions brings many more problems than it solves (I'm not aware
  of the latter at all). I'm very opposed to continuing to do it in
  Ironic, and I'm going to propose a patch stopping gating Kilo changes
  (non-breaking obviously).


 I'm talking about this patch: https://review.openstack.org/#/c/192196/
 We have to do it right now, as otherwise we can't test inspection in
 tempest (it does not seem to be microversion-aware).


 Dmitry,

 How do you solve for the following situation?

 2 Clouds (A and B), both being Continuously Deployed.

 Assume both clouds start at same revision of code. At point in time T a
  new compatible change is added to the API. For instance, another field
 returned by some resource.

 Cloud B upgrades to that change.

 Brand new developer shows up. Starts writing application against Cloud
 B. Sees that change is available at version 1.4. Hard codes her
 application to use this parameter.

 Then she points her application at Cloud A. And it explodes.


 I clearly agree that my solutions do not solve this situation. Neither
 does yours. Now let us see:

 A compatible change is getting added and is guarded behind version 1.5.
 A new developer starts requiring this new version, because she needs
 this new feature (that's your assumption).

 Then she points her application at cloud A. And it explodes. But with
 different error message and probably a bit earlier. But explodes.

 Which, by the way, means we still must know precisely which API version
 is served by all clouds we might apply our utility to.


  But it fails with "Your application is attempting to use API 1.5 which
  is not supported in this cloud." Because if you don't specify a version,
  you get the base 1.0, and never had new features.

  That's an error which makes it extremely clear what went wrong.
 Has docs to figure out if the application could work at an earlier
 release version, and provides the ability for the client to do some
 selection logic based on supported versions.


 I agree that error is clear. I'm not sure it's worth doing despite all the
 problems that I mentioned in the beginning of this thread. In particular,
  the situation around the official CLI and testing.

 E.g. do you agree with Jim that we should make API version a required
 argument for both CLI and Python library?



 -Sean


  So I agree, hitting an API version error could make a person realize that a
  change to the utility is no longer compatible with the current version
  of the API. So if this change wasn't intended - fine. If it was (which is
  the most likely situation), it won't help you.

  By the way, I've heard some people wanting to deprecate API versions with
  time. If we do so, it will open some new horizons for breakages.
  If we don't, we'll soon end up with dozens of versions to support and
  test. How to solve it? (in Ironic IIRC we only gate-test one
  version, namely Kilo aka 1.6, which is very, very bad IMO).



 I feel like in your concerns there has been an assumption that the
 operation across all clouds effectively always goes forwards. But
 because we're trying to encourage and create a multicloud ecosystem a
 user application might experience the world the following way.

  Cloud A -> Cloud A' -> Cloud A'' -> Cloud D -> Cloud B' -> Cloud C ->
  Cloud C'.


 I think that versioning actually encourages this (wrong) assumption.
 Because versions grow with time naturally, and we, the developers, like
 new and shiny stuff so much :) see below for my alternative idea.


 While no individual cloud is likely to downgrade code (though we can't
 fully rule that out), the fact that we'd like applications to work
 against a wide range of deployments means effectively applications are
 going to experience the world as if the code bases both upgrade and
 downgrade over time.

  Which means that a change is only compatible if the inverse of the
  change is also compatible. So a field add is only compatible if the
 field delete is also considered compatible, because people are going to
 experience that when they hop to another cloud.

 Which is also why feature hiding is a thing. Because we don't control
 when every cloud is going to upgrade, and what they'll upgrade to. So
 the best idea so far about getting this right is that API 1.4 is
  *exactly* a specific surface. Features added in 1.5 are not visible if
 you ask for 1.4. Because if they were, and you now wrote applications
 that used that bleed through, when you jump to a cloud without 1.5 yet
 deployed, all your application code breaks.


 Note that if you rely on version 1.4 with feature hiding, your
 

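The feature hiding Sean describes -- a request for 1.4 sees exactly the 1.4
surface -- can be sketched roughly as follows (illustrative, not Nova's or
Ironic's actual code):

    # Expose only the fields that existed at the requested microversion.
    FIELD_ADDED_IN = {
        "id": (1, 0),
        "name": (1, 0),
        "new_shiny_field": (1, 5),  # hypothetical field added in 1.5
    }

    def serialize(resource, requested_version):
        return {
            field: value
            for field, value in resource.items()
            if FIELD_ADDED_IN.get(field, (1, 0)) <= requested_version
        }

    resource = {"id": 42, "name": "node-1", "new_shiny_field": "x"}
    print(serialize(resource, (1, 4)))  # {'id': 42, 'name': 'node-1'}
    print(serialize(resource, (1, 5)))  # includes new_shiny_field
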
Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-26 Thread Joe Gordon
On Wed, Jun 24, 2015 at 11:44 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Jun 24, 2015 at 11:03 AM, Sean Dague s...@dague.net wrote:

 On 06/24/2015 01:41 PM, Russell Bryant wrote:
  On 06/24/2015 01:31 PM, Joe Gordon wrote:
 
 
  On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  Back when Nova first wanted to test partial upgrade, we did a
 bunch of
  slightly odd conditionals inside of grenade and devstack to make
 it so
  that if you were very careful, you could just not stop some of the
 old
  services on a single node, upgrade everything else, and as long as
 the
  old services didn't stop, they'd be running cached code in memory,
 and
  it would look a bit like a 2 node worker not upgraded model. It
 worked,
  but it was weird.
 
  There has been some interest by the Nova team to expand what's not
 being
  touched, as well as the Neutron team to add partial upgrade testing
  support. Both are great initiatives, but I think going about it
 the old
  way is going to add a lot of complexity in weird places, and not
 be as
  good of a test as we really want.
 
  Nodepool now supports allocating multiple nodes. We have a
 multinode job
  in Nova regularly testing live migration using this.
 
  If we slice this problem differently, I think we get a better
  architecture, a much easier way to add new configs, and a much more
  realistic end test.
 
  Conceptually, use devstack-gate multinode support to set up 2
 nodes, an
  all in one, and a worker. Let grenade upgrade the all in one,
 leave the
  worker alone.
 
  I think the only complexity here is the fact that grenade.sh
 implicitly
  drives stack.sh. Which means one of:
 
  1) devstack-gate could build the worker first, then run grenade.sh
 
  2) we make it so grenade.sh can execute in parts more easily, so
 it can
  hand something else running stack.sh for it.'
 
  3) we make grenade understand the subnode for partial upgrade, so
 it
  will run the stack phase on the subnode itself (given credentials).
 
  This kind of approach means deciding which services you don't want
 to
  upgrade doesn't require devstack changes, it's just a change of the
  services on the worker.
 
  We need a volunteer for taking this on, but I think all the follow
 on
  partial upgrade support will be much much easier to do after we
 have
  this kind of mechanism in place.
 
 
  I think this is a great approach for the future of partial upgrade
  support in grenade. I would like to point out step 0 here, is to get
  tempest passing consistently in multinode.
 
  Currently the neutron job is failing consistently, and nova-network
  fails roughly 10% of the time due
  to https://bugs.launchpad.net/nova/+bug/1462305
  and https://bugs.launchpad.net/nova/+bug/1445569
 
  If multi-node isn't reliable more generally yet, do you think the
  simpler implementation of partial-upgrade testing could proceed?  I've
  already done all of the patches to do it for Neutron.  That way we could
  quickly get something in place to help block regressions and work on the
  longer-term multinode refactoring without as much time pressure.

 The thing is, these partial service bits are sneakier than one realizes
 over time. There have been all kinds of edge conditions that crept up on
 the n-cpu one that are really subtle because code is running in memory
 on stale versions of dependencies which are no longer on disk. And the
 number of people that have this model in their head is basically down to
 a SPOF.


 I agree; as the author of the current multinode job, it is definitely an
 ugly hack (but one that has worked surprisingly well until now).



 The fact that neutron-grenade is at a 40% fail rate right now (and has
 been for over a week) is not preventing anyone from just rechecking to
 get past it. So I think assuming additional failing grenade tests are
 going to keep folks from landing bugs is probably not a good assumption.
 Making the whole path more complicated for other people to debug is an
 explosion waiting to happen.

 So I do want to take a hard line on doing this right, because the debt
 here is higher than you might think. The partial code was always very
 conceptually fragile, and fails in really funny ways sometimes, because
 of the fact that old is not isolated from new in a way that would be
 expected.


 Assuming the smoke jobs work, I don't think making grenade do multinode
 should take very long. In which case we get a much more realistic upgrade
 situation.



Good news, it looks like both smoke jobs are working (ignoring failures
from https://review.openstack.org/#/c/195748/).



 I -1ed the n-net partial upgrade changes for the same reason.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack

Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-26 Thread Joe Gordon
No

On Fri, Jun 26, 2015 at 10:15 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Jun 24, 2015 at 11:44 AM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Wed, Jun 24, 2015 at 11:03 AM, Sean Dague s...@dague.net wrote:

 On 06/24/2015 01:41 PM, Russell Bryant wrote:
  On 06/24/2015 01:31 PM, Joe Gordon wrote:
 
 
  On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  Back when Nova first wanted to test partial upgrade, we did a
 bunch of
  slightly odd conditionals inside of grenade and devstack to make
 it so
  that if you were very careful, you could just not stop some of
 the old
  services on a single node, upgrade everything else, and as long
 as the
  old services didn't stop, they'd be running cached code in
 memory, and
  it would look a bit like a 2 node worker not upgraded model. It
 worked,
  but it was weird.
 
  There has been some interest by the Nova team to expand what's
 not being
  touched, as well as the Neutron team to add partial upgrade
 testing
  support. Both are great initiatives, but I think going about it
 the old
  way is going to add a lot of complexity in weird places, and not
 be as
  good of a test as we really want.
 
  Nodepool now supports allocating multiple nodes. We have a
 multinode job
  in Nova regularly testing live migration using this.
 
  If we slice this problem differently, I think we get a better
  architecture, a much easier way to add new configs, and a much
 more
  realistic end test.
 
  Conceptually, use devstack-gate multinode support to set up 2
 nodes, an
  all in one, and a worker. Let grenade upgrade the all in one,
 leave the
  worker alone.
 
  I think the only complexity here is the fact that grenade.sh
 implicitly
  drives stack.sh. Which means one of:
 
  1) devstack-gate could build the worker first, then run grenade.sh
 
  2) we make it so grenade.sh can execute in parts more easily, so
 it can
  hand something else running stack.sh for it.'
 
  3) we make grenade understand the subnode for partial upgrade, so
 it
  will run the stack phase on the subnode itself (given
 credentials).
 
  This kind of approach means deciding which services you don't
 want to
  upgrade doesn't require devstack changes, it's just a change of
 the
  services on the worker.
 
  We need a volunteer for taking this on, but I think all the
 follow on
  partial upgrade support will be much much easier to do after we
 have
  this kind of mechanism in place.
 
 
  I think this is a great approach for the future of partial upgrade
  support in grenade. I would like to point out step 0 here, is to get
  tempest passing consistently in multinode.
 
  Currently the neutron job is failing consistently, and nova-network
  fails roughly 10% of the time due
  to https://bugs.launchpad.net/nova/+bug/1462305
  and https://bugs.launchpad.net/nova/+bug/1445569
 
  If multi-node isn't reliable more generally yet, do you think the
  simpler implementation of partial-upgrade testing could proceed?  I've
  already done all of the patches to do it for Neutron.  That way we
 could
  quickly get something in place to help block regressions and work on
 the
  longer-term multinode refactoring without as much time pressure.

 The thing is, these partial service bits are sneakier than one realizes
 over time. There have been all kinds of edge conditions that crept up on
 the n-cpu one that are really subtle because code is running in memory
 on stale versions of dependencies which are no longer on disk. And the
 number of people that have this model in their head is basically down to
 a SPOF.


 I agree; as the author of the current multinode job, it is definitely an
 ugly hack (but one that has worked surprisingly well until now).



 The fact that neutron-grenade is at a 40% fail rate right now (and has
 been for over a week) is not preventing anyone from just rechecking to
 get past it. So I think assuming additional failing grenade tests are
 going to keep folks from landing bugs is probably not a good assumption.
 Making the whole path more complicated for other people to debug is an
 explosion waiting to happen.

 So I do want to take a hard line on doing this right, because the debt
 here is higher than you might think. The partial code was always very
 conceptually fragile, and fails in really funny ways sometimes, because
 of the fact that old is not isolated from new in a way that would be
 expected.


 Assuming the smoke jobs work, I don't think making grenade do multinode
 should take very long. In which case we get a much more realistic upgrade
 situation.



 Good news, it looks like both smoke jobs are working (ignoring failures
 from https://review.openstack.org/#/c/195748/).


So the next step is to teach grenade to do multinode.





 I -1ed the n-net partial upgrade changes for the same reason

Re: [openstack-dev] [Nova] The unbearable lightness of specs

2015-06-25 Thread Joe Gordon
On Thu, Jun 25, 2015 at 1:39 AM, Nikola Đipanov ndipa...@redhat.com wrote:

 On 06/24/2015 10:17 PM, Joe Gordon wrote:
 
 
  On Wed, Jun 24, 2015 at 11:42 AM, Kashyap Chamarthy kcham...@redhat.com
  mailto:kcham...@redhat.com wrote:
 
  On Wed, Jun 24, 2015 at 10:02:27AM -0500, Matt Riedemann wrote:
  
  
   On 6/24/2015 9:09 AM, Kashyap Chamarthy wrote:
   On Wed, Jun 24, 2015 at 02:51:38PM +0100, Nikola Đipanov wrote:
   On 06/24/2015 02:33 PM, Matt Riedemann wrote:
 
  [. . .]
 
   This is one of the _baffling_ aspects -- that a so-called super core
   has to approve specs with *no* obvious valid reasons.  As Jay Pipes
   mentioned once, this indeed seems like a vestigial remnant from old
   times.
   
   FWIW, I agree with others on this thread, Nova should get rid of this
   specific senseless non-process.  At least a couple of cycles ago.
  
   Specs were only added a couple of cycles ago... :)  And they were added to
   fill a gap, which has already been pointed out in this thread.  So if we
   remove them without a replacement for that gap, we regress.
 
  Oops, I didn't mean to say that Specs as a concept should be gone.
  Sorry for poor phrasing.
 
  My question was answered by Joe Gordon with this review:
 
  https://review.openstack.org/#/c/184912/
 
 
 
  A bit more context:
 
  We discussed the very issue of adjusting the review rules for nova-specs
  to give all cores +2 power. But in the end we decided not to.
 

 I was expecting to also read a why here, since I was not at the summit.


The why is below. But I think it's definitely worth revisiting (I am
personally for the patch).



  As someone who does a lot of spec reviews, I take +1s from the right
  people (not always nova-cores) to mean a lot, so much that I regularly
  will simply skim the spec myself before +2ing it. If a subject matter
  expert who I trust +1s a spec, that is usually all I need.
 
  * +1/-1s from the right people have a lot of power on specs. So the
  review burden isn't just on the people with '+W' power.  We may not have
  done a great job of making this point clear.
  * There are many subject matter experts outside nova-core whose vote
  means a lot. For example, PTLs of other projects the spec impacts.
 

 This is exactly the kind of cognitive dissonance I find hard to not get
 upset about :)

 Code is what matters ultimately - the devil _is_ in the details, and I
 can bet you that it is extremely unlikely that a PTL of any other
 project is also going to go and do a detailed review of a feature branch
 in Nova, and have the understanding of the state of the surrounding
 codebase needed to do it properly. That's what's up to the nova
 reviewers to deal with. I too wish Nova code was in a place where it was
 possible to just do architecture, but I think we all agree it's
 nowhere near that.


This goes back to the point(s) that were brought up over and over again in
this thread. I guess we have to agree to disagree.

I think saying 'code' is what ultimately matters is misleading.  'Code' is
the implementation of an idea. If an idea is bad, so what if the code is
good?

I wouldn't ask the PTL of say Keystone to review the implementation of some
idea in nova, but I do want their opinion on an idea that impacts how nova
and keystone interact. Why make someone write a bunch of code, only to find
out that the design itself was fundamentally flawed and they have to go
back to the drawing board and throw everything out? On top of that, now the
reviewer has to mentally decouple the idea and the code (unless the
feature has good documentation explaining that -- sort of like a spec).

That being said, I do think there is definitely room for improvement.


With all due respect to you Joe (and I do have a lot of respect for you)
 - I can't get behind how Nova specs puts process and documents over
 working and maintainable code. I will never be able to get behind that!



So what are you proposing ultimately?  It sounds like the broad consensus
here is: specs have made things better, but there is room for improvement
(this is my opinion as well). Are you saying just drop specs altogether?
Because based on the discussion here, there isn't anything near consensus
for doing that. So if we aren't going to just revert to how things were
before specs, what do you think we should do?



 I honestly think Nova is today worse off then it could have been, just
 because of that mindset. You can't process away the hard things in
 coding, sorry.

 N.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing

Re: [openstack-dev] [Nova] The unbearable lightness of specs

2015-06-24 Thread Joe Gordon
On Wed, Jun 24, 2015 at 10:04 AM, Ed Leafe e...@leafe.com wrote:


 On 06/24/2015 08:38 AM, Nikola Đipanov wrote:

  I urge people to reply to this instead of my original email as the
  writing is more detailed and balanced.

 OK, I've read what others have written, and want to throw in my own
 0.8316 BTC.

 The spec process is invaluable, but that is not to say that it can't be
 improved - a *lot*.

 Other emails have touched on the biggest disconnect in the process: that
 an approved spec magically becomes unapproved on a particular calendar
 date. This makes no sense whatsoever. If it was a good idea yesterday,
 it will almost always be a good idea tomorrow.


I don't think this is an accurate summary of the status quo.

We currently have the fast track process, where if a spec was previously
approved we will quickly re-approve it (I do a git diff against the
previous version and make sure the diff is trivial). By my count, in liberty
we successfully used this procedure around 14 times. So yes things do
magically become unapproved on a somewhat random date, but I don't think
this is realistically a major pain point. (Side note we were able to
approve a lot of those specs before the summit).

Secondly, nova moves fast. For example, in Kilo we had: 4752 files
changed, 299,275 insertions(+), 309,689 deletions(-) [0].  What is amazing
about this is that nova kilo only had 251,965 lines [1].  So specs that we
approved 6 months ago are often not valid anymore; I have seen this happen
time and time again.


[0] git diff --stat 0358c9afb5af6697be88b5b69d96096a59a2148e
2439e97c42d99926bc85ee93799006c380073c8d
[1] git ls-files | xargs wc -l



 The other obvious disconnect is the gap between nova-core and
 nova-spec-core. If someone is knowledgeable enough and trusted enough to
 commit code to Nova, they should be trusted to approve a spec.

 Those two changes aren't controversial, are they? If not, let's do them
 ASAP, and then iterate from there.

  --

  -- Ed Leafe

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] The unbearable lightness of specs

2015-06-24 Thread Joe Gordon
On Wed, Jun 24, 2015 at 11:42 AM, Kashyap Chamarthy kcham...@redhat.com
wrote:

 On Wed, Jun 24, 2015 at 10:02:27AM -0500, Matt Riedemann wrote:
 
 
  On 6/24/2015 9:09 AM, Kashyap Chamarthy wrote:
  On Wed, Jun 24, 2015 at 02:51:38PM +0100, Nikola Đipanov wrote:
  On 06/24/2015 02:33 PM, Matt Riedemann wrote:

 [. . .]

  This is one of the _baffling_ aspects -- that a so-called super core
  has to approve specs with *no* obvious valid reasons.  As Jay Pipes
  mentioned once, this indeed seems like a vestigial remnant from old
  times.
  
  FWIW, I agree with others on this thread, Nova should get rid of this
  specific senseless non-process.  At least a couple of cycles ago.
 
   Specs were only added a couple of cycles ago... :)  And they were added to
   fill a gap, which has already been pointed out in this thread.  So if we
   remove them without a replacement for that gap, we regress.

 Oops, I didn't mean to say that Specs as a concept should be gone.
 Sorry for poor phrasing.

 My question was answered by Joe Gordon with this review:

 https://review.openstack.org/#/c/184912/



A bit more context:

We discussed the very issue of adjusting the review rules for nova-specs to
give all cores +2 power. But in the end we decided not to.

As someone who does a lot of spec reviews, I take +1s from the right people
(not always nova-cores) to mean a lot, so much that I regularly will simply
skim the spec myself before +2ing it. If a subject matter expert who I
trust +1s a spec, that is usually all I need.

* +1/-1s from the right people have a lot of power on specs. So the review
burden isn't just on the people with '+W' power.  We may not have done a
great job of making this point clear.
* There are many subject matter experts outside nova-core whose vote means
a lot. For example, PTLs of other projects the spec impacts.




 --
 /kashyap

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-24 Thread Joe Gordon
On Wed, Jun 24, 2015 at 10:45 AM, Sean Dague s...@dague.net wrote:

 On 06/24/2015 01:31 PM, Joe Gordon wrote:
 
 
  On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  Back when Nova first wanted to test partial upgrade, we did a bunch
 of
  slightly odd conditionals inside of grenade and devstack to make it
 so
  that if you were very careful, you could just not stop some of the
 old
  services on a single node, upgrade everything else, and as long as
 the
  old services didn't stop, they'd be running cached code in memory,
 and
  it would look a bit like a 2 node worker not upgraded model. It
 worked,
  but it was weird.
 
  There has been some interest by the Nova team to expand what's not
 being
  touched, as well as the Neutron team to add partial upgrade testing
  support. Both are great initiatives, but I think going about it the
 old
  way is going to add a lot of complexity in weird places, and not be
 as
  good of a test as we really want.
 
  Nodepool now supports allocating multiple nodes. We have a multinode
 job
  in Nova regularly testing live migration using this.
 
  If we slice this problem differently, I think we get a better
  architecture, a much easier way to add new configs, and a much more
  realistic end test.
 
  Conceptually, use devstack-gate multinode support to set up 2 nodes,
 an
  all in one, and a worker. Let grenade upgrade the all in one, leave
 the
  worker alone.
 
  I think the only complexity here is the fact that grenade.sh
 implicitly
  drives stack.sh. Which means one of:
 
  1) devstack-gate could build the worker first, then run grenade.sh
 
  2) we make it so grenade.sh can execute in parts more easily, so it
 can
  hand something else running stack.sh for it.'
 
  3) we make grenade understand the subnode for partial upgrade, so it
  will run the stack phase on the subnode itself (given credentials).
 
  This kind of approach means deciding which services you don't want to
  upgrade doesn't require devstack changes, it's just a change of the
  services on the worker.
 
  We need a volunteer for taking this on, but I think all the follow on
  partial upgrade support will be much much easier to do after we have
  this kind of mechanism in place.
 
 
  I think this is a great approach for the future of partial upgrade
  support in grenade. I would like to point out step 0 here, is to get
  tempest passing consistently in multinode.
 
  Currently the neutron job is failing consistently, and nova-network
  fails roughly 10% of the time due
  to https://bugs.launchpad.net/nova/+bug/1462305
  and https://bugs.launchpad.net/nova/+bug/1445569

 Grenade is only running tempest smoke, which is a quite small number of
 tests (and not the shelve/unshelve one for instance). I would expect
 its pass rate to be much higher.


One way to find out. Want to get a multinode tempest smoke job running and
see how it looks after running for a few days.


 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-24 Thread Joe Gordon
On Wed, Jun 24, 2015 at 11:02 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Jun 24, 2015 at 11:01 AM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Wed, Jun 24, 2015 at 10:45 AM, Sean Dague s...@dague.net wrote:

 On 06/24/2015 01:31 PM, Joe Gordon wrote:
 
 
  On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  Back when Nova first wanted to test partial upgrade, we did a
 bunch of
  slightly odd conditionals inside of grenade and devstack to make
 it so
  that if you were very careful, you could just not stop some of the
 old
  services on a single node, upgrade everything else, and as long as
 the
  old services didn't stop, they'd be running cached code in memory,
 and
  it would look a bit like a 2 node worker not upgraded model. It
 worked,
  but it was weird.
 
  There has been some interest by the Nova team to expand what's not
 being
  touched, as well as the Neutron team to add partial upgrade testing
  support. Both are great initiatives, but I think going about it
 the old
  way is going to add a lot of complexity in weird places, and not
 be as
  good of a test as we really want.
 
  Nodepool now supports allocating multiple nodes. We have a
 multinode job
  in Nova regularly testing live migration using this.
 
  If we slice this problem differently, I think we get a better
  architecture, a much easier way to add new configs, and a much more
  realistic end test.
 
  Conceptually, use devstack-gate multinode support to set up 2
 nodes, an
  all in one, and a worker. Let grenade upgrade the all in one,
 leave the
  worker alone.
 
  I think the only complexity here is the fact that grenade.sh
 implicitly
  drives stack.sh. Which means one of:
 
  1) devstack-gate could build the worker first, then run grenade.sh
 
  2) we make it so grenade.sh can execute in parts more easily, so
 it can
  hand something else running stack.sh for it.'
 
  3) we make grenade understand the subnode for partial upgrade, so
 it
  will run the stack phase on the subnode itself (given credentials).
 
  This kind of approach means deciding which services you don't want
 to
  upgrade doesn't require devstack changes, it's just a change of the
  services on the worker.
 
  We need a volunteer for taking this on, but I think all the follow
 on
  partial upgrade support will be much much easier to do after we
 have
  this kind of mechanism in place.
 
 
  I think this is a great approach for the future of partial upgrade
  support in grenade. I would like to point out step 0 here, is to get
  tempest passing consistently in multinode.
 
  Currently the neutron job is failing consistently, and nova-network
  fails roughly 10% of the time due
  to https://bugs.launchpad.net/nova/+bug/1462305
  and https://bugs.launchpad.net/nova/+bug/1445569

 Grenade is only running tempest smoke, which is a quite small number of
 tests (and not the shelve/unshelve one for instance). I would expect
  its pass rate to be much higher.


 One way to find out. Want to get a multinode tempest smoke job running
 and see how it looks after running for a few days.


 smoke jobs*, one for nova-net and one for neutron.



Proposal for multinode smoke jobs: https://review.openstack.org/#/c/195259/






 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-24 Thread Joe Gordon
On Wed, Jun 24, 2015 at 11:01 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Jun 24, 2015 at 10:45 AM, Sean Dague s...@dague.net wrote:

 On 06/24/2015 01:31 PM, Joe Gordon wrote:
 
 
  On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  Back when Nova first wanted to test partial upgrade, we did a bunch
 of
  slightly odd conditionals inside of grenade and devstack to make it
 so
  that if you were very careful, you could just not stop some of the
 old
  services on a single node, upgrade everything else, and as long as
 the
  old services didn't stop, they'd be running cached code in memory,
 and
  it would look a bit like a 2 node worker not upgraded model. It
 worked,
  but it was weird.
 
  There has been some interest by the Nova team to expand what's not
 being
  touched, as well as the Neutron team to add partial upgrade testing
  support. Both are great initiatives, but I think going about it the
 old
  way is going to add a lot of complexity in weird places, and not be
 as
  good of a test as we really want.
 
  Nodepool now supports allocating multiple nodes. We have a
 multinode job
  in Nova regularly testing live migration using this.
 
  If we slice this problem differently, I think we get a better
  architecture, a much easier way to add new configs, and a much more
  realistic end test.
 
  Conceptually, use devstack-gate multinode support to set up 2
 nodes, an
  all in one, and a worker. Let grenade upgrade the all in one, leave
 the
  worker alone.
 
  I think the only complexity here is the fact that grenade.sh
 implicitly
  drives stack.sh. Which means one of:
 
  1) devstack-gate could build the worker first, then run grenade.sh
 
  2) we make it so grenade.sh can execute in parts more easily, so it
 can
  hand something else running stack.sh for it.'
 
  3) we make grenade understand the subnode for partial upgrade, so it
  will run the stack phase on the subnode itself (given credentials).
 
  This kind of approach means deciding which services you don't want
 to
  upgrade doesn't require devstack changes, it's just a change of the
  services on the worker.
 
  We need a volunteer for taking this on, but I think all the follow
 on
  partial upgrade support will be much much easier to do after we have
  this kind of mechanism in place.
 
 
  I think this is a great approach for the future of partial upgrade
  support in grenade. I would like to point out step 0 here, is to get
  tempest passing consistently in multinode.
 
  Currently the neutron job is failing consistently, and nova-network
  fails roughly 10% of the time due
  to https://bugs.launchpad.net/nova/+bug/1462305
  and https://bugs.launchpad.net/nova/+bug/1445569

 Grenade is only running tempest smoke, which is a quite small number of
 tests (and not the shelve/unshelve one for instance). I would expect
  its pass rate to be much higher.


 One way to find out. Want to get a multinode tempest smoke job running and
 see how it looks after running for a few days.


smoke jobs*, one for nova-net and one for neutron.




 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-24 Thread Joe Gordon
On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net wrote:

 Back when Nova first wanted to test partial upgrade, we did a bunch of
 slightly odd conditionals inside of grenade and devstack to make it so
 that if you were very careful, you could just not stop some of the old
 services on a single node, upgrade everything else, and as long as the
 old services didn't stop, they'd be running cached code in memory, and
 it would look a bit like a 2 node worker not upgraded model. It worked,
 but it was weird.

 There has been some interest by the Nova team to expand what's not being
 touched, as well as the Neutron team to add partial upgrade testing
 support. Both are great initiatives, but I think going about it the old
 way is going to add a lot of complexity in weird places, and not be as
 good of a test as we really want.

 Nodepool now supports allocating multiple nodes. We have a multinode job
 in Nova regularly testing live migration using this.

 If we slice this problem differently, I think we get a better
 architecture, a much easier way to add new configs, and a much more
 realistic end test.

 Conceptually, use devstack-gate multinode support to set up 2 nodes, an
 all in one, and a worker. Let grenade upgrade the all in one, leave the
 worker alone.

 I think the only complexity here is the fact that grenade.sh implicitly
 drives stack.sh. Which means one of:

 1) devstack-gate could build the worker first, then run grenade.sh

 2) we make it so grenade.sh can execute in parts more easily, so it can
 hand something else running stack.sh for it.'

 3) we make grenade understand the subnode for partial upgrade, so it
 will run the stack phase on the subnode itself (given credentials).

 This kind of approach means deciding which services you don't want to
 upgrade doesn't require devstack changes, it's just a change of the
 services on the worker.

 We need a volunteer for taking this on, but I think all the follow on
 partial upgrade support will be much much easier to do after we have
 this kind of mechanism in place.


I think this is a great approach for the future of partial upgrade support
in grenade. I would like to point out step 0 here, is to get tempest
passing consistently in multinode.

Currently the neutron job is failing consistently, and nova-network fails
roughly 10% of the time due to https://bugs.launchpad.net/nova/+bug/1462305
and https://bugs.launchpad.net/nova/+bug/1445569
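
For reference, the multinode topology Sean describes is selected through
devstack-gate job variables; roughly like this (a sketch -- the variable
names are from memory and may not match project-config exactly):

    # Rough sketch of a multinode partial-upgrade job definition.
    export DEVSTACK_GATE_TOPOLOGY=multinode  # one primary node + one subnode
    export DEVSTACK_GATE_GRENADE=pullup      # upgrade the primary, leave the subnode old
    export DEVSTACK_GATE_TEMPEST=1           # then run tempest (smoke) against the pair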



 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-24 Thread Joe Gordon
On Tue, Jun 23, 2015 at 3:41 AM, Sylvain Bauza sba...@redhat.com wrote:

 Hi team,

 Some discussion occurred over IRC about a bug which was publicly open
 related to TrustedFilter [1]
 I want to take the opportunity to raise my concerns about that specific
 filter, why I dislike it, and how I think we could improve the situation
 (and clarify everyone's thoughts).

 The current situation is this: Nova only checks whether a host is
 compromised when the scheduler is called, ie. only when
 booting/migrating/evacuating/unshelving an instance (well, not exactly all
 the evacuate/live-migrate cases, but let's not discuss that now).
 When the request goes to the scheduler, all the hosts are checked against
 all the enabled filters, and the TrustedFilter makes an external HTTP(S)
 call to the Attestation API service (not handled by Nova) for *each host*
 to see if the host is valid (not compromised) or not.

 To be clear, that's the only in-tree scheduler filter which explicitly
 makes an external call to a separate service that Nova is not managing. I
 can see at least 3 reasons why it's bad:

 #1 : it's a terrible bottleneck for performance, because we're
 IO-blocking N times given N hosts (we're not even multiplexing the HTTP
 requests; see the sketch below)
 #2 : all the filters check an internal Nova state for the host
 (called HostState) except the TrustedFilter, which means that
 conceptually we defer the decision to a 3rd-party engine
 #3 : the Attestation API service becomes a de facto dependency of Nova
 (since it's an in-tree filter) while it's not listed as a dependency and
 thus not gated.
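(To make reason #1 concrete, here is a minimal sketch of the pattern being
criticized - not the actual TrustedFilter code; the endpoint URL and JSON
field names are invented for illustration:)

# Illustrative sketch only, not the real TrustedFilter.
import requests

ATTESTATION_URL = "https://attestation.example.com/v2/hosts"  # assumed

def host_passes(host_state):
    # The scheduler calls this once per candidate host, so N hosts mean
    # N sequential, blocking HTTP round-trips -- and the trust decision
    # comes from a 3rd-party service, not Nova's cached HostState.
    resp = requests.get("%s/%s" % (ATTESTATION_URL, host_state.host),
                        timeout=5)
    return resp.ok and resp.json().get("trust_lvl") == "trusted"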


 All of these reasons could be acceptable if the filter covered the exposed
 use case given in [1] (i.e. "I want to make sure that if my host gets
 compromised, my instances will not be running on that host") but it just
 doesn't work, due to the situation I mentioned above.

 So, given that, here are my thoughts:
 a/ if a host gets compromised, we can just disable its service to prevent
 its election as a valid destination host. There is no need for a
 specialised filter.
 b/ if a host is compromised, we can assume that the instances have to be
 resurrected elsewhere, i.e. we can call a nova evacuate
 c/ checking whether a host is compromised or not is not a Nova
 responsibility, since it's already perfectly done by [2]

 In other words, I consider that security use case analogous to the HA
 use case [3], where we need a 3rd-party tool responsible for periodically
 checking the state of the hosts, and if one is compromised, then calling
 the Nova API to fence the host and evacuate the compromised instances.
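(A minimal sketch of what such a 3rd-party tool could do with
python-novaclient once it decides a host is compromised - the credentials
and URLs are placeholders, and the evacuate arguments depend on whether the
deployment uses shared storage:)

# Sketch: an external monitor (not Nova) detects the compromise, then
# fences the host and evacuates its instances through the Nova API.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0')  # placeholders

def fence_and_evacuate(compromised_host):
    # a/ prevent the scheduler from ever picking this host again
    nova.services.disable(compromised_host, 'nova-compute')
    # b/ resurrect the instances elsewhere
    for server in nova.servers.list(
            search_opts={'host': compromised_host, 'all_tenants': 1}):
        server.evacuate(on_shared_storage=True)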

 Given that, I'm proposing to deprecate TrustedFilter and explicitly plan
 to drop it from the tree in a later cycle:
 https://review.openstack.org/194592


Given people are using this, and it is a negligible maintenance burden, I
think deprecating with the intention of removing is not worth it.

Although it would be very useful to further document the risks of this
filter (live migration, possible performance issues, etc.).




 Thoughts?
 -Sylvain



 [1] https://bugs.launchpad.net/nova/+bug/1456228
 [2] https://github.com/OpenAttestation/OpenAttestation
 [3]
 http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/




Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-24 Thread Joe Gordon
On Wed, Jun 24, 2015 at 11:03 AM, Sean Dague s...@dague.net wrote:

 On 06/24/2015 01:41 PM, Russell Bryant wrote:
  On 06/24/2015 01:31 PM, Joe Gordon wrote:
 
 
  On Tue, Jun 16, 2015 at 9:58 AM, Sean Dague s...@dague.net wrote:
 
   Back when Nova first wanted to test partial upgrade, we did a bunch of
   slightly odd conditionals inside of grenade and devstack to make it so
   that if you were very careful, you could just not stop some of the old
   services on a single node, upgrade everything else, and as long as the
   old services didn't stop, they'd be running cached code in memory, and
   it would look a bit like a 2 node worker not upgraded model. It worked,
   but it was weird.
 
   There has been some interest from the Nova team in expanding what's not
   being touched, as well as from the Neutron team in adding partial
   upgrade testing support. Both are great initiatives, but I think going
   about it the old way is going to add a lot of complexity in weird
   places, and not be as good a test as we really want.
 
   Nodepool now supports allocating multiple nodes. We have a multinode job
   in Nova regularly testing live migration using this.
 
   If we slice this problem differently, I think we get a better
   architecture, a much easier way to add new configs, and a much more
   realistic end test.
 
   Conceptually, use devstack-gate multinode support to set up 2 nodes: an
   all-in-one, and a worker. Let grenade upgrade the all-in-one, and leave
   the worker alone.
 
   I think the only complexity here is the fact that grenade.sh implicitly
   drives stack.sh. Which means one of:
 
   1) devstack-gate could build the worker first, then run grenade.sh
 
   2) we make it so grenade.sh can execute in parts more easily, so it can
   let something else run stack.sh for it.
 
   3) we make grenade understand the subnode for partial upgrade, so it
   will run the stack phase on the subnode itself (given credentials).
 
   This kind of approach means deciding which services you don't want to
   upgrade doesn't require devstack changes; it's just a change to the set
   of services on the worker.
 
   We need a volunteer to take this on, but I think all the follow-on
   partial upgrade support will be much much easier to do after we have
   this kind of mechanism in place.
 
  I think this is a great approach for the future of partial upgrade
  support in grenade. I would like to point out that step 0 here is to get
  tempest passing consistently in multinode.
 
  Currently the neutron job is failing consistently, and nova-network
  fails roughly 10% of the time due to
  https://bugs.launchpad.net/nova/+bug/1462305
  and https://bugs.launchpad.net/nova/+bug/1445569
 
  If multi-node isn't reliable more generally yet, do you think the
  simpler implementation of partial-upgrade testing could proceed?  I've
  already done all of the patches to do it for Neutron.  That way we could
  quickly get something in place to help block regressions and work on the
  longer-term multinode refactoring without as much time pressure.

 The thing is, these partial service bits are sneakier than one realizes
 over time. There have been all kinds of edge conditions that crept up on
 the n-cpu one that are really subtle, because code is running in memory
 on stale versions of dependencies which are no longer on disk. And the
 number of people that have this model in their head is basically down to
 a SPOF.


I agree. As the author of the current multinode job, it is definitely an
ugly hack (but one that has worked surprisingly well until now).



 The fact that neutron-grenade is at a 40% fail rate right now (and has
 been for over a week) is not preventing anyone from just rechecking to
 get past it. So I think assuming additional failing grenade tests are
 going to keep folks from landing bugs is probably not a good assumption.
 Making the whole path more complicated for other people to debug is an
 explosion waiting to happen.

 So I do want to take a hard line on doing this right, because the debt
 here is higher than you might think. The partial code was always very
 conceptually fragile, and fails in really funny ways sometimes, because
 of the fact that old is not isolated from new in a way that would be
 expected.


Assuming the smoke jobs work, I don't think making grenade do multinode
should take very long. In which case we get a much more realistic upgrade
situation.



 I -1ed the n-net partial upgrade changes for the same reason.

 -Sean

 --
 Sean Dague
 http://dague.net


Re: [openstack-dev] Proposal of nova-hyper driver

2015-06-21 Thread Joe Gordon
On Fri, Jun 19, 2015 at 12:55 PM, Peng Zhao p...@hyper.sh wrote:

Hi, all,

 I would like to propose nova-hyper driver:
 https://blueprints.launchpad.net/nova/+spec/nova-hyper.

- What is Hyper?
Put simply, Hyper is a hypervisor-agnostic Docker runtime. It is
similar to Intel's Clear Containers, allowing you to run a Docker image with
any hypervisor.


- Why Hyper driver?
Given its hypervisor nature, Hyper makes it easy to integrate with
OpenStack ecosystem, e.g. Nova, Cinder, Neutron

- How to implement?
Similar to the nova-docker driver. Hyper has a daemon, "hyperd", running on
each physical box. hyperd exposes a set of REST APIs. Integrating Nova with
those APIs would do the job.

- Roadmap
Integrate with Magnum & Ironic.


This sounds like a better fit for something on top of Nova, such as Magnum,
than as a Nova driver.

Nova only supports things that look like 'VMs'. That includes bare metal,
and containers, but it only includes a subset of container features.

Looking at the hyper CLI [0], there are many commands that nova would not
support, such as:

* The pod notion
* exec
* pull


[0] https://docs.hyper.sh/reference/cli.html



 Appreciate your comments and input!
 Thanks, Peng

 -
 Hyper - Make VM run like Container




Re: [openstack-dev] [Sahara] Difference between Sahara and CloudBrak

2015-06-21 Thread Joe Gordon
On Thu, Jun 18, 2015 at 6:29 AM, Chris Buccella chris.bucce...@verilume.com
 wrote:

 I tried (or tried to try) Cloudbreak recently, as I need to deploy a newer
 version of HDP than Sahara supports.

 The interface is slick, but lacks the ability to make some choices about
 your OpenStack installation. The heat template the software generated
 wouldn't work with my deployment, and there wasn't a way to fix it. I think
 they are primarily targeting public cloud providers.


I am curious to know more about what types of issues you saw. It sounds
like this is the exact type of issue that DefCore, and the general push for
better interoperability, should fix.




 I see Sahara vs. CloudBreak like this:

 Sahara - Hadoop distro agnostic deployment for OpenStack
 CloudBreak - Cloud agnostic deployment of HDP


The future releases section of http://sequenceiq.com/cloudbreak/ says it
will soon support more than just the 'Hortonworks Data Platform.'  So
assuming cloudbreak evolves as expected, what is the difference? It sounds
like Sahara will just be a subset of cloudbreak.




 -Chris

 On Mon, Jun 15, 2015 at 12:36 PM, Andrew Lazarev alaza...@mirantis.com
 wrote:

 Hi Jay,

 Cloudbreak is a Hadoop installation tool driven by Hortonworks. The main
 difference with Sahara is the point of control. In the Hortonworks world
 you have Ambari and different platforms (AWS, OpenStack, etc.) to run
 Hadoop. The Sahara point of view: you have an OpenStack cluster and want to
 control everything from Horizon (Hadoop of any vendor, Murano apps, etc.).

 So,
 If you are tied to Hortonworks, spend most of your working time in Ambari,
 and run Hadoop on different types of clouds - choose CloudBreak.
 If you have OpenStack infrastructure and want to run Hadoop on top of it
 - choose Sahara.

 Thanks,
 Andrew.

 On Mon, Jun 15, 2015 at 9:03 AM, Jay Lau jay.lau@gmail.com wrote:

 Hi Sahara Team,

 Just notice that the CloudBreak (
 https://github.com/sequenceiq/cloudbreak) also support running on top
 of OpenStack, can anyone show me some difference between Sahara and
 CloudBreak when both of them using OpenStack as Infrastructure Manager?

 --
 Thanks,

 Jay Lau (Guangya Liu)




Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-21 Thread Joe Gordon
On Tue, Jun 16, 2015 at 9:16 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 In the Murano project we do see a positive impact from the Big Tent model.
 Since Murano was accepted as part of the Big Tent community we have had a
 lot of conversations with potential users. They were driven exactly by the
 fact that Murano is now officially recognized in the OpenStack community.
 It might be a wrong perception, but this is the perception they have.
 Most of the folks we met are enterprises for whom catalog functionality is
 interesting. The problem with enterprises is that their thinking periods
 are often more than 6-9 months. They are not individuals who can start
 contributing overnight. They need some time to make the proper org
 structure changes to organize a development process. The benefit of that is
 more stable and predictable development over time once they start
 contributing.


Sure, I was ignoring the question about potential users, and only looking
at 'development resources'. Although I am interested in seeing how users'
view of being official changes now that it means something very
different (governance-wise) in the big tent.



 Thanks
 Gosha



 On Tue, Jun 16, 2015 at 4:44 AM, Jay Pipes jaypi...@gmail.com wrote:

 You may also find my explanation about the Big Tent helpful in this
 interview with Niki Acosta and Jeff Dickey:

 http://blogs.cisco.com/cloud/ospod-29-jay-pipes

 Best,
 -jay


 On 06/16/2015 06:09 AM, Flavio Percoco wrote:

 On 16/06/15 04:39 -0400, gordon chung wrote:

  i won't speak to whether this confirms/refutes the usefulness of the big
  tent. that said, probably as a by-product of being in non-stop meetings
  with sales/marketing/managers for last few days, i think there needs to be
  better definitions (or better publicised definitions) of what the goals of
  the big tent are. from my experience, they've heard of the big tent and
  they are, to varying degrees, critical of it. one common point is that
  they see it as greater fragmentation to a process that is already too
  slow.


 Not saying this is the final answer to all the questions but at least
 it's a good place to start from:


 https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/the-big-tent-a-look-at-the-new-openstack-projects-governance



 That said, this is great feedback and we may indeed need to do a
 better job to explain the big tent. That presentation, I believe, was
 an attempt to do so.

 Flavio


 just giving my fly-on-the-wall view from the other side.

 On 15/06/2015 6:20 AM, Joe Gordon wrote:

 One of the stated problems the 'big tent' is supposed to solve is:

 'The binary nature of the integrated release results in projects outside
 the integrated release failing to get the recognition they deserve.
 Non-official projects are second- or third-class citizens which can't get
 development resources. Alternative solutions can't emerge in the shadow of
 the blessed approach. Becoming part of the integrated release, which was
 originally designed to be a technical decision, quickly became a
 life-or-death question for new projects, and a political/community
 minefield.' [0]

 Meaning projects should see an uptick in development once they drop their
 second-class citizenship and join OpenStack. Now that we have been living
 in the world of the big tent for several months now, we can see if this
 claim is true.

 Below is a list of the first few projects to join OpenStack after the
 big tent, all of which have now been part of OpenStack for at least two
 months. [1]

 * Magnum - Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015

 When looking at stackalytics [2] for each project, we don't see any
 noticeable change in number of reviews, contributors, or number of commits
 from before and after each project joined OpenStack.

 So what does this mean? At least in the short term moving from Stackforge
 to OpenStack does not result in an increase in development resources (too
 early to know about the long term). One of the three reasons for the big
 tent appears to be unfounded, but the other two reasons hold. The only
 thing I think this information changes is what people's expectations should
 be when applying to join OpenStack.

 [0] https://github.com/openstack/governance/blob/master/resolutions/20141202-project-structure-reform-spec.rst
 [1] Ignoring OpenStackClient since the repos were always in OpenStack, it
 just didn't have a formal home in the governance repo.
 [2] http://stackalytics.com/?module=magnum-group&metric=commits





Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-21 Thread Joe Gordon
On Mon, Jun 15, 2015 at 1:20 PM, Joe Gordon joe.gord...@gmail.com wrote:

 One of the stated problems the 'big tent' is supposed to solve is:

 'The binary nature of the integrated release results in projects outside
 the integrated release failing to get the recognition they deserve.
 Non-official projects are second- or third-class citizens which can't get
 development resources. Alternative solutions can't emerge in the shadow of
 the blessed approach. Becoming part of the integrated release, which was
 originally designed to be a technical decision, quickly became a
 life-or-death question for new projects, and a political/community
 minefield.' [0]

 Meaning projects should see an uptick in development once they drop their
 second-class citizenship and join OpenStack. Now that we have been living
 in the world of the big tent for several months now, we can see if this
 claim is true.

 Below is a list of the first few projects to join OpenStack after the
 big tent, all of which have now been part of OpenStack for at least two
 months.[1]

 * Magnum - Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015

 When looking at stackalytics [2] for each project, we don't see any
 noticeable change in number of reviews, contributors, or number of commits
 from before and after each project joined OpenStack.


Looks like my previous analysis was a bit off. Stackalytics is less useful
for gathering statistics on contributions than I originally thought.  Both
the UX and REST APIs are very limited.

Instead I looked at the number of commits and contributors directly from
git (looking only the main repo for each project, ignoring clients etc).

Of the projects listed above, all of them have the most contributors after
joining OpenStack. In comparison, projects already in OpenStack saw the most
contributors in the two months before the first big tent additions; I think
this is due to the Kilo release.  So it looks like there is a measurable
bump in the number of contributors once a project joins OpenStack (although
I am finding it difficult to draw any conclusion about the number of
commits). But when looking further into the data we see a different story.

* Magnum's large spike in contributors (10 additional contributors), but
when looking at the contributor diff, the number should really be closer
to 5.
* The 5 additional contributors in Murano can be attributed to new
developers from an existing company plus single patches from two
developers about the SQL driver and oslo.

It is hard to read into the jump in contributors after joining the big
tent. But there is definitely something going on, just unclear what it
means over a longer period of time.

data: http://paste.openstack.org/show/310710
code: http://paste.openstack.org/show/310711
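(The counting itself needs nothing more than git log; a minimal sketch of
the approach - the real script is in the paste above, and the repo path and
dates below are placeholders:)

# Sketch: count commits and unique contributors in a time window
# straight from git (illustrative; the actual script is in the paste).
import subprocess

def count_window(repo, since, until):
    out = subprocess.check_output(
        ['git', '-C', repo, 'log', '--no-merges',
         '--since', since, '--until', until, '--format=%ae'])
    emails = out.decode('utf-8').split()
    return len(emails), len(set(emails))

# e.g. Magnum's first two months in OpenStack (dates are illustrative)
commits, contributors = count_window('magnum', '2015-03-24', '2015-05-24')
print(commits, contributors)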

What really matters should be diversity; it is easy to see a bump in
development as companies already involved in a project add more resources
to it. IMHO one of the hopes for a project joining the big tent is to get
new companies to join. Thankfully this is where stackalytics is very
useful.  We can compare contributions by company from kilo and liberty.

* Magnum -- clear jump in corporate diversity for both reviews and commits
(with new companies getting involved)
  * Kilo reviews:
http://stackalytics.com/?project_type=all&metric=marks&module=magnum-group&release=kilo
  * Liberty reviews:
http://stackalytics.com/?project_type=all&metric=marks&module=magnum-group&release=liberty
  * Kilo commits:
http://stackalytics.com/?project_type=all&metric=commits&module=magnum-group&release=kilo
  * Liberty commits:
http://stackalytics.com/?project_type=all&metric=commits&module=magnum-group&release=liberty
* Murano -- Slight decrease in diversity for both commits and reviews
* Congress -- Liberty numbers are too small to draw any conclusions on
* Rally - Slight increase in review diversity (with a new company joining),
commit diversity had no major change (but it is already pretty good).

From the little data we have so far, here are my revised conclusions:

* Joining the big tent doesn't automatically mean new companies will
contribute
* Projects that were fairly diverse when in stackforge get new contributing
companies after joining the big tent.
* At this point it is unclear to me if the inverse  (projects that weren't
very diverse before, don't gain new contributors) is true as well.

So it looks like joining 'OpenStack' sometimes has a clearly measurable
correlation with a project's corporate diversity.

It will be very interesting to re-analyze the numbers once Liberty is
released.



 So what does this mean? At least in the short term moving from
 Stackforge to OpenStack does not result in an increase in
 development resources (too early to know about the long term).  One of the
 three reasons for the big tent appears to be unfounded, but the other two
 reasons hold.  The only thing I think

Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-21 Thread Joe Gordon
On Mon, Jun 15, 2015 at 2:12 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 06/15/2015 06:20 AM, Joe Gordon wrote:

 One of the stated problems the 'big tent' is supposed to solve is:

 'The binary nature of the integrated release results in projects outside
 the integrated release failing to get the recognition they deserve.
 Non-official projects are second- or third-class citizens which can't
 get development resources. Alternative solutions can't emerge in the
 shadow of the blessed approach. Becoming part of the integrated release,
 which was originally designed to be a technical decision, quickly became
 a life-or-death question for new projects, and a political/community
 minefield.' [0]

 Meaning projects should see an uptick in development once they drop
 their second-class citizenship and join OpenStack. Now that we have been
 living in the world of the big tent for several months now, we can see
 if this claim is true.

 Below is a list of the first few projects to join OpenStack after
 the big tent, all of which have now been part of OpenStack for at least
 two months.[1]

 * Magnum - Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015

 When looking at stackalytics [2] for each project, we don't see any
 noticeable change in number of reviews, contributors, or number of
 commits from before and after each project joined OpenStack.

 So what does this mean? At least in the short term moving from
 Stackforge to OpenStack does not result in an increase in development
 resources (too early to know about the long term).  One of the three
 reasons for the big tent appears to be unfounded, but the other two
 reasons hold.


 You have not given enough time to see the effects of the Big Tent, IMHO.
 Lots of folks in the corporate world just found out about it at the design
 summit, frankly.


As I responded in a different email, I tend to agree with you. Although
there are some clear trends towards new contributing companies already.




  The only thing I think this information changes is what people's
 expectations should be when applying to join OpenStack.


 What is your assumption of what people's expectations are when applying to
 join OpenStack?


That joining OpenStack will result in more companies contributing to a
given project.


 Best,
 -jay



Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-21 Thread Joe Gordon
On Thu, Jun 18, 2015 at 5:07 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

   Joe,

  I must respectfully disagree. The statistics you used to indicate that
 Magnum did not benefit from joining the tent are not telling the whole
 story. Facts:


Agreed, after looking at the numbers some more, I don't know if I would
call stackforge second class, but it is definitely not 100% first class.



  1) When we had our Midcycle just before joining OpenStack in March we
 had 24 contributors from 13 affiliations when we joined. You were there,
 remember? We now have 55 contributors from 22 affiliations.


  2) There is a ramp time in the number of reviews and commits that
 newcomers offer. You don't just show up and drop 10 new commits a day. Most
 of our new contributors have just joined the effort. I can tell by their
 behavior that they are gearing up to participate in a more meaningful way.
 They are showing up at team meetings, discussing blueprints, discussing
 issues on the ML, and just starting to work on a few bugs. I am sure that
 commits are a trailing indicator of engagement, not a leading one.


Agreed,  we only have very preliminary numbers right now.



  3) Contributors who participated the most in the last cycle are not
 producing as many reviews this time around. Several of them are working on
 productization strategy and execution to bring related next generation
 cloud services to market. This focus happens downstream, not upstream. The
 top commit contributors this cycle are from HP and Intel, who were only
 minimally involved before we joined OpenStack.


Yup, the new contributions from HP and Intel appear to have a strong
correlation with joining  OpenStack.



  4) As a project proceeds through maturation, commit velocity decreases
 as the complexity of new features increases. We picked the low-hanging
 fruit for Magnum, and now we are focusing on harder work that requires more
 planning and collaboration, and less blasting out of "try this" code. Our
 quality expectations are higher now.

  Joining worked for Magnum.


after revisiting this issue, I tend to agree. But I am still struggling to
go beyond correlation and reach causality, since this could simply be
attributed to Magnum's growth (it already attracted 13 companies in
stackforge). Furthermore, why do you think joining worked for Magnum?
Joining doesn't appear to work for every project.



  When you stay in Stackforge, you have a limited window of time to build
 community, and then it fades. You don't need to look far to find examples
 of that. Our community certainly


I don't think this is unique to stackforge, I think this is true in
OpenStack as well. OpenStack is littered with projects that lack a diverse
set of contributors.


 does treat Stackforge projects as second class. The process of starting
 Magnum reaffirmed that fact for me. I even have reviews where I was
 explicitly told in -1 vote comments that Stackforge was second class and
 that was the point of it. Unfortunately Stackforge's reputation has been
 fouled because of the way we have treated it. I don't think that can be
 fixed. Once you are labeled a tramp, you don't recover from that socially.
 Stackforge is our tramp now, like it or not. Big Tent is our opportunity to
 build an inclusive community right. Let's not go changing it before we have
 given it a fair chance first.


I never intended this email to call for change. I was simply trying to
evaluate one of the big tent motivations, now that we have preliminary
numbers on it.  And my initial analysis was wrong.




 Thanks,

 Adrian

 On Jun 15, 2015, at 3:25 AM, Joe Gordon joe.gord...@gmail.com wrote:

   One of the stated problems the 'big tent' is supposed to solve is:

  'The binary nature of the integrated release results in projects outside
 the integrated release failing to get the recognition they deserve.
 Non-official projects are second- or third-class citizens which can't get
 development resources. Alternative solutions can't emerge in the shadow of
 the blessed approach. Becoming part of the integrated release, which was
 originally designed to be a technical decision, quickly became a
 life-or-death question for new projects, and a political/community
 minefield.' [0]

  Meaning projects should see an uptick in development once they drop
 their second-class citizenship and join OpenStack. Now that we have been
 living in the world of the big tent for several months now, we can see if
 this claim is true.

 Below is a list of the first few projects to join OpenStack after
 the big tent, all of which have now been part of OpenStack for at least two
 months.[1]

 * Magnum - Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015

 When looking at stackalytics [2] for each project, we don't see any
 noticeable change in number of reviews, contributors, or number of commits
 from before and after each

Re: [openstack-dev] Debian already using Python 3.5: please gate on that

2015-06-19 Thread Joe Gordon
On Jun 19, 2015 1:19 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:


 On 06/18/2015 09:48 PM, Doug Hellmann wrote:
  Excerpts from Brian Curtin's message of 2015-06-18 13:17:52 -0500:
  On Thursday, June 18, 2015, Doug Hellmann d...@doughellmann.com
  wrote:
 
  Excerpts from Thomas Goirand's message of 2015-06-18 15:44:17
  +0200:
  Hi!
 
  tl;dr: skip this message, the subject line is enough! :)
 
  As per the subject line, we already have Python 3.5 in Debian
  (AFAICT, from Debian Experimental, in version beta 2). As a
  consequence, we're already running (unit) tests using Python
  3.5. Some have failures: I could see issues in
  ceilometerclient, keystoneclient, glanceclient and more (yes,
  I am planning to report these issues, and we already started
  doing so). As Python 3.4 is still the default interpreter
  for /usr/bin/python3, that's currently fine, but it soon won't
  be.
 
  All this to say: if you are currently gating on Python 3,
  please start slowly adding support for 3.5, as we're planning
  to switch to that for Debian 9 (aka Stretch). I believe
  Ubuntu will follow (as the Python core packages are imported
  from Debian).
 
  3.5 is still in beta. What's the schedule for an official
  release from the python-dev team?
 
 
  3.5 final is planned for September 13:
  https://www.python.org/dev/peps/pep-0478/
 
  Based on that, I am confident in sticking with our plan of gating
  using 3.4 for now and keeping an eye on the 3.5 packages being
  built by Debian and Canonical.
 

 +1. Putting the burden on projects to adopt beta releases of Python is
 unreasonable.

Agreed. I also found the tone of the original email misleading.


 Ihar



Re: [openstack-dev] [Stackalytics] Complementary projects

2015-06-19 Thread Joe Gordon
On Jun 19, 2015 3:56 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 06/19/2015 09:19 AM, Ilya Shakhat wrote:

 Some reasons for having complementary projects in Stackalytics:
   * to compare efforts in other communities with OpenStack - just to
 feed curiosity about what is larger, OpenStack or Kubernetes
   * to spark interest in contributing to projects that OpenStack depends
 on, like OVS and Ansible
   * to keep an eye on commercial interest and know who is sponsoring
 nearby technologies

 I agree that adding complementary projects was a unilateral decision, and
 I am ready to remove them in the community version if the TC decides so.


 Hi Ilya,

 Personally, I quite like having those complementary projects in
Stackalytics, for the reasons you note above.


Although it's not very useful for 'reviews'

http://stackalytics.com/?project_type=complementary

I also wonder how well maintained the email mappings are for complementary
projects.

 Best,
 -jay




Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-15 Thread Joe Gordon
On Sun, Jun 14, 2015 at 10:46 PM, Thomas Goirand z...@debian.org wrote:

 On 06/11/2015 11:31 PM, Nikhil Manchanda wrote:
  Hi Thomas:
 
  I just checked and I don't see suds as a requirement for trove.
  I don't think it should be a requirement for the trove debian package,
  either.
 
  Thanks,
  Nikhil

 Hi,

 I fixed the package and removed the Suggests: python-suds in both Trove &
 Ironic. Now there's still the issue in:
 - nova


Nova itself doesn't depend on suds anymore. Oslo.vmware has a suds
dependency, but that is only needed if you are using the vmware virt driver
in nova.

So nova's vmware driver depends on suds (it may be suds-jurko these days),
but not nova in general.



 - cinder
 - oslo.vmware

 It'd be nice to do something about them. As I mentioned, I'll do the
 packaging work for anything that will replace it if needed.

 FYI, I filed bugs:
 https://bugs.launchpad.net/oslo.vmware/+bug/1465015
 https://bugs.launchpad.net/nova/+bug/1465016
 https://bugs.launchpad.net/cinder/+bug/1465017

 Cheers,

 Thomas Goirand (zigo)




[openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Joe Gordon
One of the stated problems the 'big tent' is supposed to solve is:

'The binary nature of the integrated release results in projects outside
the integrated release failing to get the recognition they deserve.
Non-official projects are second- or third-class citizens which can't get
development resources. Alternative solutions can't emerge in the shadow of
the blessed approach. Becoming part of the integrated release, which was
originally designed to be a technical decision, quickly became a
life-or-death question for new projects, and a political/community
minefield.' [0]

Meaning projects should see an uptick in development once they drop their
second-class citizenship and join OpenStack. Now that we have been living
in the world of the big tent for several months now, we can see if this
claim is true.

Below is a list of the first few projects to join OpenStack after the
big tent, all of which have now been part of OpenStack for at least two
months.[1]

* Magnum - Tue Mar 24 20:17:36 2015
* Murano - Tue Mar 24 20:48:25 2015
* Congress - Tue Mar 31 20:24:04 2015
* Rally - Tue Apr 7 21:25:53 2015

When looking at stackalytics [2] for each project, we don't see any
noticeable change in number of reviews, contributors, or number of commits
from before and after each project joined OpenStack.

So what does this mean? At least in the short term moving from Stackforge
to OpenStack does not result in an increase in development resources (too
early to know about the long term).  One of the three reasons for the big
tent appears to be unfounded, but the other two reasons hold.  The only
thing I think this information changes is what people's expectations should
be when applying to join OpenStack.

[0]
https://github.com/openstack/governance/blob/master/resolutions/20141202-project-structure-reform-spec.rst
[1] Ignoring OpenStackClient since the repos were always in OpenStack, it
just didn't have a formal home in the governance repo.
[2] http://stackalytics.com/?module=openstackclient-group&metric=commits
http://stackalytics.com/?module=magnum-group&metric=commits


Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-15 Thread Joe Gordon
On Mon, Jun 15, 2015 at 4:16 PM, Thomas Goirand z...@debian.org wrote:

 On 06/15/2015 11:31 AM, Joe Gordon wrote:
  Nova itself doesn't depend on suds anymore.

 A quick grep still shows references to suds (that's in Kilo, but the
 master branch shows similar results):


Your git repo is out of date.


https://github.com/openstack/nova/search?utf8=%E2%9C%93&q=suds



 etc/nova/logging_sample.conf:qualname = suds


this doesn't actually require suds.

We can remove this line



 nova/tests/unit/test_hacking.py: def
 fake_suds_context(calls={}):

 nova/tests/unit/virt/vmwareapi/test_vim_util.py:with
 stubs.fake_suds_context(calls):

 nova/tests/unit/virt/vmwareapi/stubs.py:def fake_suds_context(calls=None):

 nova/tests/unit/virt/vmwareapi/stubs.py:Generate a suds client
 which automatically mocks all SOAP method calls.

 nova/tests/unit/virt/vmwareapi/stubs.py:
 mock.patch('suds.client.Client', fake_client),

 nova/tests/unit/virt/vmwareapi/test_driver_api.py:import suds

 nova/tests/unit/virt/vmwareapi/test_driver_api.py:
 mock.patch.object(suds.client.Client,

 nova/tests/unit/virt/vmwareapi/fake.py:Fake factory class for the
 suds client.

 nova/tests/unit/virt/vmwareapi/fake.py:Initializes the suds
 client object, sets the service content

 nova/virt/vmwareapi/vim_util.py:import suds


this was removed in https://review.openstack.org/#/c/181554/



 nova/virt/vmwareapi/vim_util.py:for k, v in
 suds.sudsobject.asdict(obj).iteritems():

 nova/config.py:   'qpid=WARN', 'sqlalchemy=WARN',
 'suds=INFO',


We missed this, so here is a patch https://review.openstack.org/#/c/191795/



 test-requirements.txt:suds>=0.4


  Oslo.vmware has a suds
  dependency, but that is only needed if you are using the vmware virt
  driver in nova.

 It's used in unit tests, no?


as explained above, nope.



  So nova's vmware driver depends on suds (it may be suds-jurko these
  days)

 As I wrote, suds-jurko isn't acceptable either, as it's also not
 maintained upstream.


Agreed, we have more work to do.



  but not nova in general.

 If we don't want suds, we don't want suds. Not just it's only in some
 parts kind of answer. Especially, it should appear in
 tests-requirements.txt and in vmwareapi unit tests. Don't you think?

 Cheers,

 Thomas Goirand (zigo)




Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Joe Gordon
On Fri, Jun 12, 2015 at 7:13 PM, Sean Dague s...@dague.net wrote:

 On 06/12/2015 01:17 AM, Salvatore Orlando wrote:
  It is however interesting that both lock wait timeouts and missing
  savepoint errors occur in operations pertaining to the same table -
  securitygroups in this case.
  I wonder if the switch to pymysql has not actually uncovered some other
  bug in Neutron.
 
  I have no opposition to a revert, but since this will affect most
  projects, it's probably worth finding some time to investigate what is
  triggering this failure when sqlalchemy is backed by pymysql before
  doing that.

 Right, we knew that the db driver would move some bugs around because
 we're no longer blocking python processes on db access (so there used to
 be a pseudo synchronization point before you ever got to the database).

 My feeling is this should be looked into before it is straight reverted
 (are jobs failing beyond Rally?). There are a number of benefits with


A quick look at logstash.openstack.org shows some of the stacktraces are
happening in other neutron jobs as well.


 the new driver, and we can't get to python3 with the old one.


Agreed, pymysql is not to blame; it looks like we have hit some neutron
issues.  So let's try to fix neutron. Just because neutron reverts the
default sql connector doesn't mean operators won't end up trying pymysql.
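
(For anyone who wants to try it locally: with SQLAlchemy the driver is just
the dialect portion of the connection URL, so switching a deployment to
pymysql is a one-line config change - a sketch with placeholder
credentials:)

# Sketch: driver selection via the SQLAlchemy URL (the same string goes
# in oslo.db's [database] connection option). Credentials are placeholders.
from sqlalchemy import create_engine

# C driver (MySQL-Python); blocks the whole process under eventlet:
engine_old = create_engine('mysql://user:secret@127.0.0.1/neutron')

# pure-Python driver; cooperates with eventlet and supports Python 3:
engine_new = create_engine('mysql+pymysql://user:secret@127.0.0.1/neutron')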



 Rally failing is also an indicator that just such an implicit lock was
 behavior that was depended on before, because it will be sending a bunch
 of similar operations all at once as a kind of stress test. It would
 tend to expose issues like this first.


Glad to see us catch these issues early.



 -Sean

 --
 Sean Dague
 http://dague.net



Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-10 Thread Joe Gordon
On Wed, Jun 10, 2015 at 5:24 PM, Ian Cordasco ian.corda...@rackspace.com
wrote:



 On 6/10/15, 09:12, Thomas Goirand z...@debian.org wrote:

 On 06/10/2015 12:25 PM, Dave Walker wrote:
  The initial set of core reviewers was seeded by representatives of
  distros and vendors to get their input on viability in distros.
 
 Really? James, were you made core on the requirements?
 
 I once tried to follow the requirements repo, though it moves too fast,
 and sometimes, it takes less than 2 days to get a new thing approved.
 Also, this repository has more than just dependencies, there's a lot of
 noise due to changes in projects.txt. I also don't mind any
 upgrade for whatever oslo or client libs.
 
 I'd love to have an easier way to voice my opinion without having all
 the noise of that repo in my inbox. I'm not sure if there's a solution
 to this problem though.
 
 Thomas

 You should be able to subscribe to a subset of the changes in gerrit. I
 don't recall if it only works for directories, but you should be able to
 make something work for *requirements.txt. The docs are easy to find on
 Google or DDG.


Query to see only *requirements.txt changes:

  project:openstack/requirements  file:^.*requirements.txt is:open

how to subscribe to a subset of changes:

  https://review.openstack.org/Documentation/user-notify.html



 Cheers,
 Ian



Re: [openstack-dev] [all] Switching to SQLAlchemy 1.0.x

2015-06-09 Thread Joe Gordon
On Tue, Jun 9, 2015 at 8:08 PM, Mike Bayer mba...@redhat.com wrote:



 On 6/9/15 9:26 AM, Thomas Goirand wrote:

 Hi,

 The python-sqlalchemy package has been uploaded to Debian Experimental,
 and is about to be uploaded to Debian Unstable. So I wonder what's the
 state of the project regarding upgrading SQLA.

 Maybe Mike can tell us what kind of issues we may run into? How much work
 will it be to switch to SQLA 1.0.x for Liberty? Is it possible to be
 compatible with both 0.9.x and 1.0.x (which would be the best way forward)?


 The short answer is that there are no supported use cases that have been
 intentionally changed in any backwards-incompatible way in 1.0, and all
 OpenStack code should be able to move from 0.9.x to 1.0.x without
 any change.


Just posted a patch to test this out:  https://review.openstack.org/189847



 I run SQLAlchemy's master against a small subset of Openstack tests
 continuously, including Nova DB API, Keystone, all of oslo.db and Neutron's
 migration tests, and there was nothing that needed changing as we went
 along.   I've also run devstack against SQLAlchemy 1.0 without problems
 though I don't have a lot of openstack-user-fu so I didn't stress test it
 too much at that level.

 It's my expectation that nothing in Openstack should have to change to
 work with SQLAlchemy 1.0 - the kinds of things that change tend to be
 subtle things related to odd use cases and newer features, usually along
 the lines of applications that may have been relying upon some behavior
 that was undefined, that then changes it's behavior in some way.

 An example is that we had a user who was running a query of the form
 session.query(SomeObject).with_parent(SomeParent(id=None)), e.g. trying
 to find objects that *didn't* have a parent by using a transient Parent
 with id=None - totally unexpected way of doing that, and it wasn't even
 working for that user as it came up with an = NULL comparison that isn't
 even right, *but* when SQLA 1.0 came around it started leaking an internal
 symbol into the query, and *then* it became noticeable.  That's the kind of
 thing that breaks with new SQLAlchemy versions these days.   We had a lot
 of those this time around, and the vast majority of them were logged as
 regressions which were fixed and added to testing.   You can see those by
 browsing versions 1.0.1 - 1.0.5 at
 http://docs.sqlalchemy.org/en/rel_1_0/changelog/changelog_10.html.
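
(For comparison, the well-defined way to ask for objects *without* a parent
is an explicit IS NULL criterion rather than a transient parent object; a
self-contained sketch with generic placeholder models:)

# Sketch: comparing a many-to-one relationship to None renders an
# IS NULL criterion on the foreign key. Model names are placeholders.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship

Base = declarative_base()

class SomeParent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

class SomeObject(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))
    parent = relationship(SomeParent)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

orphans = session.query(SomeObject).filter(SomeObject.parent == None)  # noqa: E711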

 We never intentionally make *any* backwards incompatible change in just
 one major version without warnings being emitted in the previous version,
 and the warnings usually involve urging the user to set a flag to use the
 old way if they're going to upgrade; that is, we at least always try to
 add flags to keep an old behavior in place if at all possible.   I've not
 seen anything in Openstack that would be sensitive to this kind of issue
 for 1.0.

 So for Openstack, we would mostly worry about code that is doing things
 oddly in some unexpected way, however all the Openstack code I've seen
 tends to be very ORM centric and uses the ORM very conservatively so I
 don't anticipate any problems.   I would note that some silently-/ quasi-
 failing use cases for query.update() and query.delete()  now raise
 exceptions in 1.0.   They both emit warnings in 0.9 but I just checked and
 apparently one of these warnings is only in the as-yet unreleased 0.9.10.
  I've just added an extra migration note for one of these which appeared to
 be missing in the migration document (as of this writing it should be up on
 RTD within an hour).

 That said, the document which tries to capture everything that might be
 surprising is at
 http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_10.html.




 Cheers,

 Thomas Goirand (zigo)



Re: [openstack-dev] [release][oslo] oslo.middleware release 2.0.0 (liberty)

2015-06-09 Thread Joe Gordon
On Tue, Jun 9, 2015 at 6:14 PM, d...@doughellmann.com wrote:

 We are excited to announce the release of:

 oslo.middleware 2.0.0: Oslo Middleware library


And this broke the gate, but the fix is already working its way through the
system.

https://bugs.launchpad.net/grenade/+bug/1463478



 This release is part of the liberty release series.

 With source available at:

 http://git.openstack.org/cgit/openstack/oslo.middleware

 For more details, please see the git log history below and:

 http://launchpad.net/oslo.middleware/+milestone/2.0.0

 Please report issues through launchpad:

 http://bugs.launchpad.net/oslo.middleware

 Changes in oslo.middleware 1.3.0..2.0.0
 ---

 bcbfceb Remove oslo namespace package

 Diffstat (except docs and test files)
 -

 oslo/__init__.py  |  13 -
 oslo/middleware/__init__.py   |  28 --
 oslo/middleware/base.py   |  13 -
 oslo/middleware/catch_errors.py   |  13 -
 oslo/middleware/correlation_id.py |  13 -
 oslo/middleware/debug.py  |  13 -
 oslo/middleware/request_id.py |  13 -
 oslo/middleware/sizelimit.py  |  13 -
 setup.cfg |   4 --
 15 files changed, 429 deletions(-)





Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-06-03 Thread Joe Gordon
On Mon, Jun 1, 2015 at 6:11 AM, Flavio Percoco fla...@redhat.com wrote:

 On 01/06/15 13:30 +0100, John Garbutt wrote:

 On 1 June 2015 at 13:10, Flavio Percoco fla...@redhat.com wrote:

 On 01/06/15 11:57 +0100, John Garbutt wrote:


 On 26/05/15 13:54 -0400, Nikhil Komawar wrote:


 On 5/26/15 12:57 PM, Jesse Cook wrote:
We also had some hallway talk about putting the v1 and v2 APIs on top of
the v3 API. This forces faster adoption, verifies supportability via v1 and
v2 tests, increases supportability of v1 and v2 APIs, and pushes out the
need to kill v1 API.

 Let's discuss more as time and development progresses on that possibility.
 v3 API should stay EXPERIMENTAL for now as that would help us understand
 use-cases across programs as it gets adopted by various code-bases. Putting
 v1/v2 on top of v3 would be tricky for now as we may have breaking changes
 with code being relatively-less stable due to narrow review domain.




 I actually think we'd benefit more from having V2 on top of V3 than
 not doing it. I'd probably advocate to make this M material rather
 than L but I think it'd be good.

 I think regardless of what we do, I'd like to kill v1 as it has a
 sharing model that is not secure.



 Given v1 has lots of users, killing it will be hard.

 If you maintained v1 support as v1 on top of v3 (or v2 I guess), could
 you not do something like permanently disable the bad bits of the
 API?


 I agree it'll be hard but, at least for v1, I believe it should
 happen. It has some security issues (mostly related to image sharing)
 that are not going to be fixed there.


 OK, I guess you mean this one:
 https://wiki.openstack.org/wiki/OSSN/1226078

  The idea being, users listing their images, and updating image
 metadata via v1, don't get broken during the evolution?


 The feedback we got at the summit (even from OPs) was that we could go
 ahead, mark it as deprecated, give a deprecation period and then turn
 it off.


 I am surprised by that reply, but OK.

  FWIW, moving Nova from glance v1 to glance v2, without breaking Nova's
 public API, will require someone getting a big chunk of glance v1 on
 top of glance v2.


 AFAIK, the biggest issue right now is changed-since, which is
 something Glance doesn't have in v2 but is exposed through Nova's
 image API.
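
(One obvious, if inefficient, way a compatibility layer could emulate
changed-since on top of v2 is client-side filtering on updated_at - purely
an illustration, not a proposal from this thread; the glance client object
is assumed to be already constructed:)

# Sketch: emulate v1-style changed-since over a v2 image listing by
# filtering client-side on each image's updated_at timestamp.
import datetime

def images_changed_since(glance, since):
    # 'since' is a datetime; v2 image records carry an ISO 8601
    # 'updated_at' string such as '2015-06-01T12:00:00Z'.
    for image in glance.images.list():
        updated = datetime.datetime.strptime(
            image['updated_at'], '%Y-%m-%dT%H:%M:%SZ')
        if updated >= since:
            yield image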


 Thats the big unanswered question that needs fixing in any spec we
 would approve around this effort.


 I don't have an answer myself right now.


  I'm happy you brought this up. What are Nova's plans to adopt Glance's
 v2 ? I heard there was a discussion and something along the lines of
 creating a library that wraps both APIs came up.


 We don't have anyone who has stepped up to work on it at this point.

 I think the push you made around this effort in kilo is the latest
 update on this:

 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/remove-glanceclient-wrapper.html

 It would be great if we could find a glance/nova CPL to drive this effort.


 So, unless the library is proposed and some magic happens, is it safe
 to assume that the above spec is still valid and that folks can work
 on it?


  Where can I find more info about this?


 I suspect it will be included on our liberty priority TODO list, that
 I am yet to write, but I expect to appear here:
 http://specs.openstack.org/openstack/nova-specs/

  I really think nova should put some more effort into helping this
 happen. The work I did [0] - all red now, I swear it wasn't - during
 Kilo didn't get enough attention even before we decided to push it
 back. Not a complaint, really. However, I'd love to see some
 cross-project efforts on making this happen.
 [0] https://review.openstack.org/#/c/144875/


 As there is no one to work on the effort, we haven't made it a
 priority for liberty.

 If someone is able to step up to help complete the work, I can do my
 best to help get that effort reviewed, by raising its priority, just
 as we did in Kilo.


 IIRC, the patch wasn't far from being ready. The latest patch-sets
 relied on the gate to run some tests and the biggest issue I had -
 still have - is that this script[0] didn't even use glanceclient but
 direct HTTP calls. The issue, to be precise, is that I didn't have
 ways to test it locally, which made the work painful.

 If there's a way to do it - something that has already been asked -
 it'd be great.

 This said, I'm not sure how much time I'll have for this but I'm
 trying to find someone that could help out.


 https://review.openstack.org/#/c/144875/30/plugins/xenserver/xenapi/etc/xapi.d/plugins/glance,cm


 I suspect looking at how to slowly move towards v2, rather than going
 for a big bang approach, will make this easier to land. That and
 solving how we implement changed-since, if that's not available in
 the newer glance APIs. Honestly, part of me wonders about skipping v2,
 and going straight to v3.


 Regardless, I think we should enable people to run on a v2 only
 

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Joe Gordon
On Tue, Jun 2, 2015 at 4:12 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 3 June 2015 at 10:34, Jeremy Stanley fu...@yuggoth.org wrote:
  On 2015-06-02 21:59:34 + (+), Ian Cordasco wrote:
  I like this very much. I recall there was a session at the summit
  about this that Thierry and Kyle led. If I recall correctly, the
  discussion mentioned that it wasn't (at this point in time)
  possible to use gerrit the way you describe it, but perhaps people
  were mistaken?
  [...]
 
  It wasn't an option at the time. What's being conjectured now is
  that with custom Prolog rules it might be possible to base Gerrit
  label permissions on strict file subsets within repos. It's
  nontrivial, as of yet I've seen no working demonstration, and we'd
  still need the Infrastructure Team to feel comfortable supporting it
  even if it does turn out to be technically possible. But even before
  going down the path of automating/enforcing it anywhere in our
  toolchain, projects interested in this workflow need to try to
  mentally follow the proposed model and see if it makes social sense
  for them.
 
  It's also still not immediately apparent to me that this additional
  complication brings any substantial convenience over having distinct
  Git repositories under the control of separate but allied teams. For
  example, the Infrastructure Project is now past 120 repos with more
  than 70 core reviewers among those. In a hypothetical reality where
  those were separate directory trees within a single repository, I'm
  not coming up with any significant ways it would improve our current
  workflow. That said, I understand other projects may have different
  needs and challenges with their codebase we just don't face.

 We *really* don't need a technical solution to a social problem.

 If someone isn't trusted enough to know the difference between
 project/subsystemA and project/subsystemB, nor trusted enough to not
 commit changes to subsystemB, pushing stuff out to a new repo, or
 in-repo ACLs are not the solution. The solution is to work with them
 to learn to trust them.

 Further, there are plenty of cases where the 'subsystem' is
 cross-cutting, not vertical - and in those cases it's much, much
 harder to just describe file boundaries where the thing is.

 So I'd like us to really get our heads around the idea that folk are
 able to make promises ('I will only commit changes relevant to the DB
 abstraction/transaction management') and honour them. And if they
 don't - well, remove their access. *Even with* CD in the picture,
 that's a wholly acceptable risk IMO.


With gerrit's great REST APIs it would be very easy to generate a report to
detect if someone breaks their promise and commits something outside of a
given sub-directory.
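
Something along these lines, say (an untested sketch against Gerrit's
documented /changes/ REST endpoint; the query options and field names
should be double-checked):

    # Flag merged changes by an owner that touch files outside an
    # agreed-upon subtree.
    import json
    import requests

    GERRIT = 'https://review.openstack.org'

    def broken_promises(owner, project, allowed_prefix):
        resp = requests.get(
            GERRIT + '/changes/',
            params={'q': 'status:merged owner:%s project:%s'
                         % (owner, project),
                    'o': ['CURRENT_REVISION', 'CURRENT_FILES']})
        changes = json.loads(resp.text[4:])  # strip the ")]}'" prefix
        for change in changes:
            rev = change['revisions'][change['current_revision']]
            files = [f for f in rev['files'] if f != '/COMMIT_MSG']
            outside = [f for f in files
                       if not f.startswith(allowed_prefix)]
            if outside:
                yield change['_number'], outside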



 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud



[openstack-dev] [all] inactive projects

2015-05-29 Thread Joe Gordon
Hi All,

It turns out we have a few repositories that have been inactive for almost
a year and appear to be dead, but don't have any documentation reflecting
that. (And a lot more that have been dead for over half a year.)


format: days since last updated - name

List of stackforge that have not been updated in almost a year.

816 stackforge/MRaaS
732 stackforge/occi-os
713 stackforge/bufunfa
437 stackforge/milk

We also have one openstack project that appears to be inactive and possibly
dead:

171 openstack/kite


If you know if one of these repos is not maintained, please update the
README to reflect the state of the repo. For example:
https://git.openstack.org/cgit/stackforge/fuel-ostf-plugin/tree/README.rst

Source code: http://paste.openstack.org/show/246039/
Raw output: http://paste.openstack.org/show/246060


Re: [openstack-dev] [neutron] The new Lieutenant system in Neutron

2015-05-29 Thread Joe Gordon
On Thu, May 28, 2015 at 6:55 AM, Kyle Mestery mest...@mestery.com wrote:

 As part of the team's continuing effort to scale development in Neutron,
 we have officially merged the Lieutenant patch [1]. As the codebase
 continues to grow, this is an attempt to scale the code review load so we
 can grow new core reviewers while maintaining the model of trust required
 to successfully run the project. Please note we've selected some initial
 horizontal Lieutenant positions (you can see them here [2]). This could
 change as we experiment with things, but for now we've tried to select
 areas which make the most sense immediately.

 Thanks to all who contributed feedback on the patchset which enabled this.
 As with everything, we'll experiment and reevaluate where things are
 throughout Liberty and do a postmortem at the next Summit to understand if
 we need to adopt the model.


Great job, I look forward to seeing this model adopted elsewhere.

This sounds a lot like another large-scale project we all know of...


 has long since grown to a size where no single
developer could possibly inspect and select every patch unassisted.  The
way the ... developers have addressed this growth is through the use of
a lieutenant system built around a chain of trust.

The ... code base is logically broken down into a set of subsystems:
...  Most subsystems have a designated maintainer, a developer
who has overall responsibility for the code within that subsystem.  These
subsystem maintainers are the gatekeepers (in a loose way) for the portion
of the  they manage.

I wonder if we can adopt the rolling development model as well?

https://www.kernel.org/doc/Documentation/development-process/2.Process



 Thanks,
 Kyle

 [1] https://review.openstack.org/#/c/178846/
 [2]
 http://docs.openstack.org/developer/neutron/policies/core-reviewers.html



Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-28 Thread Joe Gordon
On Thu, May 28, 2015 at 9:41 AM, Devananda van der Veen 
devananda@gmail.com wrote:

 Hi all,

 tl;dr;

 At the summit, the Ironic team discussed the challenges we've had with
 the current release model and came up with some ideas to address them.
 I had a brief follow-up conversation with Doug and Thierry, but I'd
 like this to be discussed more openly and for us (the Ironic dev
 community) to agree on a clear plan before we take action.

 If Ironic moves to a release:independent model, it shouldn't have any
 direct effect on other projects we integrate with -- we will continue
 to follow release:at-6mo-cycle-end -- but our processes for how we get
 there would be different, and that will have an effect on the larger
 community.

 Longer version...

 We captured some notes from our discussion on Thursday afternoon's
 etherpad:
 https://etherpad.openstack.org/p/liberty-ironic-scaling-the-dev-team

 Which I've summarized below, and mixed in several themes which didn't
 get captured on the 'pad but were discussed somewhere, possibly in a
 hallway or on Friday.

 Current challenges / observations:
 - six weeks' feature freeze is not actually having the desired
 stabilizing effect
 - the post-release/pre-summit slump only makes that worse
 - many folks stop reviewing during this time because they know their
 own features won't land, and instead focus their time downstream
 - this creates pressure to land features which aren't ready, and
 leaves few people to find bugs, write docs, and generally prepare the
 release during this window

 The alternative we discussed:
 - use feature branches for risky / large work, keeping total # of
 branches small, and rebasing them regularly on master
 - keep trunk moving quickly for smaller / less risky / refactoring changes
 - slow down for a week or two before a release, but dont actually
 freeze master
 - cut releases when new features are available
 - OpenStack coordinated releases are taken from latest independent release
 - that release will then get backports  stable maintenance, other
 independent releases don't

 We think this will accomplish a few things:
 - make the developer experience better by being more consistent, thus
 keeping developers engaged year-round and increasing the likelihood
 they'll find and fix bugs
 - reduce stress on core reviewers since there's no crunch time at
 the end of a cycle
 - allow big changes to bake in a feature branch, rather than in a
 series of gerrit patches that need to be continually re-reviewed and
 cherry-picked to test them.
 - allow operators who wish to use Ironic outside of OpenStack to
 consume feature releases more rapidly, while still consuming approved
 releases instead of being forced to deploy from trunk


 For reference, Michael has posted a tracking change to the governance
 repo here: https://review.openstack.org/#/c/185203/

 Before Ironic actually makes the switch, I would like us to discuss
 and document the approach we're going to take more fully, and get
 input from other teams on this approach. Often, the devil is in the
 details - and, for instance, I don't yet understand how we'll fit this
 approach into SemVer, or how this will affect our use of launchpad to
 track features (maybe it means we stop doing that?).


Sounds like a great plan.



 Input appreciated.

 Thanks,
 Devananda



Re: [openstack-dev] [ceilometer][all] Scalable metering

2015-05-27 Thread Joe Gordon
On Tue, May 26, 2015 at 6:03 PM, gordon chung g...@live.ca wrote:

 hi Tim,

 we're still doing some investigation but we're tracking/discussing part of
 the polling load issue here: https://review.openstack.org/#/c/185084/

 we're open to any ideas -- especially from nova api et al experts.


So I agree doing lots of naive polling can lead to issues on even the
fastest of APIs, but were any bugs about this opened against nova? At the
very least nova should investigate why the specific calls are so slow and
see what we can do to make them at least a little faster and lighter
weight.





 cheers,
 gord


 
  From: tim.b...@cern.ch
  To: openstack-dev@lists.openstack.org
  Date: Tue, 26 May 2015 17:45:37 +
  Subject: [openstack-dev] [ceilometer][all] Scalable metering
 
 
 
 
  We had a good discussion at the summit regarding ceilometer scaling.
  Julien has written up some of the items discussed in
 
 https://julien.danjou.info/blog/2015/openstack-summit-liberty-vancouver-ceilometer-gnocchi
  and there is work ongoing in the storage area for scalable storage of
  ceilometer data using gnocchi.
 
 
 
  I’d like community input on the other scalability concern raised during
  the event, namely the load on other services when ceilometer is
  enabled. From the blog, “Ceilometer hits various endpoints in OpenStack
  that are poorly designed, and hitting those endpoints of Nova or other
  components triggers a lot of load on the platform.”.
 
 
 
  I would welcome suggestions on how to identify the potential changes in
  the OpenStack projects and improve the operator experience when
  deploying metering.
 
 
 
  Tim
 
 
 
 
 
 
 
 
 
 


Re: [openstack-dev] [Nova] Using depends-on for patches which require an approved spec

2015-05-27 Thread Joe Gordon
On Tue, May 26, 2015 at 8:45 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Fri, May 22, 2015 at 02:57:23PM -0700, Michael Still wrote:
  Hey,
 
  it would be cool if devs posting changes for nova which depend on us
  approving their spec could use Depends-On to make sure their code
  doesn't land until the spec does.

 Does it actually bring any benefit ?  Any change for which there is
 a spec is already supposed to be tagged with 'Blueprint: foo-bar-wiz'
 and nova core devs are supposed to check the blueprint is approved
 before +A'ing it.  So also adding a Depends-on just feels redundant
 to me, and so is one more hurdle for contributors to remember to
 add. If we're concerned people forget the Blueprint tag, or forget
 to check blueprint approval, then we'll just have same problem with
 depends-on - people will forget to add it, and cores will forget
 to check the dependant change. So this just feels like extra rules
 for no gain and extra pain.


I think it does have a benefit. Procedurally -2ing a spec's implementation
patches commonly signals to reviewers not to review the patch at all (a -2
looks scary). If there were a Depends-On instead, no scary -2 would be
needed, and we also wouldn't need to hunt down the -2er and ask them to
remove it (which can be delayed due to timezones). Anything that reduces
the number of procedural -2s we need is a good thing IMHO. But that doesn't
mean we should require folks to do this; we can try it out on a few patches
and see how it goes.
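
For those who haven't used it: it is just a footer in the implementation
patch's commit message pointing at the spec review (the Change-Id below is
made up):

    Implement foo-bar-wiz

    Blueprint: foo-bar-wiz
    Depends-On: I1f84c3e5b2a0553db344c36bd5f1a3e04b4b02c6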



 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|



Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 11:23 AM, Davanum Srinivas dava...@gmail.com
wrote:

 Joe,

 Given that the code once lived in nova, and the team has spent
 quite a bit of time turning it into a library which at last count was
 adopted by at least 6 projects, i'd like to give the team some credit.


Agreed, they have done a great job. I was just pointing out a lot of
OpenStack libs don't use semver's 0.x.x clause much.



 openstack/ceilometer/test-requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/cinder/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/congress/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/glance/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/glance_store/test-requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/nova/test-requirements.txt:oslo.vmware>=0.11.1,!=0.13.0  # Apache-2.0

 Shit happens! The team that works on oslo.vmware overlaps nova and
 others too. There were several solutions that came up quickly as well. We
 can't just say nothing should ever break, or that we should not use 0.x.x;
 then we can never make progress. This is not going to get any better
 either with the big tent coming up. All that matters is how quickly we
 can recover and move on with our collective sanity intact. Let's work
 on that in addition as well. I'd also want to give some more say to
 the actual folks who are contributing and working on the code in this
 specific discussion.

 Anyway, with the global-requirements block of 0.13.0, nova should
 unclog and we'll try to get something out soon in 0.13.1 to keep
 @haypo's python34 effort going as well.


Thanks! I think it would be good to move to 1.x.x soon to show that the API
is stable. But then again we do have a lot of other libraries that are
still at 0.x.x, so maybe we should look at that more holistically.



 +1000 to release fewer unexpectedly incompatible libraries and
 continue working on improving how we handle dependencies in general.
 i'd like to hear specific things we can do that we are not doing both
 for libraries under our collective care as well as things we use from
 the general python community.


For openstack libraries that have a fairly limited number of consumers, we
can test the source of the lib against target unit test suites, in addition
to a devstack run. So oslo.vmware would have a job running source
oslo.vmware against the nova py27 unit tests.

As for in general, lifeless is cooking up a plan.



 thanks,
 dims

 On Wed, May 27, 2015 at 1:52 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Wed, May 27, 2015 at 12:54 AM, Gary Kotton gkot...@vmware.com
 wrote:
 
  Hi,
  I prefer the patched posted by Sabari. The patch has two changes:
 
  It fixes unit tests
  In the event that an instance spawn fails, it catches an exception to
  warn the admin that the guestId may be invalid. The only degradation
  may be that the warning will no longer be there. I think that the admin
  can get this information from the logged exception too.
 
 
 
  So this breakage takes us into some strange waters.
 
  oslo.vmware is at version 0.x.x which according to semver [0] means
 Major
  version zero (0.y.z) is for initial development. Anything may change at
 any
  time. The public API should not be considered stable. If that is
 accurate,
  then nova should not be using oslo.vmware, since we shouldn't use an
  unstable library in production. If we are treating the API as stable then
  semver says we need to rev the major version (MAJOR version when you
 make
  incompatible API changes).
 
  What I am trying to say is, I don't know how you can say the nova unit
  tests are 'wrong.' Either nova using oslo.vmware is 'wrong' or
  oslo.vmware breaking the API is 'wrong'.
 
  With OpenStack being so large and having so many dependencies (many of
 them
  openstack owned), we should focus on making sure we release fewer
  unexpectedly incompatible libraries and continue working on improving
 how we
  handle dependencies in general (lifeless has a big arch he is working on
  here AFAIK). So I am not in favor of the nova unit test change as a fix
  here.
 
 
  [0] http://semver.org/
 
 
  Thanks
  Gary
 
  From: Sabari Murugesan sabari.b...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Wednesday, May 27, 2015 at 6:20 AM
  To: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)
 
  Matt
 
  I posted a patch https://review.openstack.org/#/c/185830/1 to fix the
 nova
  tests and make it compatible with the oslo.vmware 0.13.0 release. I am
 fine
  with the revert and g-r blacklist as oslo.vmware broke the semver but
 we can
  also consider this patch as an option.
 
  Thanks
  Sabari
 
 
 
  On Tue, May 26, 2015 at 2:53 PM, Davanum Srinivas dava...@gmail.com
  wrote:
 
  Vipin, Gary,
 
  Can you please accept the revert or figure out the best way to handle
  this?
 
  thanks

Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 3:20 AM, Davanum Srinivas dava...@gmail.com wrote:

 Victor,

 Nice, yes, Joe was the liaison with Nova so far. Yes, please go ahead
 and add your name in the wiki for Nova as i believe Joe is winding
 down the oslo liaison as well.
 https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo



Yup, thank you Victor!




 thanks,
 dims

 On Wed, May 27, 2015 at 5:12 AM, Victor Stinner vstin...@redhat.com
 wrote:
  Hi,
 
  By the way, who is the oslo liaison for nova? If there is nobody, I
 would
  like to take this position.
 
  Victor
 
  On 25/05/2015 at 18:45, Ghe Rivero wrote:
 
  My focus on the Ironic project has been decreasing over the last cycles,
  so it's about time to relinquish my position as an oslo-ironic liaison so
  new contributors can take over it and help ironic be the vibrant
  project it is.
 
  So long, and thanks for all the fish,
 
  Ghe Rivero
  --
  Pinky: Gee, Brain, what do you want to do tonight?
  The Brain: The same thing we do every night, Pinky—try to take over the
  world!
 
.''`.  Pienso, Luego Incordio
  : :' :
  `. `'
 `- www.debian.org  www.openstack.com
 
  GPG Key: 26F020F7
  GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7
 
 
 



 --
 Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 10:31 AM, Mike Bayer mba...@redhat.com wrote:



 On 5/27/15 3:06 AM, Kekane, Abhishek wrote:

  Hi Devs,



 Each OpenStack service sends a request ID header with HTTP responses. This
 request ID can be useful for tracking down problems in the logs. However,
 when operation crosses service boundaries, this tracking can become
 difficult, as each service has its own request ID. Request ID is not
 returned to the caller, so it is not easy to track the request. This
 becomes especially problematic when requests are coming in parallel. For
 example, glance will call cinder for creating image, but that cinder
 instance may be handling several other requests at the same time. By using
 same request ID in the log, user can easily find the cinder request ID that
 is same as glance request ID in the g-api log. It will help
 operators/developers to analyse logs effectively.



 To address this issue we have come up with following solutions:



 Solution 1: Return tuple containing headers and body from respective
 clients (also favoured by Joe Gordon)

 Reference:
 https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst


 I like solution 1 as well as solution 3 at the same time, in fact.
 There's usefulness to being able to easily identify a set of requests as
 all part of the same operation as well as being able to identify a call's
 location in the hierarchy.

 In fact does solution #1 make the hierarchy apparent ?   I'd want it to do
 that, e.g. if call A calls B, which calls C and D, I'd want to know that
 the dependency tree is A-B-(C, D), and not just a bucket of (A, B, C,
 D).


#1 should make the hierarchy apparent. That IMHO is the biggest pro for #1.






Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 12:06 AM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:

  Hi Devs,



 Each OpenStack service sends a request ID header with HTTP responses. This
 request ID can be useful for tracking down problems in the logs. However,
 when operation crosses service boundaries, this tracking can become
 difficult, as each service has its own request ID. Request ID is not
 returned to the caller, so it is not easy to track the request. This
 becomes especially problematic when requests are coming in parallel. For
 example, glance will call cinder for creating image, but that cinder
 instance may be handling several other requests at the same time. By using
 same request ID in the log, user can easily find the cinder request ID that
 is same as glance request ID in the g-api log. It will help
 operators/developers to analyse logs effectively.


Thank you for writing this up.




 To address this issue we have come up with following solutions:



 Solution 1: Return tuple containing headers and body from respective
 clients (also favoured by Joe Gordon)

 Reference:
 https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst



 Pros:

 1. Maintains backward compatibility

 2. Effective debugging/analysing of the problem as both calling service
 request-id and called service request-id are logged in same log message

 3. Build a full call graph

 4. The end user will be able to know the request-id of the request and can
 approach the service provider to learn the cause of failure of a
 particular request.



 Cons:

 1. The changes need to be done first in cross-projects before making
 changes in clients

 2. Applications which are using python-*clients need to make the required
 changes (check the return type of the response)


Additional cons:

3. Cannot simply search all logs (ala logstash) using the request-id
returned to the user without any post processing of the logs.
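
To make solution 1 concrete, the clients would hand back something roughly
like this (names are made up for illustration, not actual client code):

    # Return the x-openstack-request-id alongside the body, so the
    # caller can log "my request X spawned downstream request Y" and a
    # full call graph can be rebuilt from the logs.
    import requests

    def show_image(endpoint, token, image_id):
        resp = requests.get('%s/v2/images/%s' % (endpoint, image_id),
                            headers={'X-Auth-Token': token})
        return resp.json(), resp.headers.get('x-openstack-request-id')

    # caller side:
    # image, glance_req_id = show_image(endpoint, token, image_id)
    # LOG.info('request %s triggered glance request %s',
    #          context.request_id, glance_req_id)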






 Solution 2:  Use thread local storage to store 'x-openstack-request-id'
 returned from headers (suggested by Doug Hellmann)

 Reference:
 https://review.openstack.org/#/c/156508/9/specs/log-request-id-mappings.rst



 Add a new method ‘get_openstack_request_id’ to return this request-id to
 the caller.



 Pros:

 1. Doesn’t break compatibility

 2. Minimal changes are required in client

 3. Build a full call graph



 Cons:

 1. Malicious user can send long request-id to fill up the disk-space,
 resulting in potential DoS

 2. Changes need to be done in all python-*clients

 3. The last request-id must be flushed out on each subsequent call;
 otherwise the wrong request-id will be returned to the caller
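
 Roughly, this would look like the following inside a client (an
 illustrative sketch only; the method name mirrors the proposal):

     # Stash the last-seen request-id in thread-local storage; each
     # response overwrites (flushes) the previous value, which is why
     # con #3 above matters.
     import threading

     _local = threading.local()

     def _record_request_id(resp):
         # call after every HTTP round trip made by the client
         _local.request_id = resp.headers.get('x-openstack-request-id')

     def get_openstack_request_id():
         return getattr(_local, 'request_id', None)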





 Solution 3: Unique request-id across OpenStack Services (suggested by
 Jamie Lennox)

 Reference:
 https://review.openstack.org/#/c/156508/10/specs/log-request-id-mappings.rst



 Get 'x-openstack-request-id' from the auth plugin and add it to the request
 headers. If the 'x-openstack-request-id' key is present in the request
 header, the same one will be used downstream; otherwise a new one will be
 generated.



 Dependencies:

 https://review.openstack.org/#/c/164582/ - Include request-id in auth
 plugin and add it to request headers

 https://review.openstack.org/#/c/166063/ - Add session-object for glance
 client

 Add 'UserAuthPlugin' and '_ContextAuthPlugin' same as nova in cinder and
 neutron





 Pros:

 1. Using the same request-id for a request crossing multiple service
 boundaries will help operators/developers identify problems quickly

 2. Required changes only in keystonemiddleware and oslo_middleware
 libraries. No changes are required in the python client bindings or
 OpenStack core services



 Cons:

 1. As 'x-openstack-request-id' in the request header will be visible to
 the user, it is possible to send the same request-id for multiple
 requests, which in turn could create more problems when troubleshooting
 the cause of a failure, as the request_id middleware will not check for
 its uniqueness within the scope of the running OpenStack service.

 2. Having the same request ID for all services for a single user API call
 means you cannot generate a full call graph. For example if a single user's
 nova API call produces 2 calls to glance you want to be able to
 differentiate the two different calls.
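
 For comparison, solution 3 boils down to middleware like this (a
 hypothetical shape, not the actual keystonemiddleware/oslo_middleware
 code):

     # Reuse an incoming x-openstack-request-id if present, otherwise
     # mint one, so a single id follows the call across service
     # boundaries (at the cost of the call-graph issue noted above).
     import uuid

     def ensure_request_id(headers):
         req_id = headers.get('x-openstack-request-id')
         if not req_id:
             req_id = 'req-' + str(uuid.uuid4())
             headers['x-openstack-request-id'] = req_id
         return req_id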





 During the Liberty design summit, I had a chance to discuss these
 designs with some of the core members like Doug, Joe Gordon, Jamie
 Lennox, etc. But we were not able to come to any conclusion on the final
 design or to learn the community's direction on how they want to use this
 request-id effectively.



 However IMO, solution 1 sounds more useful, as the debugger is able to
 build the full call graph, which can be helpful for analysing gate
 failures effectively, and the end user will be able to know their
 request-id and can track their request.



 I request all community members to go through these solutions and let us
 know which is the appropriate way to improve the logs by logging request-id.





 Thanks & Regards

Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 12:54 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 I prefer the patched posted by Sabari. The patch has two changes:

1. It fixes unit tests
2. In the event that an instance spawn fails, it catches an
exception to warn the admin that the guestId may be invalid. The only
degradation may be that the warning will no longer be there. I think that
the admin can get this information from the logged exception too.



So this breakage takes us into some strange waters.

oslo.vmware is at version 0.x.x which according to semver [0] means Major
version zero (0.y.z) is for initial development. Anything may change at any
time. The public API should not be considered stable. If that is accurate,
then nova should not be using oslo.vmware, since we shouldn't use an
unstable library in production. If we are treating the API as stable then
semver says we need to rev the major version (MAJOR version when you make
incompatible API changes).

What I am trying to say is, I don't know how you can say the nova unit
tests are 'wrong.' Either nova using oslo.vmware is 'wrong' or oslo.vmware
breaking the API is 'wrong'.

With OpenStack being so large and having so many dependencies (many of them
openstack owned), we should focus on making sure we release fewer
unexpectedly incompatible libraries and continue working on improving how
we handle dependencies in general (lifeless has a big arch he is working on
here AFAIK). So I am not in favor of the nova unit test change as a fix
here.


[0] http://semver.org/


 Thanks
 Gary

   From: Sabari Murugesan sabari.b...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, May 27, 2015 at 6:20 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

   Matt

  I posted a patch https://review.openstack.org/#/c/185830/1 to fix the
 nova tests and make it compatible with the oslo.vmware 0.13.0 release. I am
 fine with the revert and g-r blacklist as oslo.vmware broke the semver but
 we can also consider this patch as an option.

  Thanks
 Sabari



 On Tue, May 26, 2015 at 2:53 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 Vipin, Gary,

 Can you please accept the revert or figure out the best way to handle
 this?

 thanks,
 dims

 On Tue, May 26, 2015 at 5:41 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 5/26/2015 4:19 PM, Matt Riedemann wrote:
 
 
 
  On 5/26/2015 9:53 AM, Davanum Srinivas wrote:
 
  We are gleeful to announce the release of:
 
  oslo.vmware 0.13.0: Oslo VMware library
 
  With source available at:
 
   http://git.openstack.org/cgit/openstack/oslo.vmware
 
  For more details, please see the git log history below and:
 
   http://launchpad.net/oslo.vmware/+milestone/0.13.0
 
  Please report issues through launchpad:
 
   http://bugs.launchpad.net/oslo.vmware
 
  Changes in oslo.vmware 0.12.0..0.13.0
  -------------------------------------
 
  5df9daa Add ToolsUnavailable exception
  286cb9e Add support for dynamicProperty
  7758123 Remove support for Python 3.3
  11e7d71 Updated from global requirements
  883c441 Remove run_cross_tests.sh
  1986196 Use suds-jurko on Python 2
  84ab8c4 Updated from global requirements
  6cbde19 Imported Translations from Transifex
  8d4695e Updated from global requirements
  1668fef Raise VimFaultException for unknown faults
  15dbfb2 Imported Translations from Transifex
  c338f19 Add NoDiskSpaceException
  25ec49d Add utility function to get profiles by IDs
  32c61ee Add bandit to tox for security static analysis
  f140b7e Add SPBM WSDL for vSphere 6.0
 
  Diffstat (except docs and test files)
  -------------------------------------
 
  bandit.yaml|  130 +++
  openstack-common.conf  |2 -
  .../locale/fr/LC_MESSAGES/oslo.vmware-log-error.po |9 -
  .../locale/fr/LC_MESSAGES/oslo.vmware-log-info.po  |3 -
  .../fr/LC_MESSAGES/oslo.vmware-log-warning.po  |   10 -
  oslo.vmware/locale/fr/LC_MESSAGES/oslo.vmware.po   |   86 +-
  oslo.vmware/locale/oslo.vmware.pot |   48 +-
  oslo_vmware/api.py |   10 +-
  oslo_vmware/exceptions.py  |   13 +-
  oslo_vmware/objects/datastore.py   |6 +-
  oslo_vmware/pbm.py |   18 +
  oslo_vmware/service.py |2 +-
  oslo_vmware/wsdl/6.0/core-types.xsd|  237 +
  oslo_vmware/wsdl/6.0/pbm-messagetypes.xsd  |  186 
  oslo_vmware/wsdl/6.0/pbm-types.xsd |  806
 ++
  oslo_vmware/wsdl/6.0/pbm.wsdl  | 1104
  
  oslo_vmware/wsdl/6.0/pbmService.wsdl   |   16 +
  requirements-py3.txt   |   27 -
  requirements.txt   |8 +-
  setup.cfg 

Re: [Openstack-operators] OpenSource ELK configurations

2015-05-27 Thread Joe Gordon
On Mon, May 18, 2015 at 3:11 PM, Tom Fifield t...@openstack.org wrote:

 There's some stuff in the osops repo:

 https://github.com/osops/tools-logging

 Please contribute if you can!

 Regards,

 Tom


 On 18/05/15 14:56, Anand Kumar Sankaran wrote:

 Hi all

 Is there a set of open source ELK configurations available?  (logstash
 filters, templates, kibana dashboards).  I see a github repository from
 Godaddy, wondering if there is a standard set that is used.


OpenStack has an ELK stack running at logstash.openstack.org to help debug
test jobs.

The logstash filters can be found at:
http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/logstash/indexer.conf.erb



 Thanks.

 —
 anand




Re: [Openstack-operators] OpenSource ELK configurations

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 12:27 PM, Joseph Bajin josephba...@gmail.com
wrote:

 I think those from the Infra Project are a great start, but I do think
 they are missing a lot.


I am sure they are, and we would love contributions to make them better.
The filters from the infra project are used every day by developers to
hunt down issues. And the more similar the developer and operator
experience is here the easier it will be for the two groups to talk about
the same problems.



 Instead of breaking down the message into parts, it just breaks down the
 type (INFO, DEBUG, etc) and then makes the rest of the message greedy.
 That doesn't help searching or graphing or anything like that (or at least
 makes it more difficult over time). At least that is what I have seen.


One of the reasons for this is that most of the log messages are very
unstructured. But I have a hunch we can do better than what we have today.
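
For example, the difference is roughly between grok patterns like these
(sketched from memory, not our actual indexer config):

    # "greedy": only the timestamp and level become fields
    %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}

    # more structured: pid and module are searchable/graphable too
    %{TIMESTAMP_ISO8601:timestamp} %{NUMBER:pid} %{LOGLEVEL:level} %{NOTSPACE:module} %{GREEDYDATA:message}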



 We are using a slightly tweaked version of the godaddy scripts.  That
 seems to give us a good amount of detail that we can search and filter on.
 I'll see if I can post my versions of them to the repo.


Are you talking about https://github.com/osops/tools-logging
https://github.com/osops/tools-logging/pull/3 or the infra repo? It would
be great to have both developer and operator communities collaborate on
this and improve everyone's ELK configuration.



 On Wed, May 27, 2015 at 2:00 PM, Mark Voelker mvoel...@vmware.com wrote:

 Sounds like a fine thing to point people to…thanks Joe.

 https://github.com/osops/tools-logging/pull/3

 At Your Service,

 Mark T. Voelker


  On May 27, 2015, at 1:12 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Mon, May 18, 2015 at 3:11 PM, Tom Fifield t...@openstack.org wrote:
  There's some stuff in the osops repo:
 
  https://github.com/osops/tools-logging
 
  Please contribute if you can!
 
  Regards,
 
  Tom
 
 
  On 18/05/15 14:56, Anand Kumar Sankaran wrote:
  Hi all
 
  Is there a set of open source ELK configurations available?  (logstash
  filters, templates, kibana dashboards).  I see a github repository from
  Godaddy, wondering if there is a standard set that is used.
 
  OpenStack has an ELK stack running at logstash.openstack.org to help
 debug test jobs.
 
  The logstash filters can be found at:
 http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/logstash/indexer.conf.erb
 
 
  Thanks.
 
  —
  anand
 
 


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 4:27 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  I'd say, tools that utilize OpenStack, like the knife openstack plugin,
 are not something that you would probably go to the catalog to find. And
 also, the recipes that you would use with knife would not be specific to
 OpenStack in any way, so you would just be duplicating the config
 management system's own catalog in the OpenStack catalog, which would be
 error prone. Duplicating all the chef recipes, docker containers,
 puppet stuff, and so on is a lot of work...


I am very much against duplicating things, including chef recipes that use
the openstack plugin for knife. But we can still easily point to external
resources from apps.openstack.org. In fact we already do (
http://apps.openstack.org/#tab=heat-templates&asset=Lattice).



 The vision I have for the Catalog (I can be totally wrong here, lets
 please discuss) is a place where users (non computer scientists) can visit
 after logging into their Cloud, pick some app of interest, hit launch, and
 optionally fill out a form. They then have a running piece of software,
 provided by the greater OpenStack Community, that they can interact with,
 and their Cloud can bill them for. Think of it as the Apple App Store for
 OpenStack.  Having a reliable set of deployment engines (Murano, Heat,
 whatever) involved is critical to the experience I think. Having too many
 of them though will mean it will be rare to have a cloud that has all of
 them, restricting the utility of the catalog. Too much choice here may
 actually be a detriment.


Calling this a catalog, while it sounds accurate, is confusing since
keystone already has a catalog. Naming things is unfortunately a
difficult problem.

I respectfully disagree with this vision. I mostly agree with the first
part about it being somewhere users can go to find applications that can be
quickly deployed on OpenStack (note all the gotchas that Monty described
here). The part I disagree with is limiting the deployment engines to those
invented here. Even if we have 100 deployment engines on apps.openstack.org,
it would be very easy for a user to filter by the deployment engines they
use so I do not agree with your concern about too many choices here being a
detriment (after all isn't OpenStack about choices?).

Secondly, IMHO the notion that 'if it wasn't invented here we shouldn't
support it' [0] is a dangerous one that results in us constantly
re-inventing the wheel while alienating the larger developer community by
saying their solutions are no good and you should use the OpenStack version
instead.


OpenStack isn't a single 'thing'; it is a collection of 'things', and users
should be able to pick and choose which components they want and which
components they want to get from elsewhere.

[0] http://en.wikipedia.org/wiki/Not_invented_here


If chef, or whatever other configuration management system, became
 multitenant aware, and integrated into OpenStack and provided by the Cloud
 providers, then maybe it would fit into the app store vision?


I am not sure why this matters?  As a dependency you simply state chef, and
either require users to provide it or tell them to use a chef heat
template, glance image, etc.



 Thanks,
 Kevin
 --
 *From:* Joe Gordon [joe.gord...@gmail.com]
 *Sent:* Wednesday, May 27, 2015 3:20 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [new][app-catalog] App Catalog next steps



 On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo ca...@mirantis.com
 wrote:

 I want to start off by thanking everyone who joined us at the first
 working session in Vancouver, and those folks who have already started
 adding content to the app catalog. I was happy to see the enthusiasm
 and excitement, and am looking forward to working with all of you to
 build this into something that has a major impact on OpenStack
 adoption by making it easier for our end users to find and share the
 assets that run on our clouds.


  Great job. This is very exciting to see, I have been wanting something
 like this for some time now.



 The catalog: http://apps.openstack.org
 The repo: https://github.com/stackforge/apps-catalog
 The wiki: https://wiki.openstack.org/wiki/App-Catalog

 Please join us via IRC at #openstack-app-catalog on freenode.

 Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
 Serg Melikyan.

 I’ve started a doodle poll to vote on the initial IRC meeting
 schedule, if you’re interested in helping improve and build up this
 catalog please vote for the day/time that works best and get involved!
 http://doodle.com/vf3husyn4bdkui8w

 At the summit we managed to get one planning session together. We
 captured that on etherpad[1], but I’d like to highlight here a few of
 the things we talked about working on together in the near term:

 -More information around asset dependencies (like clarifying
 requirements for Heat templates

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Joe Gordon
On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo ca...@mirantis.com
wrote:

 I want to start off by thanking everyone who joined us at the first
 working session in Vancouver, and those folks who have already started
 adding content to the app catalog. I was happy to see the enthusiasm
 and excitement, and am looking forward to working with all of you to
 build this into something that has a major impact on OpenStack
 adoption by making it easier for our end users to find and share the
 assets that run on our clouds.


Great job. This is very exciting to see, I have been wanting something like
this for some time now.



 The catalog: http://apps.openstack.org
 The repo: https://github.com/stackforge/apps-catalog
 The wiki: https://wiki.openstack.org/wiki/App-Catalog

 Please join us via IRC at #openstack-app-catalog on freenode.

 Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
 Serg Melikyan.

 I’ve started a doodle poll to vote on the initial IRC meeting
 schedule, if you’re interested in helping improve and build up this
 catalog please vote for the day/time that works best and get involved!
 http://doodle.com/vf3husyn4bdkui8w

 At the summit we managed to get one planning session together. We
 captured that on etherpad[1], but I’d like to highlight here a few of
 the things we talked about working on together in the near term:

 -More information around asset dependencies (like clarifying
 requirements for Heat templates or Glance images for instance),
 potentially just by providing better guidance in what should be in the
 description and attributes sections.
 -With respect to the assets that are listed in the catalog, there’s a
 need to account for tagging, rating/scoring, and a way to have
 comments or a forum for each asset so potential users can interact
 outside of the gerrit review system.
 -Supporting more resource types (Sahara, Trove, Tosca, others)


What about expanding the scope of the application catalog to any
application that can run *on* OpenStack, versus the implied scope of
applications that can be deployed *by* (heat, murano, etc.) OpenStack and
*on* OpenStack services (nova, cinder etc.)? This would mean adding room
for Ansible roles that provision openstack resources [0]. And more
generally it would reinforce the point that there is no 'blessed' method of
deploying applications on OpenStack: you can use tools developed
specifically for OpenStack or tools developed elsewhere.


[0]
https://github.com/ansible/ansible-modules-core/blob/1f99382dfb395c1b993b2812122761371da1bad6/cloud/openstack/os_server.py


 -Discuss using glance artifact repository as the backend rather than
 flat YAML files
 -REST API, enable searching/sorting, this would ease native
 integration with other projects
 -Federated catalog support (top level catalog including contents from
 sub-catalogs)
 - I’ll be working with the OpenStack infra team to get the server and
 CI set up in their environment (though that work will not impact the
 catalog as it stands today).


I am pleased to see moving this to OpenStack Infra is a high priority.

A quick nslookup of http://apps.openstack.org shows it is currently hosted
on linode at http://nb-23-239-6-45.fremont.nodebalancer.linode.com/. And
last I checked linode isn't OpenStack powered.  apps.openstack.org is a
great example of the type of application that should be easy to deploy with
OpenStack, since as far as I can tell it just needs a web server and that
is it. So with my OpenStack developer hat on, why did you go with linode
and not any one of the OpenStack based public clouds [1]? If OpenStack is
not a good solution for workloads like this, then it would be great to know
what needs work.


[1] https://www.openstack.org/marketplace/public-clouds/


 There were a ton of great ideas that came up and it was clear there
 was WAY more to discuss than we could accomplish in one short session
 at the summit.  I’m looking forward to continuing the conversation
 here on the mailing list, on IRC, and in Tokyo as well!

 [1] https://etherpad.openstack.org/p/YVR-app-catalog-plans

 -Christopher



Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-22 Thread Joe Gordon
On Thu, May 21, 2015 at 8:13 AM, Sahid Orentino Ferdjaoui 
sahid.ferdja...@redhat.com wrote:

 On Fri, May 08, 2015 at 09:13:59AM -0400, Doug Hellmann wrote:
  Excerpts from Joe Gordon's message of 2015-05-07 17:43:06 -0700:
   On May 7, 2015 2:37 AM, Sahid Orentino Ferdjaoui 
   sahid.ferdja...@redhat.com wrote:
   
Hi,
   
The primary point of this expected discussion around asynchronous
communication is to optimize performance by reducing latency.
   
For instance, the design used in Nova and probably other projects allows
asynchronous operations in two ways.
   
1. When communicating between services
2. When communicating with the database
   
1 and 2 are close since they use the same API, but I prefer to keep a
distinction here since the high-level layer is not the same.
   
From the Oslo Messaging point of view we currently have two methods to
invoke an RPC:

  Cast and Call: the first one is non-blocking and will invoke an RPC
without waiting for any response, while the second will block the
process and wait for the response.

The aim is to add a new method which will return, without blocking the
process, an object - let's call it a Future - which will provide some
basic methods to wait for and get a response at any time.
   
The benefit for Nova will come at a higher level:

1. When communicating between services it will not be necessary to
   block the process, and this free time can be used to execute some
   other computations.
  
   Isn't this what the use of green threads (and eventlet) is supposed to
   solve? Assuming my understanding is correct, and we can fix any issues
   without adding async oslo.messaging, then adding yet another async
   pattern seems like a bad thing.

 The aim is not to be library specific and to avoid adding different
 custom patterns to the code base each time we want something
 non-blocking.
 We can let the experienced community working on oslo.messaging
 maintain that part, and as Doug said oslo can use different
 executors so we can avoid the requirement of a specific library.


I see no problem with nova depending on a specific async library. I am not
keen on adding yet another column to our support matrix.



  Yes, this is what the various executors in the messaging library do,
  including the eventlet-based executor we use by default.
 
  Where are you seeing nova block on RPC calls?

 In Nova we use the indirection API to make calls to the database via the
 conductor over RPC.
 With the solution presented we can create async operations to read from
 and write to the database.
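
 To make this concrete, a rough sketch of what a future-returning
 variant could look like ('call_async' and the thread-pool executor
 here are hypothetical, not oslo.messaging API):

     # A call_async alongside cast/call: returns immediately with a
     # future; result() blocks only when the caller needs the value.
     from concurrent import futures

     class AsyncClient(object):
         def __init__(self, rpc_client, workers=4):
             self._rpc = rpc_client
             self._pool = futures.ThreadPoolExecutor(max_workers=workers)

         def call_async(self, ctxt, method, **kwargs):
             return self._pool.submit(self._rpc.call, ctxt, method,
                                      **kwargs)

     # fut = client.call_async(ctxt, 'instance_get', instance_id=42)
     # ... do other work ...
     # instance = fut.result()  # wait/get at any time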

 Olekssi asked me to give an example; I will reply to him.

 s.
  Doug
 
 


Re: [openstack-dev] meeting with the zaqar team at summit; my notes

2015-05-22 Thread Joe Gordon
On Fri, May 22, 2015 at 8:48 AM, Amrith Kumar amr...@tesora.com wrote:

  I’m posting this to the mailing list to summarize my notes from a
 meeting at 5pm yesterday at Summit relative to Zaqar and lightweight
 multi-tenant messaging and how it may be applicable to a number of projects.



 I’ll begin by saying these are not ‘minutes’ of a meeting, merely my notes
 and observations after the meeting and how they relate specifically to
 Trove. I don’t claim to speak for Trove, other contributors to Trove, other
 projects who were at the meeting, for zaqar, etc., etc.,



 After the meeting I think I have a slightly better understanding of what
 Zaqar is but I am still not entirely sure. As best as I can tell, it is a
 lightweight, keystone authenticated, multi-tenant messaging system. I am
 still a little troubled that of the many people in the room who were
 knowledgeable of zaqar, there appeared to be some disagreement on how best
 to describe or explain the project.


If we cannot agree on how to explain zaqar, how can projects even think
about adopting it?




 I learned that users of zaqar can authenticate with keystone and then
 interact with zaqar, and pass messages using it. I learned also that zaqar
 is spelt with a ‘q’ that is not followed by a ‘u’. i.e. it isn’t zaquar as
 I had thought it was.



 It became clear that the underlying transport in zaqar is not based on an
 existing AMQP service, rather zaqar is a “from the ground up”
 implementation. This scares me (a lot).



 I gather there is currently no oslo.messaging integration with zaqar; for
 Trove to use zaqar we would have to either (a) abandon oslo.messaging and
 use zaqar, or (b) build in smarts within Trove to determine at run time
 whether we are using zaqar or o.m and implement code in Trove to handle the
 differences between them if any.



 It wasn’t clear to me after the meeting what differences there may be with
 Trove; one which was alluded to was the inability to do a synchronous
 (call()) style message and the statement was that this was something that
 “could be built into a driver”.



 It wasn’t clear to me what scale zaqar has been run at and whether anyone
 has in fact deployed and run zaqar at scale, and whether it has been battle
 hardened the way a service like RabbitMQ has. While I hear from many that
 RabbitMQ is a nightmare to scale and manage, I realize that it does in fact
 have a long history of deployments at scale.



 We discussed some of the assumptions being made in the conversation
 relative to the security of the various parties to the communication on the
 existing rabbit message queue and at the conclusion of the meeting I
 believe we left things as below.



 (a) Zaqar would be more appealing if it had a simple oslo.messaging driver
 and an easier path to integration by client projects like Trove. The
 rip-and-replace option put a certain damper on the enthusiasm.

 (b) Even with an o.m integration, the incremental benefits that zaqar
 brought were diminished by the fact that one would still have to operate an
 AMQP (RabbitMQ) service for the rest of the infrastructure message passing
 needs unless and until all projects decide to abandon RabbitMQ in favor of
 zaqar.

 (c) At this time it is likely that there is no net benefit to a project
 like Trove in integrating with zaqar given that the upside is likely
 limited, the downside(s) that we know of are significant, and there is a
 significant unknown risk.



 My thanks to the folks from zaqar for having the session, I certainly
 learnt a lot more about the project, and about openstack.



 Let me conclude where I began, by saying the preceding is not a ‘minutes
 of the meeting’, merely my notes from the meeting.



 Thanks,



 -amrith



Re: [openstack-dev] [all] gate pep8 jobs broken today

2015-05-19 Thread Joe Gordon
On May 19, 2015 12:43 AM, Andreas Jaeger a...@suse.com wrote:

 On 05/19/2015 09:28 AM, Andreas Jaeger wrote:

 On 05/19/2015 02:54 AM, Robert Collins wrote:

 Hi, we had a gate outage today for a few hours.

 http://pad.lv/1456376

 The issue was an interaction between the existence of pbr 1.0, project
 local requirements, hacking 0.10.1 and flake8 2.4.1.

 When flake8 2.4.1 loads plugins (which hacking is) it uses
 pkg_resources and calls load(), which checks requirements.

 pbr in the pep8 jobs is installed by the project requirements.txt
 files, which per global-requirements mostly now say >=0.11,<2.0, so
 pbr 1.0.0 was immediately installed once it was released.

 hacking is installed from release, so hacking 0.10.1 was installed,
 which has the constraint for pbr of <1.0 that we had prior to bumping
 the releases in global-requirements. And so boom.

 We've now released hacking 0.10.2, which contains only the updated pbr
 constraint, and we don't expect any additional fallout from it.

 Thanks Clark, Doug, Ian, Sean, and Joe for helping unwind, analyze and
 fix this.


 There are some projects like ironic that pin an old hacking version and
 thus will not benefit from the new hacking release:

 hacking>=0.9.2,<0.10

 They need to update their hacking version [1],

 Andreas

 [1] https://review.openstack.org/184198


 Additional projects in the openstack namespace that might fail pep8 now
due to the pinning of hacking:

 castellan/test-requirements.txt:hacking>=0.9.2,<0.10
 congress/test-requirements.txt:hacking>=0.9.2,<0.10
 designate/test-requirements.txt:hacking>=0.9.2,<0.10
 heat-cfntools/test-requirements.txt:hacking>=0.8.0,<0.9
 heat-templates/test-requirements.txt:hacking>=0.8.0,<0.9
 ironic-python-agent/test-requirements.txt:hacking>=0.8.0,<0.9
 keystonemiddleware/test-requirements-py3.txt:hacking>=0.8.0,<0.9
 kite/test-requirements.txt:hacking>=0.9.2,<0.10
 manila/test-requirements.txt:hacking>=0.9.2,<0.10
 murano-agent/test-requirements.txt:hacking>=0.8.0,<0.9
 os-apply-config/test-requirements.txt:hacking>=0.9.2,<0.10
 os-client-config/test-requirements.txt:hacking>=0.9.2,<0.10
 os-cloud-config/test-requirements.txt:hacking>=0.9.2,<0.10
 os-collect-config/test-requirements.txt:hacking>=0.9.2,<0.10
 os-refresh-config/test-requirements.txt:hacking>=0.9.2,<0.10
 python-cinderclient/test-requirements.txt:hacking>=0.8.0,<0.9
 python-congressclient/test-requirements.txt:hacking>=0.9.2,<0.10
 python-designateclient/test-requirements.txt:hacking>=0.9.2,<0.10
 python-glanceclient/test-requirements.txt:hacking>=0.8.0,<0.9
 python-heatclient/test-requirements.txt:hacking>=0.8.0,<0.9
 python-kiteclient/test-requirements.txt:hacking>=0.9.1,<0.10
 python-manilaclient/test-requirements.txt:hacking>=0.9.2,<0.10
 python-muranoclient/test-requirements.txt:hacking>=0.9.2,<0.10
 python-swiftclient/test-requirements.txt:hacking>=0.8.0,<0.9
 python-troveclient/test-requirements.txt:hacking>=0.8.0,<0.9
 python-tuskarclient/test-requirements.txt:hacking>=0.9.2,<0.10
 python-zaqarclient/test-requirements.txt:hacking>=0.8.0,<0.9
 rally/test-requirements.txt:hacking>=0.9.2,<0.10
 swift-bench/test-requirements.txt:hacking>=0.8.0,<0.9
 swift/test-requirements.txt:hacking>=0.8.0,<0.9
 tripleo-image-elements/test-requirements.txt:hacking>=0.9.2,<0.10
 tripleo-puppet-elements/test-requirements.txt:hacking>=0.8.0,<0.9
 trove/test-requirements.txt:hacking>=0.8.0,<0.9
 tuskar/test-requirements.txt:hacking>=0.9.2,<0.10
 zaqar/test-requirements-py3.txt:hacking>=0.8.0,<0.9
 zaqar/test-requirements.txt:hacking>=0.9.2,<0.10

 I won't fix them myself. Note that the new hacking version introduces new
checks that also might need to get fixed,


While I would like to see projects move off 0.9.x etc. I think the better
option is to backport the fix to 0.9.x and 0.8.x as needed.
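
For anyone curious where the boom actually surfaces, here is a minimal
sketch of flake8-style plugin loading, assuming the setuptools behavior of
that era where EntryPoint.load() verified a plugin's declared requirements
before importing it:

    import pkg_resources

    # flake8 2.x discovers checkers through the 'flake8.extension' group
    for ep in pkg_resources.iter_entry_points('flake8.extension'):
        try:
            ep.load()
        except pkg_resources.VersionConflict as exc:
            # e.g. hacking 0.10.1 declares pbr<1.0, but pbr 1.0.0 is installed
            print('could not load %s: %s' % (ep.name, exc))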

 Andreas

 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
HRB 21284 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gate pep8 jobs broken today

2015-05-19 Thread Joe Gordon
Once these land [0], I can cut Hacking 0.8.2 and 0.9.6 to unblock projects
still using hacking 0.8.x and 0.9.x respectively


[0]
https://review.openstack.org/#/q/I77f2b7e661c4de067e39596765d36a4463a2d143,n,z

On Tue, May 19, 2015 at 3:54 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On May 19, 2015 12:43 AM, Andreas Jaeger a...@suse.com wrote:
 
  On 05/19/2015 09:28 AM, Andreas Jaeger wrote:
 
  On 05/19/2015 02:54 AM, Robert Collins wrote:
 
  Hi, we had a gate outage today for a few hours.
 
  http://pad.lv/1456376
 
  The issue was an interaction between the existence of pbr 1.0, project
  local requirements, hacking 0.10.1 and flake8 2.4.1.
 
  When flake8 2.4.1 loads plugins (which hacking is) it uses
  pkg_resources and calls load(), which checks requirements.
 
  pbr in the pep8 jobs is installed by the project requirements.txt
  files, which per global-requirements mostly now say >=0.11,<2.0, so
  pbr 1.0.0 was immediately installed once it was released.
 
  hacking is installed from release, so hacking 0.10.1 was installed,
  which has the constraint for pbr of <1.0 that we had prior to bumping
  the releases in global-requirements. And so boom.
 
  We've now released hacking 0.10.2, which contains only the updated pbr
  constraint, and we don't expect any additional fallout from it.
 
  Thanks Clark, Doug, Ian, Sean, and Joe for helping unwind, analyze and
  fix this.
 
 
  There are some projects like ironic that pin an old hacking version and
  thus will not benefit from the new hacking release:
 
  hacking>=0.9.2,<0.10
 
  They need to update their hacking version [1],
 
  Andreas
 
  [1] https://review.openstack.org/184198
 
 
  Additional projects in the openstack namespace that might fail pep8 now
 due to the pinning of hacking:
 
  castellan/test-requirements.txt:hacking>=0.9.2,<0.10
  congress/test-requirements.txt:hacking>=0.9.2,<0.10
  designate/test-requirements.txt:hacking>=0.9.2,<0.10
  heat-cfntools/test-requirements.txt:hacking>=0.8.0,<0.9
  heat-templates/test-requirements.txt:hacking>=0.8.0,<0.9
  ironic-python-agent/test-requirements.txt:hacking>=0.8.0,<0.9
  keystonemiddleware/test-requirements-py3.txt:hacking>=0.8.0,<0.9
  kite/test-requirements.txt:hacking>=0.9.2,<0.10
  manila/test-requirements.txt:hacking>=0.9.2,<0.10
  murano-agent/test-requirements.txt:hacking>=0.8.0,<0.9
  os-apply-config/test-requirements.txt:hacking>=0.9.2,<0.10
  os-client-config/test-requirements.txt:hacking>=0.9.2,<0.10
  os-cloud-config/test-requirements.txt:hacking>=0.9.2,<0.10
  os-collect-config/test-requirements.txt:hacking>=0.9.2,<0.10
  os-refresh-config/test-requirements.txt:hacking>=0.9.2,<0.10
  python-cinderclient/test-requirements.txt:hacking>=0.8.0,<0.9
  python-congressclient/test-requirements.txt:hacking>=0.9.2,<0.10
  python-designateclient/test-requirements.txt:hacking>=0.9.2,<0.10
  python-glanceclient/test-requirements.txt:hacking>=0.8.0,<0.9
  python-heatclient/test-requirements.txt:hacking>=0.8.0,<0.9
  python-kiteclient/test-requirements.txt:hacking>=0.9.1,<0.10
  python-manilaclient/test-requirements.txt:hacking>=0.9.2,<0.10
  python-muranoclient/test-requirements.txt:hacking>=0.9.2,<0.10
  python-swiftclient/test-requirements.txt:hacking>=0.8.0,<0.9
  python-troveclient/test-requirements.txt:hacking>=0.8.0,<0.9
  python-tuskarclient/test-requirements.txt:hacking>=0.9.2,<0.10
  python-zaqarclient/test-requirements.txt:hacking>=0.8.0,<0.9
  rally/test-requirements.txt:hacking>=0.9.2,<0.10
  swift-bench/test-requirements.txt:hacking>=0.8.0,<0.9
  swift/test-requirements.txt:hacking>=0.8.0,<0.9
  tripleo-image-elements/test-requirements.txt:hacking>=0.9.2,<0.10
  tripleo-puppet-elements/test-requirements.txt:hacking>=0.8.0,<0.9
  trove/test-requirements.txt:hacking>=0.8.0,<0.9
  tuskar/test-requirements.txt:hacking>=0.9.2,<0.10
  zaqar/test-requirements-py3.txt:hacking>=0.8.0,<0.9
  zaqar/test-requirements.txt:hacking>=0.9.2,<0.10
 
  I won't fix them myself. Note that the new hacking version introduces
 new checks that also might need to get fixed,
 

 While I would like to see projects move off 0.9.x etc. I think the better
 option is to backport the fix to 0.9.x and 0.8.x as needed.

  Andreas
 
  --
   Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
 GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
 HRB 21284 (AG Nürnberg)
  GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-15 Thread Joe Gordon
On Fri, May 15, 2015 at 2:27 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Thu, May 14, 2015 at 3:52 AM, John Garbutt j...@johngarbutt.com
 wrote:

 On 12 May 2015 at 20:33, Sean Dague s...@dague.net wrote:
  On 05/12/2015 01:12 PM, Jeremy Stanley wrote:
  On 2015-05-12 10:04:11 -0700 (-0700), Clint Byrum wrote:
  It's a nice up side. However, as others have pointed out, it's only
  capable of displaying the most basic pieces of the architecture.
 
  For higher level views with more components, I don't think ASCII art
  can provide enough bandwidth to help as much as a vector diagram.
 
  Of course, simply a reminder that just because you have one or two
  complex diagram callouts in a document doesn't mean it's necessary
  to also go back and replace your simpler ASCII art diagrams with
  unintelligible (without rendering) SVG or Postscript or whatever.
  Doing so pointlessly alienates at least some fraction of readers.
 
  Sure, it's all about trade offs.
 
  But I believe that statement implicitly assumes that ascii art diagrams
  do not alienate some fraction of readers. And I think that's a bad
  assumption.
 
  If we all feel alienated every time anyone does anything that's not
  exactly the way we would have done it, it's time to give up and pack it
  in. :) This thread specifically mentioned source based image formats
  that were internationally adopted open standards (w3c SVG, ISO ODG) that
  have free software editors that exist in Windows, Mac, and Linux
  (Inkscape and Open/LibreOffice).

 Some great points made here.

 Let's try to decide something, and move forward here.

 Key requirements seem to be:
 * we need something that gives us readable diagrams
 * if it's not easy to edit, it will go stale
 * ideally needs to be source based, so it lives happily inside git
 * needs to integrate into our sphinx pipeline
 * ideally have an opensource editor for that format (import and
 export), for most platforms

 ascii art fails on many of these, but it's always a trade off.

 Possible way forward:
 * lets avoid merging large hard to edit bitmap style images
 * nova-core reviewers can apply their judgement on merging source based
 formats
 * however it *must* render correctly in the generated html (see result
 of docs CI job)

 Trying out SVG, and possibly blockdiag, seem like the front runners.
 I don't think we will get consensus without trying them, so let's do that.

 Will that approach work?


 Sounds like a great plan.




After further investigation, blockdiag is useless for moderately complex
diagrams.

Here is my attempt at graphing nova [0], but due to a blockdiag bug from
2013 [1], it is impossible to read clearly. For example, in the diagram
there is not supposed to be any arrow between the conductor and
cinder/glance/neutron. I looked into dia, and while it has plenty of
diagram shapes it doesn't have a good template for software architecture,
but maybe there is a way to make dia work. And that just leaves SVG
graphics; after spending an hour or two playing around with Inkscape, it
looks promising (although the learning curve is pretty steep). Here is my
first attempt in Inkscape [2].


[0]
http://interactive.blockdiag.com/?compression=deflate&src=eJx9UMtOAzEMvOcrrL0vPwCtVHYryoG2EvSEOHiTtI0axavEFQK0_47dB1oOkEuSmbE9ni6SPbiAO_gyAJviM7yWPfYeJlChZcrV2-2VqafQxOAT62u2fhwTC8rhk9KIkWOMfuBOC0NyPtdLf-RMqX6ImKwXWbN6Wm9e5v9ppNcu07EXi_puVsv2LL-U6jAd8wsSTByJV-QgtibQU-aMgcft4G-RcBE7HzWH9h7QWl9KpaMKf0SNxxGzdyfkElgMSVcCS5GyFnYR7aESxCFjh8WPwt1Gerd7zHxzJc9J_2wiW8r93Czm7cnOYAZjhm9d4H0M
[1] https://bitbucket.org/blockdiag/blockdiag/issue/45/arrows-collisions
[2] https://i.imgur.com/TXwsRoB.png



 Thanks,
 John

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [surge] Introducing Surge - rapid deploy/scale stream processing systems on OpenStack

2015-05-15 Thread Joe Gordon
On Fri, May 15, 2015 at 10:13 AM, Debojyoti Dutta ddu...@gmail.com wrote:

 Hi,

 It gives me a great pleasure to introduce Surge - a system to rapidly
 deploy and scale a stream processing system on OpenStack. It leverages
 Vagrant and Ansible, and supports both OpenStack as well as the local mode
 (with VirtualBox).

 https://github.com/CiscoSystems/surge


I see you support Storm and Kafka.

How is this different from Sahara's Storm plugin?

https://github.com/openstack/sahara/blob/45045d918f363fa5763cde700561434345017661/setup.cfg#L47

And I See Sahara is exploring Kafka support:
https://blueprints.launchpad.net/sahara/+spec/cdh-kafka-service


 Hope to see a lot of pull requests and comments.

 thx
 -Debo~

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-15 Thread Joe Gordon
On Tue, May 12, 2015 at 5:25 AM, Thierry Carrez thie...@openstack.org
wrote:

 Sean Dague wrote:
  They can be modified if you provide source files, or use a source
  oriented format like SVG, or ISO standard ODG (used by OpenOffice /
  LibreOffice). There is a reason the spider diagram has ended up in
  every single OpenStack presentation I've seen for the last 2 years.

 And there is also a reason the spider diagram was never updated (or
 fixed for accuracy, for that matter).


I tried creating the spider diagram programmatically with pydot, but it
didn't work very well; it just looks like a plate of spaghetti:

Diagram: https://i.imgur.com/fDqpB9m.png
source: https://github.com/jogo/graphing-openstack
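
For reference, the pydot attempt boils down to something like this toy
subset (service names trimmed; graphviz must be installed for the PNG
step):

    import pydot

    graph = pydot.Dot('openstack', graph_type='digraph', rankdir='LR')
    for svc in ('nova', 'neutron', 'cinder', 'glance', 'keystone'):
        graph.add_node(pydot.Node(svc))
    # every service talks to keystone, for a start
    for svc in ('nova', 'neutron', 'cinder', 'glance'):
        graph.add_edge(pydot.Edge(svc, 'keystone', label='auth'))
    graph.write_png('openstack-spider.png')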


 I'm not saying we should use ASCII art, I'm saying we should use a
 source-oriented format so that we reduce the likeliness of stale
 information. I was mostly reacting to Matt's mention of real image
 files (taking as an example the pictures in
 https://wiki.openstack.org/wiki/QA/AuthInterface which are PNGs without
 any source).

 If only one person can easily update a picture, it *will* go stale. And
 I think I prefer ugly to plain wrong. With source-oriented formats
 you can take advantage of more dimensions without sacrificing the
 ability to update it.

 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-15 Thread Joe Gordon
On Thu, May 14, 2015 at 3:52 AM, John Garbutt j...@johngarbutt.com wrote:

 On 12 May 2015 at 20:33, Sean Dague s...@dague.net wrote:
  On 05/12/2015 01:12 PM, Jeremy Stanley wrote:
  On 2015-05-12 10:04:11 -0700 (-0700), Clint Byrum wrote:
  It's a nice up side. However, as others have pointed out, it's only
  capable of displaying the most basic pieces of the architecture.
 
  For higher level views with more components, I don't think ASCII art
  can provide enough bandwidth to help as much as a vector diagram.
 
  Of course, simply a reminder that just because you have one or two
  complex diagram callouts in a document doesn't mean it's necessary
  to also go back and replace your simpler ASCII art diagrams with
  unintelligible (without rendering) SVG or Postscript or whatever.
  Doing so pointlessly alienates at least some fraction of readers.
 
  Sure, it's all about trade offs.
 
  But I believe that statement implicitly assumes that ascii art diagrams
  do not alienate some fraction of readers. And I think that's a bad
  assumption.
 
  If we all feel alienated every time anyone does anything that's not
  exactly the way we would have done it, it's time to give up and pack it
  in. :) This thread specifically mentioned source based image formats
  that were internationally adopted open standards (w3c SVG, ISO ODG) that
  have free software editors that exist in Windows, Mac, and Linux
  (Inkscape and Open/LibreOffice).

 Some great points made here.

 Let's try to decide something, and move forward here.

 Key requirements seem to be:
 * we need something that gives us readable diagrams
 * if it's not easy to edit, it will go stale
 * ideally needs to be source based, so it lives happily inside git
 * needs to integrate into our sphinx pipeline
 * ideally have an opensource editor for that format (import and
 export), for most platforms

 ascii art fails on many of these, but it's always a trade off.

 Possible way forward:
 * lets avoid merging large hard to edit bitmap style images
 * nova-core reviewers can apply their judgement on merging source based
 formats
 * however it *must* render correctly in the generated html (see result
 of docs CI job)

 Trying out SVG, and possibly blockdiag, seem like the front runners.
 I don't think we will get consensus without trying them, so let's do that.

 Will that approach work?


Sounds like a great plan.


 Thanks,
 John

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests

2015-05-14 Thread Joe Gordon
On May 14, 2015 12:50 PM, Maish Saidel-Keesing mais...@maishsk.com
wrote:

 I just saw an email on the Operators list [1] that I think would allow a
much simpler process for the non-developer community to submit a feature
request. I understand that this was raised once upon a time [2], at least
in part, a while back.

 Rally has the option to submit a feature request (a.k.a. backlog),
which I think is straightforward and simple.

 I think this will be a good way for those who are not familiar with the
way a spec should be written, and honestly would not know how to write such
a spec for any of the projects, but have identified a missing feature or a
need for an improvement in one of the projects. A developer then picks up

 They only need to specify 3 small things (a sentence or two for each):
 1. Use Case
 2. Problem description
 3. Possible solution

That is exactly what the backlog is supposed to be.

These specifications have the problem description completed, but all other
sections are optional.

From:

http://specs.openstack.org/openstack/nova-specs/specs/backlog/

As I am sure there is room for improvement, why not propose a change to the
specs repo to improve the backlog spec process?


 I am not saying that each feature request should be implemented - or that
each possible solution is the best and only way to solve the problem. That
should be up to each and every project: how this will be (or even if it
should be) implemented, how important it will be for them to implement this
feature, and what priority this should receive. A developer then picks up
the request and turns it into a proper blueprint with proper actionable
items.

 Of course this has to be a valid feature request, and not just someone
looking for support; how exactly this should be vetted, I have not thought
through to the end. But I was looking to hear some feedback on the
idea of making this a way for all of the OpenStack projects to allow them
to collect actual feedback in a simple way.

 Your thoughts and feedback would be appreciated.

 [1]
http://lists.openstack.org/pipermail/openstack-operators/2015-May/006982.html
 [2]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044057.html

 --
 Best Regards,
 Maish Saidel-Keesing

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] python-novaclient 2.25.0

2015-05-12 Thread Joe Gordon
And openstackclient 1.0.4 broke grenade:

https://bugs.launchpad.net/python-openstackclient/+bug/1454467

I think we need a 1.1 release for trunk and make sure caps are set so it's
not used in stable/kilo. Or something like that.

On Tue, May 12, 2015 at 12:57 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:



 On 5/12/2015 10:35 AM, Matt Riedemann wrote:

 https://launchpad.net/python-novaclient/+milestone/2.25.0

 mriedem@ubuntu:~/git/python-novaclient$ git log --no-merges --oneline
 2.24.1..2.25.0
 0e35f2a Drop use of 'oslo' namespace package
 667f1af Reuse uuidutils frim oslo_utils
 4a7cf96 Sync latest code from oslo-incubator
 d03a85a Updated from global requirements
 02c04c5 Make _discover_extensions public
 99fcc69 Updated from global requirements
 bf6fbdb nova client now support limits subcommand
 95421a3 Don't use SessionClient for version-list API
 86ec0c6 Add min/max microversions to version-list cmd
 61ef35f Deprecate v1.1 and remove v3
 4f9e65c Don't lookup service url when bypass_url is given
 098116d Revert nova flavor-show command is inconsistent
 af7c850 Update README to work with release tools
 420dc28 refactor functional test base class to no inherit from tempest_lib
 2761606 Report better error message --ephemeral poor usage
 a63aa51 Fix displaying of an unavailable flavor of a showing instance
 19d4d35 Handle binary userdata files such as gzip
 14cada7 Add --all-tenants option to 'nova delete'



 And stable/kilo is busted, as I predicted would happen in the nova channel
 around 10am before the release:

 https://bugs.launchpad.net/python-openstackclient/+bug/1454397


 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] python-novaclient 2.25.0

2015-05-12 Thread Joe Gordon
On Tue, May 12, 2015 at 7:18 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-05-12 17:40:47 -0700 (-0700), Joe Gordon wrote:
  And openstackclient 1.0.4 broke grenade:
 
  https://bugs.launchpad.net/python-openstackclient/+bug/1454467
 
  I think we need a 1.1 release for trunk and make sure caps are set so it's
  not used in stable/kilo. Or something like that.

 It looks like the issue is that stable/kilo global requirements has
 capped python-cinderclient<1.2.0, and python-openstackclient
 1.0.4 matches that, but stable/kilo of glance_store requires
 python-cinderclient with no cap and is being installed first (it's
 not in the openstack/requirements projects.txt on master or
 stable/kilo and so hasn't been getting requirements update proposal
 changes).


It looks like there is enough trouble for us to both be right.

http://logs.openstack.org/92/170492/2/gate/gate-grenade-dsvm/c8d20ea/logs/old/devstacklog.txt.gz#_2015-05-12_23_54_53_262
 is exactly what you are describing.

But then we go on to install nova which has the correct cap, and we revert
to the right cinderclient for kilo

http://logs.openstack.org/92/170492/2/gate/gate-grenade-dsvm/c8d20ea/logs/old/devstacklog.txt.gz#_2015-05-12_23_55_51_481

So we do have a bad dependency in Kilo but it's not breaking grenade on
master.



 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] python-novaclient 2.25.0

2015-05-12 Thread Joe Gordon
On Tue, May 12, 2015 at 7:52 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-05-12 19:43:21 -0700 (-0700), Joe Gordon wrote:
 [...]
  But then we go on to install nova which has the correct cap, and we
 revert
  to the right cinderclient for kilo
 
 
 http://logs.openstack.org/92/170492/2/gate/gate-grenade-dsvm/c8d20ea/logs/old/devstacklog.txt.gz#_2015-05-12_23_55_51_481
 
  So we do have a bad dependency in Kilo but it's not breaking grenade on
  master.

 Aha, and then grenade fails to upgrade from the kilo openstackclient
 point release (1.0.4) to the latest release for master (1.2.0).


Hopefully this is the fix: https://review.openstack.org/182524


 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-11 Thread Joe Gordon
When learning about how a project works one of the first things I look for
is a brief architecture description along with a diagram. For most
OpenStack projects, all I can find is a bunch of random third party slides
and diagrams.

Most individual OpenStack projects have either no architecture diagram or
ascii art. Searching for 'OpenStack X architecture' where X is any of the
OpenStack projects turns up pretty sad results. For example, heat [0] and
Keystone [1] have no diagram. Nova, on the other hand, does have a diagram,
but it's ascii art [2]. I don't think ascii art makes for great user-facing
documentation (for any kind of user).

So how can we do better than ascii art architecture diagrams?

[0] http://docs.openstack.org/developer/heat/architecture.html
[1] http://docs.openstack.org/developer/keystone/architecture.html
[2] http://docs.openstack.org/developer/nova/devref/architecture.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Joe Gordon
On Fri, May 8, 2015 at 4:23 AM, Sean Dague s...@dague.net wrote:

 On 05/08/2015 07:13 AM, Robert Collins wrote:
  On 8 May 2015 at 22:54, Sean Dague s...@dague.net wrote:
  I'm slightly confused how we got there, because we do try to install
  everything all at once in the test jobs -
 
 http://logs.openstack.org/83/181083/1/check/check-requirements-integration-dsvm/4effcf7/console.html#_2015-05-07_17_49_26_699
 
  And it seemed to work, you can find similar lines in previous changes as
  well. That was specifically added as a check for these kinds of issues.
  Is this a race in the resolution?
 
  What resolution :).
 
  So what happens with pip install -r
  /opt/stack/new/requirements/global-requirements.txt is that the
  constraints in that file are all immediately put into pip's state,
  including oslo.config = 1.11.0, and then all other constraints that
  reference to oslo.config are simply ignored. this is 1b (and 2a) on
  https://github.com/pypa/pip/issues/988.
 
  IOW we haven't been testing what we've thought we've been testing.
  What we've been testing is that 'python setup.py install X' for X in
  global-requirements.txt works, which sadly doesn't tell us a lot at
  all.
 
  So, as I have a working (but unpolished) resolver, when I try to do
  the same thing, it chews away at the problem and concludes that no, it
  can't do it - because its no longer ignoring the additional
  constraints.
 
  To get out of the hole, we might consider using pip-compile now as a
  warning job - if it can succeed we'll be able to be reasonably
  confident that pip itself will succeed once the resolver is merged.
 
  The resolver I have doesn't preserve the '1b' feature at all at this
  point, and we're going to need to find a way to separate out 'I want
  X' from 'I want X and I know better than you', which will let folk get
  into tasty tasty trouble (like we're in now).

 Gotcha, so, yes, so the subtleties of pip were lost here.

 Instead of using another tool, could we make a version of this job pull
 and use the prerelease version of your pip code? Then we can run the
 same tests and fix them in a non-voting job against this code that has
 not yet been released.


Once we are actually testing that all of global requirements is
co-installable, will we end up with even more cases like this? Or is this
just an artifact of capping for kilo?
https://review.openstack.org/#/c/166377/
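
To make the ignored-constraints problem concrete, here is a toy
reproduction using the 'packaging' library rather than pip itself (the
version numbers are illustrative):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    pinned = SpecifierSet('>=1.11.0')     # what global-requirements says
    transitive = SpecifierSet('<1.10.0')  # what some dependency still wants
    combined = pinned & transitive

    candidates = ['1.9.0', '1.10.0', '1.11.0']
    print([v for v in candidates if Version(v) in combined])  # [] -> conflict

A real resolver has to report that emptiness as an error; pip today just
keeps the first constraint it saw and drops the rest.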


 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Joe Gordon
As a heads up, here is a patch to remove Sahara from the default
configuration as well. This is part of the effort to further decouple the
'integrated gate' so we don't have to gate every project on the tests for
every project.

https://review.openstack.org/#/c/181230/

On Thu, May 7, 2015 at 11:58 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 So... test jobs should be extremely explicit about what they setup and
 what they expect.


 +2

 Best regards,
 Boris Pavlovic

 On Thu, May 7, 2015 at 9:44 PM, Sean Dague s...@dague.net wrote:

 On 05/07/2015 02:29 PM, Joshua Harlow wrote:
  Boris Pavlovic wrote:
  Sean,
 
  Nobody is able to track and know *everything*.
 
  A friendly reminder that Heat is going to be removed and not installed by
  default would help to avoid such situations.
 
  Doesn't keystone have a service listing? Use that in rally (and
  elsewhere?), if keystone had a service and each service had an API
  discovery ability, there u go, profit! ;)
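
The discovery Joshua has in mind would look roughly like this
(keystoneclient v3 API; the endpoint and credentials are illustrative):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))
    deployed = {svc.type for svc in ks.services.list()}
    if 'orchestration' not in deployed:
        print('heat is not registered; skip the heat scenarios')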

 Service listing for test jobs is actually quite dangerous, because then
 something can change something about which services are registered, and
 you automatically start skipping 30% of your tests because you react
 correctly to this change. However, that means the job stopped doing what
 you think it should do.

 *This has happened multiple times in the past*. And typically days,
 weeks, or months go by before someone notices in investigating an
 unrelated failure. And then it's days, weeks, or months to dig out of
 the regressions introduced.

 So... test jobs should be extremely explicit about what they setup and
 what they expect.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-07 Thread Joe Gordon
On May 7, 2015 2:37 AM, Sahid Orentino Ferdjaoui 
sahid.ferdja...@redhat.com wrote:

 Hi,

 The primary point of this expected discussion around asynchronous
 communication is to optimize performance by reducing latency.

 For instance, the design used in Nova (and probably other projects)
 allows asynchronous operations in two ways:

 1. When communicating between services
 2. When communicating with the database

 1 and 2 are close since they use the same API, but I prefer to keep a
 distinction here since the high-level layer is not the same.

 From the Oslo Messaging point of view we currently have two methods to
 invoke an RPC:

   Cast and Call: the first one is non-blocking and will invoke an RPC
 without waiting for any response, while the second will block the
 process and wait for the response.

 The aim is to add a new method which will return, without blocking the
 process, an object (let's call it a Future) which will provide some basic
 methods to wait for and get a response at any time.

 The benefit for Nova will come at a higher level:

 1. When communicating between services it will not be necessary to block
    the process, and this free time can be used to execute some other
    computations.

Isn't this what the use of green threads (and eventlet) is supposed to
solve? Assuming my understanding is correct, and we can fix any issues
without adding async oslo.messaging, then adding yet another async pattern
seems like a bad thing.


   future = rpcapi.invoke_long_process()
  ... do something else here ...
   result = future.get_response()

 2. We can use the benefit of all of the work previously done with the
    Conductor; by updating the framework Objects and Indirection
    Api we should be able to take advantage of async operations to the database.

MyObject = MyClassObject.get_async()
  ... do something else here ...
MyObject.wait()

MyObject.foo = bar
MyObject.save_async()
  ... do something else here ...
MyObject.wait()

 All of this is meant to illustrate the idea and has to be discussed.
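
As a rough sketch, the proposed Future could even be prototyped today on
top of the existing blocking call(), with no new transport feature;
call_async() and all names below are hypothetical, not oslo.messaging API:

    from concurrent import futures

    _pool = futures.ThreadPoolExecutor(max_workers=4)

    def call_async(client, ctxt, method, **kwargs):
        """Return a Future immediately, wrapping a blocking call()."""
        return _pool.submit(client.call, ctxt, method, **kwargs)

    # future = call_async(rpcclient, ctxt, 'invoke_long_process')
    # ... do something else here ...
    # result = future.result()  # blocks only when the answer is needed

Under eventlet the pool would be greenthreads instead of OS threads; the
point is only to show the shape of the API.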

 I guess the first job needs to come from Oslo Messaging, so the
 question is to gauge the feeling here and then from Nova, since it will
 be the primary consumer of this feature.

 https://blueprints.launchpad.net/nova/+spec/asynchronous-communication

 Thanks,
 s.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-05 Thread Joe Gordon
On Tue, May 5, 2015 at 9:53 AM, James Bottomley 
james.bottom...@hansenpartnership.com wrote:

 On Tue, 2015-05-05 at 10:45 +0200, Thierry Carrez wrote:
  Joe Gordon wrote:
   [...]
   To tackle this I would like to propose the idea of a periodic developer
   oriented newsletter, and if we agree to go forward with this, hopefully
   the foundation can help us find someone to write the newsletter.
 
  I've been discussing the idea of a LWN for OpenStack for some time,
  originally with Mark McLoughlin. For those who don't know it, LWN
  (lwn.net) is a source of quality tech reporting on Linux in general (and
  the kernel in particular). It's written by developers and tech reporters
  and funded by subscribers.
 
  An LWN-like OpenStack development newsletter would provide general
  status, dive into specific features, report on specific
  talks/conferences, summarize threads etc. It would be tremendously
  useful to the development community.
 
  The issue is, who can write such content ? It is a full-time job to
  produce authored content, you can't just copy (or link to) content
  produced elsewhere. It takes a very special kind of individual to write
  such content: the person has to be highly technical, able to tackle any
  topic, and totally connected with the OpenStack development community.
  That person has to be cross-project and ideally have already-built
  legitimacy.

 Here, you're being overly restrictive.  Lwn.net isn't staffed by top
 level kernel maintainers (although it does solicit the occasional
 article from them).  It's staffed by people who gained credibility via
 their insightful reporting rather than by their contributions.  I see no
 reason why the same model wouldn't work for OpenStack.


++. I have a hunch that, like many things in OpenStack, if you make a
space for people to step up, they will.



 There is one technical difference: in the kernel, you can get all the
 information from the linux-kernel (and other mailing list) firehose if
 you're skilled enough to extract it.  With OpenStack, openstack-dev
 isn't enough so you have to do other stuff as well, but that's more or
 less equivalent to additional research.

   It's basically the kind of profile every OpenStack company
  is struggling and fighting to hire. And that rare person should not
  really want to spend that much time developing (or become CTO of a
  startup) but prefer to write technical articles about what happens in
  OpenStack development. I'm not sure such a person exists. And a
  newsletter actually takes more than one such person, because it's a lot
  of work (even if not weekly).

 That's a bit pessimistic: followed to its logical conclusion it would
 say that lwn.net can't exist either ... which is a bit of a
 contradiction.

  So as much as I'd like this to happen, I'm not convinced it's worth
  getting excited unless we have clear indication that we would have
  people willing and able to pull it off. The matter of who pays the bill
  is secondary -- I just don't think the profile exists.
 
  For the matter, I tried to push such an idea in the past and couldn't
  find anyone to fit the rare profile I think is needed to succeed. All
  the people I could think of had other more interesting things to do. I
  don't think things changed -- but I'd love to be proven wrong.

 Um, I assume you've thought of this already, but have you tried asking
 lwn.net?  As you say above, they already fit the profile.  Whether they
 have the bandwidth is another matter, but I believe their Chief Editor
 (Jon Corbet) may welcome a broadening of the funding base, particularly
 if the OpenStack foundation were offering seed funding for the
 endeavour.

 James



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-05 Thread Joe Gordon
On Mon, May 4, 2015 at 3:35 PM, Stefano Maffulli stef...@openstack.org
wrote:

 Thanks Joe for bringing this up. I have always tried to find topics
 worth being covered in the weekly newsletter. I assemble that newsletter
 thinking of developers and operators as the main targets; I'd like both
 audiences to have one place to look at weekly and skim rapidly to see if
 they missed something interesting.

 Over the years I have tried to change it based on feedback I received, so
 this conversation is great to have.

 On 05/04/2015 12:03 PM, Joe Gordon wrote:
  The  big questions I would like to see answered are:
 
  * What are the big challenges each project is currently working on?
  * What can we learn from each other?
  * Where are individual projects trying to solve the same problem
  independently?

 These are all important and interesting questions. When there were fewer
 projects it wasn't too hard to keep things together. Nowadays there is a
 lot more going on and in gerrit, which requires a bit more upfront
 investment to be useful.

  To answer these questions one needs to look at a lot of sources,
 including:
 
  * Weekly meeting logs, or hopefully just the notes assuming we get
  better at taking detailed notes

 I counted over 80 meetings each week: if they all took excellent notes,
 with clear #info lines for the relevant stuff, one person would probably
 be able to parse them all in a couple of hours every week to identify
 items worth reporting.

 My experience is that IRC meeting logs don't convey anything useful to
 outsiders. Pick any project log from
 http://eavesdrop.openstack.org/meetings/ and you'll see what I mean.
 Even the best ones don't really mean much to those not into the project
 itself.

 I am very skeptical that we can educate all meeting participants to take
 notes that can be meaningful to outsiders. I am also not sure that the
 IRC meeting notes are the right place for this.

 Maybe it would make more sense to educate PTLs and liaisons to nudge me
 with a brief email, or to log a quick snippet of text in some sort of
 notification bucket to share with the rest of the contributors.


Makes sense to me.


  * approved specs

 More than the approved ones, which are easy to spot on
 specs.openstack.org, I think the new ones proposed are more interesting.

 Ideally I would find a way to publish draft specs on
 specs.openstack.org/drafts/ or somehow provide a way for readers not
 versed in gerrit to more easily discover what's coming.

 Until a better technical solution exists, I can pull regularly from all
 status:open changesets from *-specs repositories and put them in a
 section of the weekly newsletter.


You can search for the list of merged specs by age (last updated) to get a
decent view of what has merged since last week.

https://review.openstack.org/#/q/is:merged+age:1week+project:%255Eopenstack/.*-specs,n,z



  * periodically talk to the PTL of each project to see if any big
  discussions happened elsewhere

 I think this already happen in the xproject meeting, doesn't it?


Things come up in the xproject meeting that we already know to be cross
project. If two projects independently are tackling the same issue they may
not realize there is room for collaboration.



  * Topics selected for discussion at summits

 I'm confused about this: aren't these visible already as part of the
 schedule?


Yes they are, but going through the list of all the topics and making a
list of the big-ticket items/highlights is still a lot of work.


  Off the top of my head here are a few topics that would make good
  candidates for this newsletter:
 
  * What are different projects doing with microversioned APIs, I know
  that at least two projects are tackling this
  * How has the specs process evolved in each project, we all started out
  from a common point but seem to have all gone in slightly different
  directions
  * What will each projects priorities be in Liberty? Do any of them
 overlap?
  * Any process changes that projects have tried that worked or didn't work
  * How is functional testing evolving in each project

 Great to have precise examples to work with. It's a useful exercise to
 start from the end and trace back to where the answer will be. What would
 the answers to these questions look like?

  Would this help with cross project communication? Is this feasible?
  Other thoughts?

 I think it would help, it is feasible. Let's keep the ideas rolling :)

 /stef

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Joe Gordon
Before going any further, I am proposing something to make it easier for
the developer community to keep track of what other projects are working
on. I am not proposing anything to directly help operators or users, that
is a separate problem space.



In Mark McClain's TC candidacy email he brought up the issue of cross
project communication[0]:

  Our codebase has grown significantly over the years and a
contributor must invest significant time to understand and follow
every project; however many contributors have limited time must choose
a subset of projects to direct their focus.  As a result, it becomes
important to employ cross project communication to explain major
technical decisions, share solutions to project challenges that might
be widely applicable, and leverage our collective experience.  The TC
should seek new ways to facilitate cross project communication that
will enable the community to craft improvements to the interfaces
between projects as there will be greater familiarity between across
the boundary.

 Better cross project communication will make it easier to share technical
solutions and promote a more unified experience across projects.  It seems
like just about every time I talk to people from different projects I learn
about something interesting and relevant that they are working on.

While I usually track discussions on the mailing list, it is a poor way of
keeping track of the big issues each project is working on. Stefano's
'OpenStack Community Weekly Newsletter' does a good job of highlighting
many things including important mailing list conversations, but it doesn't
really answer the question of What is X (Ironic, Nova, Neutron, Cinder,
Keystone, Heat etc.) up to?

To tackle this I would like to propose the idea of a periodic developer
oriented newsletter, and if we agree to go forward with this, hopefully the
foundation can help us find someone to write the newsletter.

Now on to the details.

I am not sure what the right cadence for this newsletter would be, but I
think weekly is too frequent and once per 6-month cycle would be too
infrequent.

The  big questions I would like to see answered are:

* What are the big challenges each project is currently working on?
* What can we learn from each other?
* Where are individual projects trying to solve the same problem
independently?

To answer these questions one needs to look at a lot of sources, including:

* Weekly meeting logs, or hopefully just the notes assuming we get better
at taking detailed notes
* approved specs
* periodically talk to the PTL of each project to see if any big
discussions happened elsewhere
* Topics selected for discussion at summits

Off the top of my head here are a few topics that would make good
candidates for this newsletter:

* What are different projects doing with microversioned APIs, I know that
at least two projects are tackling this
* How has the specs process evolved in each project, we all started out
from a common point but seem to have all gone in slightly different
directions
* What will each projects priorities be in Liberty? Do any of them overlap?
* Any process changes that projects have tried that worked or didn't work
* How is functional testing evolving in each project


Would this help with cross project communication? Is this feasible? Other
thoughts?

best,
Joe




[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062361.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-04 Thread Joe Gordon
On Mon, May 4, 2015 at 12:27 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 5 May 2015 at 07:03, Joe Gordon joe.gord...@gmail.com wrote:
  Before going any further, I am proposing something to make it easier for
 the
  developer community to keep track of what other projects are working on.
 I
  am not proposing anything to directly help operators or users, that is a
  separate problem space.

 I like the thrust of your proposal.

 Any reason not to ask the existing newsletter to be a bit richer? Much
 of the same effort is required to do the existing one and the content
 you propose IMO.


Short answer: Maybe. To make the existing newsletter 'a bit richer' would
require a fairly significant amount of additional work. Also, I think
sending out this more detailed newsletter on a weekly basis would be way
too frequent. Lastly, there are sections in the current newsletter that
are very useful but don't belong in the one I am proposing; sections
such as Upcoming events, Tips n' tricks, and The road to Vancouver.


 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Elections] TC Election analysis

2015-04-30 Thread Joe Gordon
As others have done for past elections, here is a brief breakdown of the TC
election ballot data.

analysis: http://paste.openstack.org/show/213831
source code:  http://paste.openstack.org/show/213830

Some highlights are:

* 3 people voted but ranked everyone as #19
* 16% of the ballots voted for 3 or fewer candidates
* Thierry and Jay did much better than everyone else.
* Most winning candidates were ranked #19 over 1/3 of the time.
* No one voted for only James while 6 people only voted for Flavio (min and
max)
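
For the curious, the tallying behind numbers like these is roughly the
following (the real source is in the paste above; the CSV file name and
layout here are assumptions):

    import csv

    # one ballot per row; one column per candidate holding that ballot's
    # rank (1 = best, 19 = lowest)
    with open('tc-ballots.csv') as f:
        ballots = list(csv.DictReader(f))

    all_19 = sum(1 for b in ballots
                 if all(rank == '19' for rank in b.values()))
    few = sum(1 for b in ballots
              if sum(rank != '19' for rank in b.values()) <= 3)
    print('%d ballots ranked everyone as #19' % all_19)
    print('%.0f%% voted for 3 or fewer candidates'
          % (100.0 * few / len(ballots)))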
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Elections] TC Election analysis

2015-04-30 Thread Joe Gordon
On Thu, Apr 30, 2015 at 1:28 PM, Maish Saidel-Keesing mais...@maishsk.com
wrote:


 On 04/30/15 21:48, Joe Gordon wrote:

 As others have done for past elections, here is a brief breakdown of the
 TC election ballot data.

 analysis: http://paste.openstack.org/show/213831
 source code: http://paste.openstack.org/show/213830

 Some highlights are:

 * 3 people voted but ranked everyone as #19
 * 16% of the ballots voted for 3 or fewer candidates
  * Thierry and Jay did much better than everyone else.
 * Most winning candidates were ranked #19 over 1/3 of the time.
 * No one voted for only James while 6 people only voted for Flavio (min
 and max)


  Thanks Joe for the analysis. It is quite interesting. Another thing that
 I find interesting is the low participation rate.

 Out of 2169 eligible voters, 548 participated - that is 25.26%.


According to [0] there were 369 "Regular" contributors in 2014, and in Kilo
there were only 748 contributors with 5 or more patches out of the 1886
contributors [1]. So measuring participation from eligible voters is
misleading. Well over half of the eligible voters have only contributed a
few patches, and we had more voters participate than we have 'regular'
contributors. So I think our turnout is actually really good.

[0] https://www.openstack.org/assets/reports/osf-annual-report-2014.pdf
[1] http://stackalytics.com/?release=kilo&metric=commits



 Comparing to previous elections
 Oct. 2014 - 1893 eligible voters, 506 participated - 26.73%
 Apr. 2014 - 1510 eligible voters, 408 participated - 29.66%

 I am wondering why the participation level is so low. This is really one
 of the few opportunities a contributor has to define the direction of
 OpenStack as a whole. And yet it goes down each election.

 I can think of perhaps two reasons for low participation.

 1. People do not find that they need interaction with the TC; they are
 focused on the work going on in their project and at most have the
 interaction they need with the PTL. They do not really care that much
 about, or have any dealings with, the TC and its members, so they do not
 find it important enough to participate.

 2. Could it be that OpenStack has contributors who are producing code
 mainly because that is what their job is (they are hired by a vendor, a
 company that has made it a priority to get code into the products), and
 evidently there is a sizable number of people like this, but they do not
 really participate in the community?

 Thoughts?

 --
 Best Regards,
 Maish Saidel-Keesing

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core

2015-04-30 Thread Joe Gordon
On Thu, Apr 30, 2015 at 4:30 AM, John Garbutt j...@johngarbutt.com wrote:

 Hi,

 I propose we add Melanie to nova-core.

 She has been consistently doing great quality code reviews[1],
 alongside a wide array of other really valuable contributions to the
 Nova project.

 Please respond with comments, +1s, or objections within one week.


+1



 Many thanks,
 John

 [1] https://review.openstack.org/#/dashboard/4690

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-29 Thread Joe Gordon
On Wed, Apr 22, 2015 at 12:30 PM, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Apr 22, 2015 at 1:19 PM, Russell Bryant rbry...@redhat.com
 wrote:

 Hello!

 A couple of things I've been working on lately are project governance
 issues as a TC member and also implementation of a new virtual
 networking alternative with a Neutron driver.  So, naturally I started
  thinking about how the Neutron driver code fits into OpenStack
 governance.

 Thanks for starting this conversation Russell.


 There are basically two areas with a lot of movement related to this
 issue.

 1) Project governance has moved to a big tent model [1].  The vast
 majority of projects that used to be in Stackforge are being folded in


I think the phrase 'vast majority' is misleading; there are still a lot of
projects on stackforge.


 to a larger definition of the OpenStack project.  Projects making this
 move meet the following criteria as being one of us:

 http://governance.openstack.org/reference/new-projects-requirements.html

 Official project teams are tracked in this file along with the git repos
 they are responsible for:


 http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

 which is also reflected here:

 http://governance.openstack.org/reference/projects/

 The TC has also been working through defining a system to help
 differentiate efforts by using a set of tags [4].  So far, we have
 tags describing the release handling for a repository, as well as a tag
 for team diversity.  We've also had a lot of discussion about tags to
 help describe maturity, but that is still a work in progress.


 2) In Neutron, some fairly significant good changes are being made to
 help scale the development process.  Advanced services were split out
 into their own repos [2].  Most of the plugin and driver code has also
 been split out into repos [3].

 In terms of project teams, the Neutron team is defined as owning the
 following repos:

   http://governance.openstack.org/reference/projects/neutron.html

  - openstack/neutron
  - openstack/neutron-fwaas
  - openstack/neutron-lbaas
  - openstack/neutron-vpnaas
  - openstack/neutron-specs
  - openstack/python-neutronclient

 The advanced services split is reflected by the fwaas, lbaas, and vpnaas
 repos.

 We also have a large set of repositories related to Neutron backend code:

   http://git.openstack.org/cgit/?q=stackforge%2Fnetworking

  - stackforge/networking-arista
  - stackforge/networking-bagpipe-l2
  - stackforge/networking-bgpvpn
  - stackforge/networking-bigswitch
  - stackforge/networking-brocade
  - stackforge/networking-cisco
  - stackforge/networking-edge-vpn
  - stackforge/networking-hyperv
  - stackforge/networking-ibm
  - stackforge/networking-l2gw
  - stackforge/networking-midonet
  - stackforge/networking-mlnx
  - stackforge/networking-nec
  - stackforge/networking-odl
  - stackforge/networking-ofagent
  - stackforge/networking-ovn
  - stackforge/networking-ovs-dpdk
  - stackforge/networking-plumgrid
  - stackforge/networking-portforwarding
  - stackforge/networking-vsphere

 Note that not all of these are equivalent.  This is just a list of
 stackforge/networking-*.

 In some cases there is a split between code in the Neutron tree and in
 this repo.  In those cases, a shim is in the Neutron tree, but most of
 the code is in the external repo.  It's also possible to have all of the
 code in the external repo.

 There's also a big range of maturity.  Some are quite mature and are
 already used in production.  networking-ovn as an example is quite new
 and being developed in parallel with OVN in the Open vSwitch project.


 So, my question is: Where should these repositories live in terms of
 OpenStack governance and project teams?

 Here are a few paths I think we could take, along with some of my
 initial thoughts on pros/cons.

 a) Adopt these as repositories under the Neutron project team.

 In this case, I would see them operating with their own review teams as
 they do today to avoid imposing additional load on the neutron-core or
 neutron-specs-core teams.  However, by being a part of the Neutron team,
 the backend team would submit to oversight by the Neutron PTL.

 Out of your options proposed, this seems like the most logical one to me.
 I don't really see this imposing a ton of strain on the existing core
 reviewer team, because we'll keep whatever core reviewer teams are already
 in the networking-foo projects.


 There are some other details to work out to ensure expectations are
 clearly set for everyone involved.  If this is the path that makes
 sense, we can work through those as a next step.

 Pros:
  + Seems to be the most natural first choice


Saying something is your first choice isn't a real benefit.  It implies
some sort of benefit, but I cannot tell what the benefit actually is.



 Cons:
  - A lot of changes have been made precisely because Neutron has gotten
 so big.  A single project team/PTL may not be able 

Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-27 Thread Joe Gordon
On Fri, Apr 24, 2015 at 5:19 PM, Zane Bitter zbit...@redhat.com wrote:

 On 24/04/15 20:00, Joe Gordon wrote:



 On Fri, Apr 24, 2015 at 4:35 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  Notification might be a good way to integrate with nova. Individual
  tenants might want to do things as VMs come up/down, etc. Right
  now, you need a privileged pipe into rabbit. Forwarding them to
  Zaqar, per-tenant queues could solve the problem.


 Right now you can poll the nova API. Or tenants can use any number of
 monitoring tools.  How is Zaqar better than the alternatives?


 So, a couple of points about that:

  1) Polling sucks.
  2) If a bunch of things are going to get polled, at least collect them
 together so there is *one* thing to optimise for massive polling load.
 (Zaqar is this thing - you have to poll it too atm.)
  3) Long-polling and WebSockets suck a lot less than polling. If you
 already collected all the polling in one place, it's really easy to make
 the switch as soon as you implement them in that one place.
  4) If you don't have a common place to poll, then you can't use the
 events as triggers for other services in OpenStack (without writing custom
 polling code for every endpoint in every API - which is pretty much what
 Heat does now, but that work doesn't extend automatically to Mistral,
 Congress, &c. in the way that Zaqar notifications could.)

 Also, APIs tend to only return the current status. You could miss events
 if you just poll the API, whereas if the events are dispatched to a durable
 queue and you just poll the queue for events, that problem goes away.
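
 To make the missed-events point concrete, here is a toy sketch - an
 in-memory deque standing in for a durable queue, nothing Zaqar-specific:

    import collections

    events = collections.deque()        # stands in for a durable queue

    def vm_reboot():
        # the producer records every state transition as an event
        events.append('REBOOT')
        events.append('ACTIVE')

    vm_reboot()

    # polling only the current status misses the reboot entirely:
    current_status = 'ACTIVE'
    print(current_status)               # ACTIVE

    # draining the queue sees every event:
    while events:
        print(events.popleft())         # REBOOT, then ACTIVE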



  FWIW, I think there are some really neat use cases for amazon SQS, that
 presumably Zaqar would fit as well. Cases such as
 https://aws.amazon.com/articles/1464


 Bingo, this is where it starts to get really interesting.


Instead of asking the community to come up with reasons to integrate with
Zaqar, I think it would be more effective if the Zaqar team came up with
one or two use cases they want to support that require integration with
other projects and go from there. Turn this abstract call for adoption into
a more narrow but concrete proposal for X and Y to integrate with Zaqar to
support a specific use case.



 cheers,
 Zane.


 Thanks,
 Kevin

 
  *From:* Joe Gordon [joe.gord...@gmail.com]
 *Sent:* Friday, April 24, 2015 4:02 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Zaqar] Call for adoption (or
 exclusion?)



  On Mon, Apr 20, 2015 at 5:54 AM, Flavio Percoco fla...@redhat.com wrote:

 Greetings,

 I'd like my first action as Zaqar's PTL to be based on
 reflections and
 transparency with regards to what our past has been, to what our
 present is and to what our future could be as a project and
 community.
 Therefore, I'm sending this call for adoption and support before
 taking other actions (also mentioned below).

 The summit is very close and the Zaqar team is looking forward
 to it.

 The upcoming summit represents an important opportunity for Zaqar
 to
 integrate with other projects. In the previous summits - since


 I get integration with Horizon etc. But to use the SQS/SNS analogy
  how would, say, Nova integrate with Zaqar?

 Icehouse's - we've been collecting feedback from the community.
 We've
 worked on addressing the many use-cases, we've worked on
 addressing
 the concerns raised by the community and we've also kept moving
 towards reaching the project's goals.

 As you all know, the project has gone through many ups and downs.
 We've had some failures in the past and we've also had
 successes, as
 a project and as a team. Nevertheless, we've got to the point
 where it
 doesn't make much sense to keep pushing new features to the
 project
 until it gains adoption. Therefore, I'd like to take advantage
 of the
 workshop slots and invite people from other projects to help
 us/guide
 us through a hacking session on their projects so we can help
 with the
  adoption. The current adoption of Zaqar consists of:

  - 1 company running it in production
 - 1 planning to do it soon
 - RDO support

 Unfortunately, the above is certainly not enough for a project to
 succeed and it makes the time and effort spent on the project not
 worth it. It's been more than 2 years since we kicked the
 project off
 and it's time for it to show some results. The current problem
 seems
 to be that many people want the project but no one wants

Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-27 Thread Joe Gordon
On Fri, Apr 24, 2015 at 5:05 PM, Zane Bitter zbit...@redhat.com wrote:

 On 24/04/15 19:02, Joe Gordon wrote:



  On Mon, Apr 20, 2015 at 5:54 AM, Flavio Percoco fla...@redhat.com wrote:

 Greetings,

 I'd like my first action as Zaqar's PTL to be based on reflections and
 transparency with regards to what our past has been, to what our
 present is and to what our future could be as a project and community.
 Therefore, I'm sending this call for adoption and support before
 taking other actions (also mentioned below).

 The summit is very close and the Zaqar team is looking forward to it.

 The upcoming summit represents an important opportunity for Zaqar to
 integrate with other projects. In the previous summits - since


 I get integration with Horizon etc. But to use the SQS/SNS analogy how
  would, say, Nova integrate with Zaqar?


 Speaking very generally, anything where it makes sense for Nova to tell
 the user - or, more importantly, the application - when something is
 happening. The cloud can't afford to be making synchronous calls to the
 client-side, and applications may not be able to afford missing the
 notifications, so a reliable, asynchronous transport like Zaqar is a good
 candidate.

 So examples might be:
  - Hey, your resize is done
  - Hey, your [re]build is done
  - Hey, your VM rebooted
  - Hey, your VM disappeared

 Now, this is not to presuppose that having Nova put messages directly into
 Zaqar is the correct design. It may be that it's better to have some other
 service (Ceilometer?) collect some or all of those notifications and handle
 putting them into Zaqar (though the reliability would be a concern).
 Certainly EC2 seems to funnel all this stuff to CloudWatch, although other
 services like S3, CloudFormation & Auto Scaling deliver notifications to
 SNS directly. There is some integration work either way though, to produce
 the notification.

 Obviously there's less integration to do for a project like Nova that only
 produces notifications than there would be for those that could actually
 consume notifications. Heat would certainly like to use these notifications
 to reduce the amount of polling we do of the APIs (and ditch it altogether
 if reliability is guaranteed). But if we can get both ends integrated then
 the *user* can start doing really interesting things like:


This is one of the bigger questions for me, as a nova developer. What would
integration look like from Nova's POV?  I am a little wary of adding yet
another API when we have so much trouble with our existing ones,
especially the notification-based API.



  - Hey Zaqar, give me a new queue/topic/whatever
  - Hey Mistral, run this workflow when you see a message on this topic
  - Hey Nova, send a message to this topic whenever my VM reboots

 Bam, user-defined workflow triggered on VM reboot. (Super easy to set up
 in a Heat template BTW ;)

 It gets even cooler when there are multiple producers and consumers:
 imagine that Ceilometer alarms and all other kinds of notifications in
 OpenStack are produced this way, and that SNS-style notifications, Heat
 autoscaling events and Mistral workflows can all be triggered by them. And
 of course if the logic available in the workflow isn't sufficient, the user
 can always insert their own conditioning logic running in a VM (future:
 container), since the flow is through a user-facing messaging system.

 I wrote a blog post earlier today about why all this is needed:

 http://www.zerobanana.com/archive/2015/04/24#a-vision-for-openstack

 tl;dr: many applications being written now and even more in the future
 will expect to be able to interact with their own infrastructure and will
 go to proprietary clouds if we don't provide an open source alternative.

 cheers,
 Zane.

  Icehouse's - we've been collecting feedback from the community. We've
 worked on addressing the many use-cases, we've worked on addressing
 the concerns raised by the community and we've also kept moving
 towards reaching the project's goals.

 As you all know, the project has gone through many ups and downs.
 We've had some failures in the past and we've also had successes, as
 a project and as a team. Nevertheless, we've got to the point where it
 doesn't make much sense to keep pushing new features to the project
 until it gains adoption. Therefore, I'd like to take advantage of the
 workshop slots and invite people from other projects to help us/guide
 us through a hacking session on their projects so we can help with the
  adoption. The current adoption of Zaqar consists of:

  - 1 company running it in production
 - 1 planning to do it soon
 - RDO support

 Unfortunately, the above is certainly not enough for a project to
 succeed and it makes the time and effort spent on the project not
 worth it. It's been more than 2 years since we kicked the project off

Re: [openstack-dev] Please stop reviewing code while asking questions

2015-04-24 Thread Joe Gordon
On Fri, Apr 24, 2015 at 11:16 AM, Julien Danjou jul...@danjou.info wrote:

 On Fri, Apr 24 2015, Joe Gordon wrote:

  When I get a -1 on one of my patches with a question, I personally treat
  it as a shortcoming of the commit message. Too often in the past I have
  looked at a file, and in trying to figure out why a line is there I do a
  git blame, only to see a useless commit message with me as the author.

 That's a thing that has been stated over and over again in this thread,
 and actually paraphrased from the first paragraph of my original email.
 It'd be cool if we could stop restating the obvious over and over again.



So you did, sorry.





 Could someone give me an example of how we are supposed to improve the
 patch or commit message when one gets a -1 with e.g. the question:
   Why do you use getattr(foo, bar, None)?


By calling them out in the review or on IRC, and explaining to them when
it's appropriate to use a -1.  I don't think it's safe to assume that a
significant number of the people who leave these -1s read every thread on
the ML.



 when the answer is "Well, otherwise it will raise an error and the code
 will fail" because the reviewer does not know how getattr() works.
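
 For anyone following along, the getattr() behaviour in question takes only
 a few lines to demonstrate:

    class Foo(object):
        pass

    foo = Foo()

    # with a default, a missing attribute is swallowed:
    print(getattr(foo, 'bar', None))    # prints: None

    # without a default, the same lookup raises:
    try:
        getattr(foo, 'bar')
    except AttributeError as exc:
        print(exc)                      # 'Foo' object has no attribute 'bar'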

 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-24 Thread Joe Gordon
On Fri, Apr 24, 2015 at 4:35 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  Notification might be a good way to integrate with nova. Individual
 tenants might want to do things as VMs come up/down, etc. Right now, you
 need a privileged pipe into rabbit. Forwarding them to Zaqar, per-tenant
 queues could solve the problem.


Right now you can poll the nova API. Or tenants can use any number of
monitoring tools.  How is Zaqar better than the alternatives?


FWIW, I think there are some really neat use cases for amazon SQS, that
presumably Zaqar would fit as well. Cases such as
https://aws.amazon.com/articles/1464



 Thanks,
 Kevin
  --
 *From:* Joe Gordon [joe.gord...@gmail.com]
 *Sent:* Friday, April 24, 2015 4:02 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)



 On Mon, Apr 20, 2015 at 5:54 AM, Flavio Percoco fla...@redhat.com wrote:

 Greetings,

 I'd like my first action as Zaqar's PTL to be based on reflections and
 transparency with regards to what our past has been, to what our
 present is and to what our future could be as a project and community.
 Therefore, I'm sending this call for adoption and support before
 taking other actions (also mentioned below).

 The summit is very close and the Zaqar team is looking forward to it.

 The upcoming summit represents an important opportunity for Zaqar to
 integrate with other projects. In the previous summits - since


  I get integration with Horizon etc. But to use the SQS/SNS analogy how
 would, say, Nova integrate with Zaqar?


 Icehouse's - we've been collecting feedback from the community. We've
 worked on addressing the many use-cases, we've worked on addressing
 the concerns raised by the community and we've also kept moving
 towards reaching the project's goals.

 As you all know, the project has gone through many ups and downs.
 We've had some failures in the past and we've also had successes, as
 a project and as a team. Nevertheless, we've got to the point where it
 doesn't make much sense to keep pushing new features to the project
 until it gains adoption. Therefore, I'd like to take advantage of the
 workshop slots and invite people from other projects to help us/guide
 us through a hacking session on their projects so we can help with the
  adoption. The current adoption of Zaqar consists of:

  - 1 company running it in production
 - 1 planning to do it soon
 - RDO support

 Unfortunately, the above is certainly not enough for a project to
 succeed and it makes the time and effort spent on the project not
 worth it. It's been more than 2 years since we kicked the project off
 and it's time for it to show some results. The current problem seems
 to be that many people want the project but no one wants to be the
 first in adopting Zaqar (which kind of invalidates the premises of the
 Big tent).

 In summary, this is a call for adoption before we call it a nice
 adventure and ask for the project to be excluded from the OpenStack
 organization based on the lack of adoption and contributions.

 If you think it's worth it, speak up. Either way, thanks for the
 support and for reading thus far.

 On behalf of the Zaqar team,
 Flavio

 --
 @flaper87
 Flavio Percoco

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Add config option for real deletes instead of soft-deletes

2015-04-24 Thread Joe Gordon
On Tue, Apr 21, 2015 at 2:42 PM, Artom Lifshitz alifs...@redhat.com wrote:

 Hello,

 I'd like to gauge acceptance of introducing a feature that would give
 operators
 a config option to perform real database deletes instead of soft deletes.

 There's definitely a need for *something* that cleans up the database.
 There
 have been a few attempts at a DB purge engine [1][2][3][4][5], and
 archiving to
 shadow tables has been merged [6] (though that currently has some issues
 [7]).

 DB archiving notwithstanding, the general response to operators when they
 mention the database becoming too big seems to be "DIY cleanup".

 I would like to propose a different approach: add a config option that
 turns
 soft-deletes into real deletes, and start telling operators "if you turn
 this on, it's DIY backups".

 Would something like that be acceptable and feasible? I'm ready to put in
 the
 work to implement this, however searching the mailing list indicates that
 it
 would be somewhere between non trivial and impossible [8]. Before I start,
 I
 would like some confidence that it's closer to the former than the latter
 :)


Acceptable as a deployer option: Yes
Feasible for all tables: Maybe.  The first steps would be:

1) Make a list of all the times we read soft-deleted data and note which
tables they impact.
2) Determine if making soft delete optional on the tables impacted by
read_deleted=True is useful. If so, that would be an easy win.
3) Figure out how to remove the read_deleted=True where used.
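
To make the proposal concrete, a minimal SQLAlchemy sketch of what such a
deployer option could look like - illustrative only, not Nova's actual
models or config, and use_hard_deletes is a hypothetical option name:

    import datetime

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = sa.Column(sa.Integer, primary_key=True)
        # nova's soft-delete convention: deleted == 0 while the row is
        # live, deleted == id once it has been soft-deleted
        deleted = sa.Column(sa.Integer, default=0)
        deleted_at = sa.Column(sa.DateTime, nullable=True)

    def delete_instance(session, instance, use_hard_deletes=False):
        if use_hard_deletes:
            # real delete: the row is gone, backups are up to the deployer
            session.delete(instance)
        else:
            # soft delete: the row stays and read_deleted can still see it
            instance.deleted = instance.id
            instance.deleted_at = datetime.datetime.utcnow()
        session.commit()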


 Cheers!

 [1] https://blueprints.launchpad.net/nova/+spec/db-purge-engine
 [2] https://blueprints.launchpad.net/nova/+spec/db-purge2
 [3] https://blueprints.launchpad.net/nova/+spec/remove-db-archiving
 [4] https://blueprints.launchpad.net/nova/+spec/database-purge
 [5] https://blueprints.launchpad.net/nova/+spec/db-archiving
 [6] https://review.openstack.org/#/c/18493/
 [7] https://review.openstack.org/#/c/109201/
 [8]
 http://lists.openstack.org/pipermail/openstack-operators/2014-November/005591.html

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][cinder] Tests that try to detach volumes in use

2015-04-24 Thread Joe Gordon
On Fri, Apr 24, 2015 at 8:40 AM, Peter Penchev openstack-...@storpool.com
wrote:

 Hi,

 There are a couple of Tempest volume tests, like
 test_rescued_vm_detach_volume or test_list_get_volume_attachments,
 that either sometimes[0] or always attempt to detach a volume from a
 running instance while the instance could still be keeping it open.
 Unfortunately, this is not completely compatible with the StorPool
 distributed storage driver - in a StorPool cluster, a volume may only
 be detached from a client host (the Nova hypervisor) if there are no
 processes running on the host (e.g. qemu) that keep the volume open.
 This came about as a result of a series of Linux kernel crashes that
 we observed during our testing when a volume containing a filesystem
 was detached while the kernel's filesystem driver didn't expect it.

 Right now, our driver for attaching StorPool volumes (defined in
 Cinder) to Nova instances (proposed in

 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/libvirt-storpool-volume-attach.html
 but it didn't get enough +1/+2's in time for Kilo RC2) tries to detach
 the volume, then waits for a couple of seconds, hoping that any
 processes would have been notified to let it go, then tries again,
 then fails.  Of course, StorPool has a "force detach" option that
 could be used in that case; the problem there is that it might indeed
 lead to some trouble for the instances that will have the volume
 pulled out from under their tiny instance legs.  This could go in the
 "let the operator handle it" category - if we're detaching a volume,
 this supposedly means that the filesystem has already been unmounted
 within the instance... is this a sensible approach?  Should we teach
 our driver to forcibly detach the volume if the second polite attempt
 still fails?
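
 For reference, the retry logic described above boils down to something
 like the following sketch - client and its detach() method are
 hypothetical stand-ins for the real StorPool API, not actual driver code:

    import time

    def detach_volume(client, volume_id, attempts=2, wait=2.0, force=False):
        for _ in range(attempts):
            if client.detach(volume_id):    # polite detach attempt
                return
            time.sleep(wait)                # let qemu etc. release the volume
        if not force:
            raise RuntimeError('volume %s is still held open' % volume_id)
        # last resort: may yank the volume out from under a running instance
        client.detach(volume_id, force=True)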


IMHO, the number one thing to keep in mind when answering this is to keep
the user experience backend agnostic. A user should never have to know what
driver a cloud is using or be forced to do something differently because of
it.

So if the standard is to allow detaching volumes in use in a way that
doesn't lead to trouble for the instances using the volume, your driver
should do that. If the standard is that doing this can lead to trouble for
instances but it is still allowed, then adding a forcible detach is
sufficient.



 G'luck,
 Peter

 [0] The "sometimes" part: it seems that in some tests, like
 test_list_get_volume_attachments, the order of the "detach volume" and
 "stop the running instance" actions is random, dependent on the order
 in which the Python test framework will execute the cleanup handlers.
 Of course, it might be that I'm misunderstanding something and it is
 completely deterministic and there is another issue at hand...

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please stop reviewing code while asking questions

2015-04-24 Thread Joe Gordon
On Fri, Apr 24, 2015 at 10:31 AM, Doug Hellmann d...@doughellmann.com
wrote:

 Excerpts from Amrith Kumar's message of 2015-04-24 15:02:01 +:
  There have been many replies on this thread, I'll just reply to this one
 rather than trying to reply piecemeal.
 
  Doug, there's asking a question because something is unclear (implying
 that the code is needlessly complex, missing a comment, is unintuitive,
 ...). I believe that this most definitely warrants a -1 as you describe
 because it is indicative that the code, once submitted, would be hard for a
 future reader to follow.
 
  In my mind, the yardstick has been, and continues to be this; would a
 reasonable person believe that the patch set as presented should be allowed
 to merge.
 
  - If I can answer that question unambiguously, and unequivocally with a
 NO, then I will score the patch with a negative score.
 
  - If I can answer that question unambiguously, and unequivocally with a
 YES, then I will score the patch with a positive score.
 
  - For anything else, I'll use a 0.

 I've run into too many cases where a trivial change has an
 unintended consequence, so I suppose I'm more conservative with my
 code reviews. I don't use 0 very often at all, not because of any
 stats counting, but because I don't assume that conveys any information
 to the author or other reviewers. I vote the way I mean for my
 comments to be taken, so I use -1 to indicate that more work is
 needed, even if that work is just explaining something better or
 demonstrating that an edge case is going to be handled.


When I get a -1 on one of my patches with a question, I personally treat it
as a shortcoming of the commit message. Too often in the past I have looked
at a file, and in trying to figure out why a line is there I do a git
blame, only to see a useless commit message with me as the author.


 
  If there was a patch to make reviewstats count +0, I'd support that. But
 I would also like to understand what the differences are between
 reviewstats and stackalytics. Ideally I would like stackalytics to count
 0's as well. If you can take the time to review a patch, you should get
 credit for it (no matter what tool is used to count).
 
  I would support changes to both reviewstats and stackalytics to do the
 following.

 I'm not sure where all of the interest in stats counting comes from, but
 it's definitely not a motivation of my own behavior.

 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-24 Thread Joe Gordon
On Fri, Apr 24, 2015 at 12:36 AM, Victor Stinner vstin...@redhat.com
wrote:

 Hi,

 I wrote my spec to Port Nova to Python 3:
 https://review.openstack.org/#/c/176868/

  I squashed all my commits into a single commit of my draft port and I
 pushed it at:
 
 https://github.com/haypo/nova/commit/bad54bc2b278c7c7cb7fa6cc73d03c70138bd89d
 
  I like how the sha1 starts with 'bad'

 Ah ah, I didn't notice that. I would prefer a "python" prefix, but it's
 not possible.

  Overall that is a pretty small patch.

 Cool.

  My general concern is having to manually review new code for python3
 compliance.

 Do you prefer a single very large patch (as the one I posted above) or
 multiple small patches fixing similar issues?


  If this will take more than a week or two to make a big project python3
 compat (during a virtual sprint), it would be good to have some tooling in
 place to make sure we don't add a lot more burden on reviewers to make sure
 new patches are python3 compatible by hand.

 I tried to write a detailed plan in my spec. Until tox -e py34 passes, I
 would prefer not to touch nova gates nor add any Python 3 checks.


Great, I'll follow up on that spec.



 Victor

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-23 Thread Joe Gordon
On Thu, Apr 23, 2015 at 12:10 AM, Victor Stinner vstin...@redhat.com
wrote:

 Hi,

  How invasive would the port to python3 be?

 I squashed all my commits into a single commit of my draft port and I
 pushed it at:

 https://github.com/haypo/nova/commit/bad54bc2b278c7c7cb7fa6cc73d03c70138bd89d


I like how the sha1 starts with 'bad'

Overall that is a pretty small patch.



 As announced, changes are boring, just obvious Python2/Python3 issues:

 - strip L from long integer literals: 123L => 123
 - replace dict.iteritems() with six.iteritems(dict)
 - replace list.sort(cmp_func) with list.sort(key=key_func)
 - replace raise exc_info[0], exc_info[1], exc_info[2] with
 six.reraise(*exc_info)
 - moved functions / modules

   * get http.client, urllib.request and other stdlib modules from
 six.moves since they moved between Python 2 and Python 3
   * get itertools.izip/zip_longest from six.moves
   * replace unichr() with six.unichr()

 - replace filter(func, data) with [item for item in data if func(item)]
 - replace unicode() with six.text_type
 - replace (int, long) with six.integer_types
 - replace file() with open()
 - replace StringIO() with six.StringIO()
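
 A small before/after sketch of a few of these idioms, runnable unchanged
 on Python 2 and Python 3 - nothing nova-specific, just the mechanical six
 translations:

    import sys

    import six
    from six.moves.urllib import request  # stdlib module that moved

    d = {'a': 1, 'b': 2}
    for key, value in six.iteritems(d):        # was: d.iteritems()
        print(key, value)

    assert isinstance(123, six.integer_types)  # was: (int, long); 123L is gone
    text = six.text_type('hello')              # was: unicode('hello')
    buf = six.StringIO()                       # was: StringIO.StringIO()

    try:
        try:
            raise ValueError('boom')
        except ValueError:
            # was: raise exc_info[0], exc_info[1], exc_info[2]
            six.reraise(*sys.exc_info())
    except ValueError:
        pass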

 These changes are not enough to port nova to Python 3. But they are
 required to be able to find next Python 3 bugs.

 Reminder: Python 2 compatibility is kept! There is no reason right now to
 drop Python 2 support; it would be a pain to upgrade!


  Would it be easier to have a python3 migration sprint and rip the band
 aid off instead of dragging it out and having partial python3 support for a
 whole cycle?

 Do you mean a physical meeting? Or focus on Python 3 during a short period
 (review/merge all Python 3 patches)?


Focus on Python 3 during a short period.


 A single week may not be enough to fix all Python 3 issues. I prefer to
 submit changes smoothly during the Liberty cycle.



My general concern is having to manually review new code for python3
compliance.

If this will take more than a week or two to make a big project python3
compat (during a virtual sprint), it would be good to have some tooling in
place to make sure we don't add a lot more burden on reviewers to make sure
new patches are python3 compatible by hand.

 In general what is the least painful way to add python3 support for a
 very large python project?

 Send patches and make incremental changes, as any other change made in
 OpenStack.

 Victor

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-22 Thread Joe Gordon
On Wed, Apr 22, 2015 at 3:15 AM, Victor Stinner vstin...@redhat.com wrote:

 Hi,

 It's moving fast. I'm currently working on porting remaining libraries to
 prepare my spec for nova.


Great, I didn't realize how close all the dependencies were.



  oslo.db -- looks like it is almost there

 I don't know the status of oslo.db support for Python 3. I guess that it
 already works on Python 3; it's just a matter of running tests with MySQL
 (MySQL-Python blocks again here).

  oslo.messaging -- same

 Two of my Python 3 changes were already merged last week. Three
 remaining Python 3 changes are almost there, they are mostly blocked by the
 freeze on requirements, but changes are already approved. The blocker is
 the bump of eventlet to 0.17.3:
 https://review.openstack.org/#/c/172132/

  paste -- almost there

 I fixed that last night (!) with the release of Paste 2.0. It's the first
 Paste release since 2010. Paste 2.0 has experimental support for Python
 3. It should be enough for nova, according to my quick tests in my local
 nova repository where I fixed most obvious Python 3 issues. Ian Bicking
 allowed me to publish new versions of Paste.

  sqlalchemy-migrate -- almost there

 It already supports Python 3, so it doesn't block.

  suds -- with the suds fork shouldn't be too hard

 You should just switch to the fork. I contacted the author of suds-jurko:
 he plans to rename his package from suds-jurko to suds. So his fork
 would be a simple drop-in which would not require any change in OpenStack
 except in requirements.

  websockify -- unknown

 It announces Python 3.3 and 3.4 support and has a py34 env in tox.ini. I
 didn't check yet if it supports Python 3, but since the project is active
 (last commit 12 hours ago), it shouldn't be hard to fix issues quickly.

  libvirt-python -- unknown

 Oh, I missed this dependency. It's my next target :-)

  mysql-python -- alternative looks viable.

 Yes, it's possible to replace it with PyMySQL. Does anyone know the status
 of this switch?
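
 For anyone experimenting: since SQLAlchemy already ships a pymysql
 dialect, the switch should mostly be a connection-URL change - a sketch,
 with made-up credentials:

    from sqlalchemy import create_engine

    # old, backed by MySQL-Python (Python 2 only):
    #   engine = create_engine('mysql://nova:secret@127.0.0.1/nova')

    # new, backed by PyMySQL (pure Python, works on Python 3):
    engine = create_engine('mysql+pymysql://nova:secret@127.0.0.1/nova')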

  Based on there being two unknowns, and a lot of dependencies that are
 just almost there, and a few we may want to migrate off of, I was assuming
 addressing those issues would make it hard for us to make nova python3
 compatible for Liberty.

 My plan for Liberty is not to fully support Python 3 in nova, but only to
 start the port (well, to continue it, to be exact). The idea is to fix
 syntax errors and obvious issues like dict.iteritems() => six.iteritems(dict),
 then more complex changes. Maybe it's possible to finish the port in a
 cycle, but it really depends on the time taken to merge patches.


How invasive would the port to python3 be? Would it be easier to have a
python3 migration sprint and rip the band aid off instead of dragging it
out and having partial python3 support for a whole cycle?

In general what is the least painful way to add python3 support for a very
large python project?


 I started to port nova in my local repository. Some unit tests already
 pass.

 nova already uses six in many places, so it's not like we really start
 from scratch; the port is already an ongoing process.

 Victor

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-22 Thread Joe Gordon
On Apr 16, 2015 3:58 PM, Geoff Arnold ge...@geoffarnold.com wrote:

 Joe: you have identified many of the challenges of trying to work with
multiple OpenStack clouds from different providers with different
configurations, resources, etc. Nevertheless, people are doing it, and
doing so successfully. (I know several teams that are running across
multiple public and private clouds.)

Doing so explicitly is very different from doing so implicitly.  With your
proposal, will the end consumer be aware of which underlying provider they
are using?

Your proposal here is pretty light on details on what this looks like for
each persona involved (end user, reseller, cloud provider etc.)

 Packaging solutions like Docker may help with some of the low-level
compatibility issues.


 This proposal is intended to remove one source of friction. There’s a lot
more to be done. One interesting avenue for research is going to be the
development of a virtual region metadata schema that will allow a tenant
(or a broker) to determine the characteristics of virtual regions. (Such a
model might be a useful complement to the RefStack work.)

 Geoff


 On Apr 16, 2015, at 3:00 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Thu, Apr 16, 2015 at 12:16 AM, Geoff Arnold ge...@geoffarnold.com
wrote:

 I’ve discussed this with the Keystone team, especially the Reseller
folks, but not as deeply as we need to.

 The biggest challenge that I see with doing this inside any existing
project is the Aggregator system. It’s an independent deployment that
doesn’t include any of the core OpenStack IaaS services - there’s no Nova,
no networking (Nova or Neutron), no Glance, no Cinder. It’s just Horizon,
Keystone, and a bunch of orchestration logic to wire up the virtual
regions. Just assembling the bits into a deployable and testable system is
going to be significantly different from a regular OpenStack cloud. Even
though OpenStack is composed of relatively independent services, there’s an
assumed context which affects just about everything. I really wouldn’t ask
Keystone to take on the responsibility for such a thing. Better to build it
in Stackforge, get some experience with it, and figure out where it lives
later on.

 In spite of all that, we believe that this belongs in the “big tent”
OpenStack, because it builds on existing OpenStack component services, and
it’s value depends on interoperability. If you deploy the Virtual Region
service as part of your OpenStack cloud, any Aggregator should be able to
re-present your virtual regions to its users (subject to obvious security
and operational policies). We’ve used the Reseller use case to describe the
workflows, but there are a number of equally important use cases for this
architecture.


 'interoperability' is where I can see a lot of issues arising. If I am
using a reseller with regions from two different providers that are
configured even slightly differently, using the two regions interchangeably
will become exceedingly difficult quickly.  There are many cases where the
same API when powered by different drivers and slightly different
configurations result in very different end user behavior.  A few example
issues:

 * Glance images maintained by the cloud provider would be different
across providers.
 * Policy files dictating what API calls a given user can use can differ
across providers.
 * Network models. There is no single network model for OpenStack.
 * CPU performance. OpenStack has no way of saying 1VCPU in provider X is
equivalent to 1.5 VCPUs under provider Y.
  * Config drive vs. metadata service.
 * Those are just a few issues I can think of off the top of my head but
there are many many more.


 I can see this model working for only the simplest of use cases.
Maintaining a cohesive experience across multiple providers who may not be
working together is very difficult. But perhaps I am missing something.




 Geoff

 On Apr 15, 2015, at 10:03 PM, Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

 Sounds like a very interesting idea.

  Have you talked to the keystone folks?

  I would do this work in the keystone project itself (just a separate
daemon).

 This still looks like identity management (federated, but identity)

 I know the burden of working with a mainstream project could be
higher, but benefits
 are also higher: it becomes more useful (it becomes automatically
available for everyone)
 and also passes through the review process of the general keystone
contributors, thus
  getting more robust code.


 Best regards,
 Miguel

 On 16/4/2015, at 6:24, Geoff Arnold ge...@geoffarnold.com wrote:

 Yeah, we’ve taken account of:

https://github.com/openstack/keystone-specs/blob/master/specs/juno/keystone-to-keystone-federation.rst

http://blog.rodrigods.com/playing-with-keystone-to-keystone-federation/
 http://docs.openstack.org/developer/keystone/configure_federation.html

 One of the use cases we’re wrestling with requires fairly strong
anonymization: if user

Re: [openstack-dev] [OpenStack-Infra][cinder] Could you please re-consider Oracle ZFSSA iSCSI Driver

2015-04-21 Thread Joe Gordon
On Tue, Apr 21, 2015 at 1:18 PM, Diem Tran diem.t...@oracle.com wrote:

  Hi Duncan,

  We totally understand that CI is not a box-ticking exercise; it actually
 serves a purpose, and we are fully on board with the need for CI. We have
 resources monitoring the CI results and handling failures on daily basis.


I still don't see any successful runs:

http://paste.openstack.org/show/205029/



  As far as the failures go, we have resolved all issues related to our
  drivers that occurred during the weekend (hence the failures). My
  understanding is that sometimes, due to instability of devstack, there
  might be intermittent failures not related to our drivers at all, such as
  this:
 https://review.openstack.org/#/c/168177/

 Thanks,
 Diem.


 On 04/21/2015 03:40 PM, Duncan Thomas wrote:

 Can you please investigate and comment here on this thread as to why your
 CI is failing for all patches? If this is not resolved, your driver will
 not be re-added, and ignoring requests to investigate CI failures will
 increase the chances of your driver being removed in future, without
  notice. Setting up a CI system is not a box-ticking exercise; it requires
 monitoring and maintenance. If that is not possible then you are not yet in
 a position to be added back into cinder.
 On 21 Apr 2015 19:32, Diem Tran diem.t...@oracle.com wrote:


 On 04/21/2015 01:01 PM, Mike Perez wrote:

 On 09:57 Apr 21, Mike Perez wrote:

 On 15:47 Apr 20, Diem Tran wrote:

 Hi Mike,

 Oracle ZFSSA iSCSI CI is now reporting test results. It is configured
 to run against the ZFSSA iSCSI driver. You can see the results here:
 https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z

 Below are some patchsets that the CI reports:
 https://review.openstack.org/#/c/168424/
 https://review.openstack.org/#/c/168419/
 https://review.openstack.org/#/c/175247/
 https://review.openstack.org/#/c/175077/
 https://review.openstack.org/#/c/163706/


 I would like to kindly request you and the core team to get the
 Oracle ZFSSA iSCSI driver re-integrated back to the Cinder code base.
 If there is anything else you need from the CI and the driver, please
 do let me know.

 This was done on 4/8:

 https://review.openstack.org/#/c/170770/

  My mistake, this was only the NFS driver. The window to have drivers
  readded in Kilo has long passed. Please see:

 http://lists.openstack.org/pipermail/openstack-dev/2015-March/059990.html

 This will have to be readded in Liberty only at this point.


 Thank you for your reply. Could you please let me know the procedure
 needed for the driver to be readded to Liberty? Specifically, will you be
  the one who uploads the revert patchset, or is it the driver maintainer's
 responsibility?

 Diem.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-21 Thread Joe Gordon
On Fri, Apr 17, 2015 at 1:22 AM, Victor Stinner vstin...@redhat.com wrote:

  For the full list, see the wiki page:
  https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects

  Thanks for updating the wiki page that is a very useful list.
  From the looks of things, it seems like nova getting Python3 support in
 Liberty is not going to happen.



I based this on the wiki, but maybe I am wrong.

remaining libraries for nova:
oslo.db -- looks like it is almost there
oslo.messaging  -- same
paste -- almost there
sqlalchemy-migrate -- almost there
suds -- with the suds fork shouldn't be too hard
websockify -- unknown
libvirt-python -- unknown
mysql-python  -- alternative looks viable.

Based on there being two unknowns, and a lot of dependencies that are just
almost there, and a few we may want to migrate off of, I was assuming
addressing those issues would make it hard for us to make nova python3
compatible for Liberty.



 Why? I plan to work on porting nova to Python 3. I proposed a nova session
 on Python 3 at the next OpenStack Summit at Vancouver. I plan to write a
 spec too.

 I'm not aware of any real blocker for nova.

  What are your thoughts on how to tackle sqlalchemy-migrate? It looks
 like that is a blocker for several projects. And something I think we have
 wanted to move off of for some time now.

 I just checked sqlalchemy-migrate. The README and the documentation are
 completly outdated, but the project is very active: latest commit one month
 ago and latest release (0.9.6) one month ago. There are py33 and py34
 environments and tests pass on Python 3.3 and Python 3.4! I didn't check
 yet, but I guess that sqlalchemy-migrate 0.9.6 already works on Python 3.
 Python 3 classifiers are just missing in setup.cfg.

 I sent patches to update the doc, to add Python 3 classifiers and to
 upgrade requirements. The project moved to stackforge; reviews are at
 review.openstack.org:


 https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z

 The wiki page said that scripttest and ibm-db-sa were not Python 3
 compatible. It's no longer true: scripttest is Python 3 compatible, and
 there is ibm-db-sa-py3 which is Python 3 compatible.

 I updated the wiki page for sqlalchemy-migrate.

 Victor

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][releases] upcoming library releases to unfreeze requirements in master

2015-04-21 Thread Joe Gordon
On Tue, Apr 21, 2015 at 1:53 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 4/21/2015 2:44 PM, Doug Hellmann wrote:

 Excerpts from Doug Hellmann's message of 2015-04-21 13:11:09 -0400:

 I'm working on releasing a *bunch* of libraries, including clients, from
 their master branches so we can thaw the requirements list for the
 liberty cycle. As with any big operation, this may be disruptive. I
 apologize in advance if it is, but we cannot thaw the requirements
 without making the releases so we need them all.

 Here's the full list, in the form of the release script I am running,
 in case you start seeing issues and want to check if you were
 affected:


 OK, this is all done. I have verified that all of the libraries are on
 PyPI and I have sent the release notes (sometimes several copies, at no
 extra charge to you -- seriously, sorry about the noise).

 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 And the gate is wedged. :)

 https://bugs.launchpad.net/openstack-gate/+bug/1446847


Double wedged:

https://bugs.launchpad.net/openstack-gate/+bug/1446882


 --

 Thanks,

 Matt Riedemann



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][releases] upcoming library releases to unfreeze requirements in master

2015-04-21 Thread Joe Gordon
On Tue, Apr 21, 2015 at 4:35 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, Apr 21, 2015 at 1:53 PM, Matt Riedemann 
 mrie...@linux.vnet.ibm.com wrote:



 On 4/21/2015 2:44 PM, Doug Hellmann wrote:

 Excerpts from Doug Hellmann's message of 2015-04-21 13:11:09 -0400:

 I'm working on releasing a *bunch* of libraries, including clients, from
 their master branches so we can thaw the requirements list for the
 liberty cycle. As with any big operation, this may be disruptive. I
 apologize in advance if it is, but we cannot thaw the requirements
 without making the releases so we need them all.

 Here's the full list, in the form of the release script I am running,
 in case you start seeing issues and want to check if you were
 affected:


 OK, this is all done. I have verified that all of the libraries are on
 PyPI and I have sent the release notes (sometimes several copies, at no
 extra charge to you -- seriously, sorry about the noise).

 Doug


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 And the gate is wedged. :)

 https://bugs.launchpad.net/openstack-gate/+bug/1446847


 Double wedged:

 https://bugs.launchpad.net/openstack-gate/+bug/1446882



Fixed, now we are back to being just wedged once.


 --

 Thanks,

 Matt Riedemann



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-21 Thread Joe Gordon
On Tue, Apr 21, 2015 at 1:37 PM, Ian Cordasco ian.corda...@rackspace.com
wrote:

 On 4/16/15, 17:54, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Joe Gordon's message of 2015-04-16 15:15:01 -0700:
  On Fri, Apr 10, 2015 at 4:01 AM, Victor Stinner vstin...@redhat.com
 wrote:
 
https://wiki.openstack.org/wiki/Python3#Dependencies appears to be
   fairly out of date.
  
   You're right. I updated this wiki page. In practice, much more
 OpenStack
   clients, Common Libraries and Development Tools are already Python 3
   compatible. I added the link to my pull request for Oslo Messaging.
  
It would be nice to get a better sense of what the remaining
 libraries
   to port over are before the summit so we can start planning how to do
 the
   python34 migration.
  
   I checked quickly. There are small libraries like pyEClib required by
   Swift, but the major blocker libraries are: MySQL-Python, suds,
 Paste. For
   oslo.db, it's already Python 3 compatible no?
  
  
   * MySQL-Python
  
   MySQL-Python doesn't look to be active (last commit in january 2014).
   There are multiple Python 3 pending pull requests:
   https://github.com/farcepest/MySQLdb1/pulls
  
   Mike Bayer is evaluating PyMySQL which is Python 3 compatible:
   https://wiki.openstack.org/wiki/PyMySQL_evaluation
  
   See also https://github.com/farcepest/moist (is it alive? is it
 Python 3
   compatible?)
  
  
   * suds
  
   There is https://bitbucket.org/jurko/suds : a fork compatible with
 Python
   3. Global requirements contain this comment:
  
   # NOTE(dims): suds is not python 3.x compatible, suds-jurko is a fork
 that
   # works with py3x. oslo.vmware would convert to suds-jurko first then
 nova
   # and cinder would follow. suds should be remove immediately once
 those
   # projects move to suds-jurko for all jobs.
  
  
   * Paste
  
   I already fixed Python 3 compatibility issues and my changes were
 merged,
   but there is no release including my fixes yet :-(
  
   I heard that Paste is completely outdated and should be replaced. Ok,
 but
   in practice it's still used and not Python 3 compatible.
  
   Workaround: use the development (git) version of Paste.
  
  
   For the full list, see the wiki page:
   https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects
 
 
  Thanks for updating the wiki page that is a very useful list.
 
  From the looks of things, it seems like nova getting Python3 support in
  Liberty is not going to happen. But we can make good progress in
  dependencies sorted out. By fixing the dependencies and switching a few
 out
  for better ones.
 
  What are your thoughts on how to tackle  sqlalchemy-migrate? It looks
 like
  that is a blocker for several projects. And something I think we have
  wanted to move off of for some time now.
 
 
 IMHO it is quite a bit easier to port something to python 3 than to
 move off of it entirely. I'd say it's worth it for forward progress to
 try and port sqlalchemy-migrate, even if that means the effort becomes
 a sunk cost in a year.

 Also, isn’t sqlalchemy-migrate something we currently maintain (or a group
 of OpenStack developers maintain for OpenStack)? Can’t we work with them to
 add support for Python 3?


yup https://github.com/stackforge/sqlalchemy-migrate. The better question
is: 'is it worth adding support in versus moving over to alembic, since we
want to do that anyway?' I don't personally have an answer for that.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][cinder] Could you please re-consider Oracle ZFSSA iSCSI Driver

2015-04-20 Thread Joe Gordon
On Mon, Apr 20, 2015 at 1:18 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Hi Diem

 It appears the CI failed to pass on any of the 5 reviews you linked to.
 Are there any examples of the CI passing?


I don't see any passing runs in the last 20 comments left by 'Oracle ZFSSA
CI'

http://paste.openstack.org/show/204933/  (generated with
https://github.com/jogo/lastcomment)


 On 20 April 2015 at 20:47, Diem Tran diem.t...@oracle.com wrote:

 Hi Mike,

 Oracle ZFSSA iSCSI CI is now reporting test results. It is configured to
 run against the ZFSSA iSCSI driver. You can see the results here:
 https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z

 Below are some patchsets that the CI reports:
 https://review.openstack.org/#/c/168424/
 https://review.openstack.org/#/c/168419/
 https://review.openstack.org/#/c/175247/
 https://review.openstack.org/#/c/175077/
 https://review.openstack.org/#/c/163706/


  I would like to kindly request that you and the core team get the Oracle
  ZFSSA iSCSI driver re-integrated into the Cinder code base. If there is
 anything else you need from the CI and the driver, please do let me know.

 Thank you,
 Diem.





 --
 Duncan Thomas



Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-16 Thread Joe Gordon
On Thu, Apr 16, 2015 at 12:16 AM, Geoff Arnold ge...@geoffarnold.com
wrote:

 I’ve discussed this with the Keystone team, especially the Reseller folks,
 but not as deeply as we need to.

 The biggest challenge that I see with doing this inside any existing
 project is the Aggregator system. It’s an independent deployment that
 doesn’t include any of the core OpenStack IaaS services - there’s no Nova,
 no networking (Nova or Neutron), no Glance, no Cinder. It’s just Horizon,
 Keystone, and a bunch of orchestration logic to wire up the virtual
 regions. Just assembling the bits into a deployable and testable system is
 going to be significantly different from a regular OpenStack cloud. Even
 though OpenStack is composed of relatively independent services, there’s an
 assumed context which affects just about everything. I really wouldn’t ask
 Keystone to take on the responsibility for such a thing. Better to build it
 in Stackforge, get some experience with it, and figure out where it lives
 later on.

 In spite of all that, we believe that this belongs in the “big tent”
 OpenStack, because it builds on existing OpenStack component services, and
 its value depends on interoperability. If you deploy the Virtual Region
 service as part of your OpenStack cloud, any Aggregator should be able to
 re-present your virtual regions to its users (subject to obvious security
 and operational policies). We’ve used the Reseller use case to describe the
 workflows, but there are a number of equally important use cases for this
 architecture.


'interoperability' is where I can see a lot of issues arising. If I am
using a reseller with regions from two different providers that are
configured even slightly differently, using the two regions interchangeably
will quickly become exceedingly difficult. There are many cases where the
same API, when powered by different drivers and slightly different
configurations, results in very different end-user behavior. A few example
issues:

* Glance images maintained by the cloud provider would be different across
providers.
* Policy files dictating what API calls a given user can use can differ
across providers.
* Network models. There is no single network model for OpenStack.
* CPU performance. OpenStack has no way of saying 1VCPU in provider X is
equivalent to 1.5 VCPUs under provider Y.
* Config drive vs. metadata service.
Those are just a few issues I can think of off the top of my head, but
there are many, many more.


I can see this model working for only the simplest of use cases.
Maintaining a cohesive experience across multiple providers who may not be
working together is very difficult. But perhaps I am missing something.




 Geoff

 On Apr 15, 2015, at 10:03 PM, Miguel Angel Ajo Pelayo 
 mangel...@redhat.com wrote:

 Sounds like a very interesting idea.

 Have you talked to the keystone folks?,

  I would do this work in the keystone project itself (just a separate
  daemon).

 This still looks like identity management (federated, but identity)

  I know the burden of working with a mainstream project could be higher,
  but the benefits are also higher: it becomes more useful (automatically
  available for everyone) and also passes through the review process of the
  general keystone contributors, thus resulting in more robust code.


 Best regards,
 Miguel

 On 16/4/2015, at 6:24, Geoff Arnold ge...@geoffarnold.com wrote:

 Yeah, we’ve taken account of:

 https://github.com/openstack/keystone-specs/blob/master/specs/juno/keystone-to-keystone-federation.rst
 http://blog.rodrigods.com/playing-with-keystone-to-keystone-federation/
 http://docs.openstack.org/developer/keystone/configure_federation.html

 One of the use cases we’re wrestling with requires fairly strong
 anonymization: if user A purchases IaaS services from reseller B, who
 sources those services from service provider C, nobody at C (OpenStack
 admin, root on any box) should be able to identify that A is consuming
 resources; all that they can see is that the requests are coming from B.
 It’s unclear if we should defer this requirement to a future version, or
 whether there’s something we need to (or can) do now.

 The main focus of Cloud Service Federation is managing the life cycle of
 virtual regions and service chaining. It builds on the Keystone federated
 identity work over the last two cycles, but identity is only part of the
 problem. However, I recognize that we do have an issue with terminology.
 For a lot of people, “federation” is simply interpreted as “identity
 federation”. If there’s a better term than “cloud service federation”, I’d
 love to hear it. (The Cisco term “Intercloud” is accurate, but probably
 inappropriate!)

 Geoff

 On Apr 15, 2015, at 7:07 PM, Adam Young ayo...@redhat.com wrote:

 On 04/15/2015 04:23 PM, Geoff Arnold wrote:

 That’s the basic idea.  Now, if you’re a reseller of cloud services, you
 deploy Horizon+Aggregator/Keystone behind your public endpoint, with your
 

Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-16 Thread Joe Gordon
On Fri, Apr 10, 2015 at 4:01 AM, Victor Stinner vstin...@redhat.com wrote:

  https://wiki.openstack.org/wiki/Python3#Dependencies appears to be
 fairly out of date.

 You're right. I updated this wiki page. In practice, much more OpenStack
 clients, Common Libraries and Development Tools are already Python 3
 compatible. I added the link to my pull request for Oslo Messaging.

  It would be nice to get a better sense of what the remaining libraries
 to port over are before the summit so we can start planning how to do the
 python34 migration.

 I checked quickly. There are small libraries like pyEClib required by
 Swift, but the major blocker libraries are: MySQL-Python, suds, Paste. For
 oslo.db, it's already Python 3 compatible no?


 * MySQL-Python

  MySQL-Python doesn't look to be active (last commit in January 2014).
 There are multiple Python 3 pending pull requests:
 https://github.com/farcepest/MySQLdb1/pulls

 Mike Bayer is evaluating PyMySQL which is Python 3 compatible:
 https://wiki.openstack.org/wiki/PyMySQL_evaluation

 See also https://github.com/farcepest/moist (is it alive? is it Python 3
 compatible?)


 * suds

 There is https://bitbucket.org/jurko/suds : a fork compatible with Python
 3. Global requirements contain this comment:

 # NOTE(dims): suds is not python 3.x compatible, suds-jurko is a fork that
 # works with py3x. oslo.vmware would convert to suds-jurko first then nova
  # and cinder would follow. suds should be removed immediately once those
 # projects move to suds-jurko for all jobs.


 * Paste

 I already fixed Python 3 compatibility issues and my changes were merged,
 but there is no release including my fixes yet :-(

  I heard that Paste is completely outdated and should be replaced. Ok, but
 in practice it's still used and not Python 3 compatible.

 Workaround: use the development (git) version of Paste.
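
  Concretely, that means a requirements entry pointing at a git checkout,
  something like the following (the repository URL here is illustrative;
  check where Paste development actually lives):

      # install Paste from a development checkout instead of PyPI
      -e git+https://github.com/cdent/paste#egg=Paste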


 For the full list, see the wiki page:
 https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects


Thanks for updating the wiki page that is a very useful list.

From the looks of things, it seems like nova getting Python3 support in
Liberty is not going to happen. But we can make good progress in getting the
dependencies sorted out, by fixing some and switching a few out for better
ones.

What are your thoughts on how to tackle sqlalchemy-migrate? It looks like
that is a blocker for several projects, and something I think we have
wanted to move off of for some time now.



 Victor



Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-14 Thread Joe Gordon
On Sun, Apr 12, 2015 at 3:43 PM, Robert Collins robe...@robertcollins.net
wrote:

 Right now we do something that upstream pip considers wrong: we make
 our requirements.txt be our install_requires.

 Upstream there are two separate concepts.

 install_requirements, which are meant to document what *must* be
 installed to import the package, and should encode any mandatory
 version constraints while being as loose as otherwise possible. E.g.
 if package A depends on package B version 1.5 or above, it should say
  B>=1.5 in A's install_requires. They should not specify maximum
 versions except when that is known to be a problem: they shouldn't
 borrow trouble.

 deploy requirements - requirements.txt - which are meant to be *local
 to a deployment*, and are commonly expected to specify very narrow (or
 even exact fit) versions.
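
  For concreteness, a sketch of that upstream split (package names and
  versions illustrative):

      # setup.py for package A: loose, mandatory constraints only
      from setuptools import setup

      setup(
          name='A',
          version='1.0',
          install_requires=['B>=1.5'],  # lower bound only; don't borrow trouble
      )

      # requirements.txt for one *deployment* of A: exact pins
      #   B==1.7.2
      #   C==0.3.1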


Link to where this is documented? If this isn't written down anywhere, then
that should be a pre-requisite to this conversation. Get upstream to
document this.



 What pbr, which nearly if not all OpenStack projects use, does, is to
 map the contents of requirements.txt into install_requires. And then
 we use the same requirements.txt in our CI to control whats deployed
 in our test environment[*]. and there we often have tight constraints
 like seen here -

 http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n63

 I'd like to align our patterns with those of upstream, so that we're
 not fighting our tooling so much.

 Concretely, I think we need to:
  - teach pbr to read in install_requires from setup.cfg, not
 requirements.txt
  - when there are requirements in setup.cfg, stop reading requirements.txt
   - separate out the global install_requirements from the global CI
 requirements, and update our syncing code to be aware of this

 Then, setup.cfg contains more open requirements suitable for being on
 PyPI, requirements.txt is the local CI set we know works - and can be
 much more restrictive as needed.
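
  A sketch of what that could look like — the setup.cfg section shown is
  hypothetical, since pbr would have to grow support for reading it:

      [metadata]
      name = example-project

      # hypothetical: loose install-time requirements move here...
      [install_requires]
      oslo.config >= 1.9
      six >= 1.9

  ...while requirements.txt keeps the narrow, CI-tested pins.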

 Thoughts? If there's broad apathy-or-agreement I can turn this into a
 spec for fine coverage of ramifications and corner cases.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud



Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-14 Thread Joe Gordon
On Tue, Apr 14, 2015 at 2:36 AM, Thierry Carrez thie...@openstack.org
wrote:

 Robert Collins wrote:
  On 13 April 2015 at 22:04, Thierry Carrez thie...@openstack.org wrote:
  How does this proposal affect stable branches ? In order to keep the
  breakage there under control, we now have stable branches for all the
  OpenStack libraries and cap accordingly[1]. We planned to cap all other
  libraries to the version that was there when the stable branch was
  cut.  Where would we do those cappings in the new world order ? In
  install_requires ? Or should we not do that anymore ?
 
  [1]
 
 http://specs.openstack.org/openstack/openstack-specs/specs/library-stable-branches.html
 
   I don't think there's a hard and fast answer here. What's proposed
   there should work fine.
  [...]
 
  tl;dr - I dunno :)


This is the part of the puzzle I am the most interested in: making sure
stable branches don't break out from underneath us, forcing us into
fire-fighting mode.
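
For reference, the caps land as upper bounds in the stable requirements
file, along these lines (version numbers illustrative):

    # stable/kilo entry: lower bound from when the branch was cut,
    # capped below the next minor series
    oslo.config>=1.9.3,<1.10.0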



 This is not our first iteration at this, and it feels like we never come
 up with a solution that covers all the bases. Past failure can generally
 be traced back to the absence of the owner of a critical puzzle piece...
  so what is the way forward?

  Should we write an openstack-spec and pray that all the puzzle piece
  owners review it?

  Should we lock up all the puzzle piece owners in the QA/Infra/RelMgt
  Friday sprint room in Vancouver and get them (us) to sort it out?


++, we really need to get this sorted out.



 --
 Thierry Carrez (ttx)



Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-14 Thread Joe Gordon
On Sun, Apr 12, 2015 at 6:09 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 13 April 2015 at 12:53, Monty Taylor mord...@inaugust.com wrote:

  What we have in the gate is the thing that produces the artifacts that
  someone installing using the pip tool would get. Shipping anything with
   those artifacts other than a direct communication of what we tested is
  just mean to our end users.

 Actually its not.

 What we test is point in time. At 2:45 UTC on Monday installing this
 git ref of nova worked.

  No one can reconstruct that today.

 I entirely agree with the sentiment you're expressing, but we're not
 delivering that sentiment today.

 We need to balance the inability to atomically update things - which
 forces a degree of freedom on install_requires - with being able to
 give someone the same install that we tested.

 That is the fundamental tension that we're not handling well, nor have
 I seen a proposal to tackle it so far.

 I'll have to spend some time noodling on this, but one of the clear
 constraints is that install_requires cannot both be:
  - flexible enough to permit us to upgrade requirements across many
 git based packages [because we could do coordinated releases of sdists
 to approximate atomic bulk changes]
   - tight enough to give the next person trying to run that ref
 of the package the same things we installed in CI.

 - I think we need something other than install_requires

 ...
  I disagree that anything is broken for us that is not caused by our
  inability to remember that distro packaging concerns are not the same as
  our concerns, and that the mechanism already exists for distro pacakgers
  to do what they want. Furthermore, it is caused by an insistence that we
   need to keep versions open for some ephemeral reason such as "upstream
   might release a bug fix". Since we all know that if it's not tested,
  it's broken - any changes to upstream software should be considered
  broken until proven otherwise. History over the last 5 years has shown
  this to be accurate more than the other thing.

 This seems like a strong argument for really being able to reconstruct
 what was in CI.

   If we pin the stable branches with hard pins of direct and indirect
   dependencies, we can have our stable branch artifacts be installable.
   That's awesome. If there is a bugfix release or a security update to a
   dependent library, someone can propose it. Otherwise, the stable
   release should not be moving.

 Can we do that in stable branches? We've still got the problem of
 bumping dependencies across multiple packages.


We cannot do this today with 'pip install -r requirements.txt', but we can
with 'pip install --no-deps -r requirements.txt' if requirements.txt includes
all transitive dependencies. And then we have to figure out transitive
dependencies for all projects, etc.
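
Roughly, with the transitive set captured from a known-good CI run:

    # on the CI node, record the exact set of packages that was tested
    pip freeze > requirements.txt

    # later, reproduce that environment without letting pip resolve
    # anything on its own
    pip install --no-deps -r requirements.txt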

What do you mean by bumping dependencies across multiple packages?



 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud



Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-14 Thread Joe Gordon
On Tue, Apr 14, 2015 at 2:55 PM, Chris Dent chd...@redhat.com wrote:

 On Tue, 14 Apr 2015, Joe Gordon wrote:

  Upstream there are two separate concepts.

 install_requirements, which are meant to document what *must* be
 installed to import the package, and should encode any mandatory
 version constraints while being as loose as otherwise possible. E.g.
 if package A depends on package B version 1.5 or above, it should say
  B>=1.5 in A's install_requires. They should not specify maximum
 versions except when that is known to be a problem: they shouldn't
 borrow trouble.

 deploy requirements - requirements.txt - which are meant to be *local
 to a deployment*, and are commonly expected to specify very narrow (or
 even exact fit) versions.


 Link to where this is documented? If this isn't written down anywhere,
 then
 that should be a pre-requisite to this conversation. Get upstream to
 document this.


 I don't know where it is documented but this was the common wisdom I
 knew from the Python community since long before coming to the
 OpenStack community. To me, seeing a requirements.txt in a repo that
 represents a class of an app or library (rather than an instance of
 a deployment) was quite a surprise.

 (This doesn't have that much bearing on the practical aspects of
 this conversation, just wanted to add some anecdata that the precedent
 described above is not weird or alien in any way.)


https://packaging.python.org/en/latest/requirements.html

Turns out it was easier than I thought to find the documentation for this.




 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [nova] novaclient 'stable-compat-jobs-{name}' broken

2015-04-14 Thread Joe Gordon
On Thu, Apr 9, 2015 at 10:02 PM, melanie witt melwi...@gmail.com wrote:

 Hi all,

 The following 'stable-compat-jobs-{name}' build jobs have been broken the
 past two days, blocking all novaclient patches from passing jenkins checks:

 gate-tempest-dsvm-neutron-src-python-novaclient-icehouse
 gate-tempest-dsvm-neutron-src-python-novaclient-juno

 The original purpose of these jobs was to check that patches proposed to
 master wouldn't break in stable branches. This was before we started
 pinning novaclient versions on stable branches. Now that we've pinned, the
 way these jobs were passing was by installing the current novaclient, then
 uninstalling it, then installing a version from pypi that fits within the
 global requirements for the branch in question (icehouse or juno), then
 running the tests.

 Well, recently this stopped working because for some reason, devstack is
 no longer able to uninstall the latest version:

 Found existing installation: python-novaclient 2.23.0.post14
 Can't uninstall 'python-novaclient'. No files were found to uninstall.

 And then I see the following error as a result:

  pkg_resources.ContextualVersionConflict: (python-novaclient 2.23.0.post14
  (/opt/stack/new/python-novaclient),
  Requirement.parse('python-novaclient<=2.20.0,>=2.18.0'),
  set(['ceilometer']))
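
  (The conflict is easy to reproduce in isolation; a sketch using the range
  from the traceback above:

      import pkg_resources

      # raises a VersionConflict because the installed 2.23.0.post14
      # falls outside the requested range
      pkg_resources.require('python-novaclient<=2.20.0,>=2.18.0')
  )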

 I asked about this in #openstack-infra and was told we really shouldn't be
 running the src build jobs on patches proposed to master anyhow, and that
 it's not the right flow for devstack to install the latest, then uninstall
 it, then install an older global reqs compatible version anyway.

 Given that, is it okay if I propose a patch to remove the
 'stable-compat-jobs-{name}' build jobs for python-novaclient in
 project-config?


++



 Then after that, how are we supposed to go about cutting stable branches
 for novaclient? And how can we get the 'stable-compat-jobs-{name}' jobs
 running on only those respective branches? In project-config I didn't
 understand how to limit build jobs to only patches proposed to a stable
 branch.


Cutting a branch for novaclient is easy; anyone with release powers can do
it. Just find the right sha1 to create a new branch from and go. We should
do this for Kilo, in fact. As for the proper project-config settings, I'm
not sure off the top of my head, but it is doable.



 I'd appreciate any insights.

 Thanks,
 melanie (melwitt)







Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-09 Thread Joe Gordon
On Thu, Apr 9, 2015 at 9:25 AM, Victor Stinner vstin...@redhat.com wrote:

 Hi,

 During the last OpenStack Summit at Paris, we discussed how we can port
 OpenStack to Python 3, because eventlet was not compatible with Python 3.
 There are multiple approaches: port eventlet to Python 3, replace eventlet
 with asyncio, replace eventlet with threads, etc. We decided to not take a
 decision and instead investigate all options.

 I fixed 4 issues with monkey-patching in Python 3 (importlib, os.open(),
 threading.RLock, threading.Thread). Good news: the just released eventlet
 0.17.3 includes these fixes and it is now fully compatible with Python 3!
  For example, the Oslo Messaging test suite now passes with this eventlet
 version! Currently, eventlet is disabled in Oslo Messaging on Python 3
 (eventlet tests are skipped).
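
  A quick smoke test of the fixes — a minimal sketch runnable under
  Python 3 with eventlet >= 0.17.3:

      import eventlet
      eventlet.monkey_patch()  # the call that used to break on Python 3

      import threading

      lock = threading.RLock()  # RLock was one of the patched spots

      def worker(n):
          with lock:
              print('hello from green thread', n)

      pool = eventlet.GreenPool()
      for i in range(3):
          pool.spawn(worker, i)
      pool.waitall()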

  I just sent a patch for requirements and Oslo Messaging to bump to
  eventlet 0.17.3, but it will have to wait until master opens up for
  Liberty.

https://review.openstack.org/#/c/172132/
https://review.openstack.org/#/c/172135/

 It becomes possible to port more projects depending on eventlet to Python
 3!


Awesome!



  The Liberty cycle will be a good opportunity to port more OpenStack
  components to Python 3. Most OpenStack clients and Common Libraries are
  *already* Python 3 compatible; see the wiki page:

https://wiki.openstack.org/wiki/Python3


https://wiki.openstack.org/wiki/Python3#Dependencies appears to be fairly
out of date. For example, hacking works under python34, as does
oslo.messaging per this email, etc.

Also, what is the status of all the dependencies in
https://github.com/openstack/nova/blob/master/requirements.txt and more
generally
https://github.com/openstack/requirements/blob/master/global-requirements.txt

It would be nice to get a better sense of what the remaining libraries to
port over are before the summit so we can start planning how to do the
python34 migration.
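
One way to get a rough, mechanical answer is to check each requirement's
trove classifiers on PyPI. A best-effort sketch — classifiers are only as
accurate as what projects declare, and Brett Cannon's caniusepython3 tool
automates this kind of check:

    import json
    import urllib.request

    def supports_py3(package):
        """Best-effort check of a package's declared Python 3 support."""
        url = 'https://pypi.org/pypi/%s/json' % package
        raw = urllib.request.urlopen(url).read().decode('utf-8')
        classifiers = json.loads(raw)['info']['classifiers']
        return any(c.startswith('Programming Language :: Python :: 3')
                   for c in classifiers)

    print(supports_py3('six'))  # True: six declares py3 classifiers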



 --

 To replace eventlet, I wrote a spec to replace it with asyncio:

https://review.openstack.org/#/c/153298/

 Joshua Harlow wrote a spec to replace eventlet with threads:

https://review.openstack.org/#/c/156711/

  But then he wrote a single spec, "Replace eventlet + monkey-patching with
  ??", which covers threads and asyncio:

https://review.openstack.org/#/c/164035/

 Victor



[openstack-dev] [nova] Nova Project Priorities for Liberty

2015-04-08 Thread Joe Gordon
Hi All,

After moderate success with our Kilo priorities effort, and with both Nova
PTL candidates mentioning they wish to continue the process, it's time to
start thinking about the new priority list for Liberty.

Just like last time [0]:
We are now collecting ideas for project priorities on this etherpad [1],
with the goal of discussing and finalizing the list of priorities for
Liberty at the summit.


[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-October/047914.html
[1] https://etherpad.openstack.org/p/liberty-nova-priorities


Re: [openstack-dev] Fixing the Nova Core Reviewer Frustration [was Re: [Nova] PTL Candidacy]

2015-04-07 Thread Joe Gordon
On Tue, Apr 7, 2015 at 10:02 AM, James Bottomley 
james.bottom...@hansenpartnership.com wrote:

 On Tue, 2015-04-07 at 11:27 +1000, Michael Still wrote:
  Additionally, we have consistently asked for non-cores to help cover
  the review load. It doesn't have to be a core that notices a problem
  with a patch -- anyone can do that. There are many people who do help
  out with non-core reviews, and I am thankful for all of them. However,
  I keep meeting people who complain about review delays, but who don't
  have a history of reviewing themselves. That's confusing and
  frustrating to me.

 I can understand why you're frustrated, but not why you're surprised:
 the process needs to be different.  Right now the statement is that for
 a patch series to be accepted it has to have a positive review from a
 core plus one other, however the one other can be a colleague, so it's
 easy.  The problem, as far as submitters see it, is getting that Core
 Reviewer.  That's why so much frenzy (which contributes to your
 frustration) goes into it.  And why all the complaining which annoys
 you.

 To fix the frustration, you need to fix the process:  Make the cores
 more of a second level approver rather than a front line reviewer and I
 predict the frenzy to get a core will go down and so will core
 frustration.  Why not require a +1 from one (or even more than one)
 independent (for some useful value of independent) reviewer before the
 cores will even look at it?  That way the cores know someone already
 thought the patch was good, so they're no longer being pestered to
 review any old thing and the first job of a submitter becomes to find an
 independent reviewer rather than go bother a core.


++, I actually already prefer to focus my reviews on patches that already
have a +1.

We have been using a trivial patch monkey process (see the bottom of
https://etherpad.openstack.org/p/kilo-nova-priorities-tracking) that is
similar to this with great results already.



 James





Re: [openstack-dev] Fixing the Nova Core Reviewer Frustration [was Re: [Nova] PTL Candidacy]

2015-04-07 Thread Joe Gordon
On Tue, Apr 7, 2015 at 11:12 AM, Tim Bell tim.b...@cern.ch wrote:

  -Original Message-
  From: James Bottomley [mailto:james.bottom...@hansenpartnership.com]
  Sent: 07 April 2015 19:03
  To: Michael Still
  Cc: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] Fixing the Nova Core Reviewer Frustration [was
 Re:
  [Nova] PTL Candidacy]
 
  On Tue, 2015-04-07 at 11:27 +1000, Michael Still wrote:
   Additionally, we have consistently asked for non-cores to help cover
   the review load. It doesn't have to be a core that notices a problem
   with a patch -- anyone can do that. There are many people who do help
   out with non-core reviews, and I am thankful for all of them. However,
   I keep meeting people who complain about review delays, but who don't
   have a history of reviewing themselves. That's confusing and
   frustrating to me.
 
  I can understand why you're frustrated, but not why you're surprised:
  the process needs to be different.  Right now the statement is that for
 a patch
  series to be accepted it has to have a positive review from a core plus
 one other,
  however the one other can be a colleague, so it's easy.  The problem,
 as far as
  submitters see it, is getting that Core Reviewer.  That's why so much
 frenzy
  (which contributes to your
  frustration) goes into it.  And why all the complaining which annoys you.
 
  To fix the frustration, you need to fix the process:  Make the cores
 more of a
  second level approver rather than a front line reviewer and I predict
 the frenzy
  to get a core will go down and so will core frustration.  Why not
 require a +1
  from one (or even more than one) independent (for some useful value of
  independent) reviewer before the cores will even look at it?  That way
 the cores
  know someone already thought the patch was good, so they're no longer
 being
  pestered to review any old thing and the first job of a submitter
 becomes to find
  an independent reviewer rather than go bother a core.
 

  If I take a case that we were very interested in (
  https://review.openstack.org/#/c/129420/) for nested project quota, we
  seemed to need two +2s from core reviewers on the spec.

 There were many +1s but these did not seem to result in an increase in
 attention to get the +2s. Initial submission of the spec was in October but
 we did not get approval till the end of January.


I'll bite. As one of the '+2's on your spec, I don't think this is a fair
characterization of what happened.

* This spec took 36 revisions, but most of those were back to back in the
same day with no feedback. So this actually translates to about 9 or so
reviewed revisions. For a big spec like this, I don't think that is too bad.
* The first set of feedback came a few days after the first draft was
posted, which isn't too bad.
* The first +1 was on December 11th. So the span from the first +1 to
being merged was December to January, not October to January.
* The 'many +1s' were all from the group proposing the patch, and as a
reviewer with +2 power, I generally discount those +1s since they are
inherently fairly biased. That said, a lack of +1s from the author's own
teammates would be an even worse sign.
* The final revision was posted on December 18th (ignoring the trivial
cleanup/rebase I ran) and this finally landed on January 26th. This is
where I think we could have done better; it shouldn't have taken over a
month to get 2 +2s at this point.
* The fact that the RST didn't render properly until I went through and
fixed it myself was a bad sign:
https://review.openstack.org/#/c/129420/35..36/specs/kilo/approved/nested-quota-driver-api.rst,cm


In short, I think we dropped the ball a little bit on getting the final +2
on this, but by the time this was really ready it was already very late in
the cycle.


 Unfortunately, we were unable to get the code into the right shape after
 the spec approval to make it into Kilo.

  One of the issues for the academic/research sector is that significant
  resources are available from funded projects, but these are time
  limited. Thus, if a blueprint and code commit cannot be completed within
 the window for the project, the project ends and resources to complete are
  no longer available. Naturally, rejections on quality grounds such as code
  issues or lack of test cases are completely reasonable, but review latency
  can extend the time to delivery significantly.


This is the first time I heard of any time window issues for this
blueprint, so I am not sure how you expected us to work within said window.
where there are some time constraints like this, you should just raise
awareness of this early on in the process so we can work around it.



 Luckily, in this case, the people concerned are happy to continue to
 completion (and the foundation is sponsoring the travel for the summit too)
 but this would not always be the case.

 Tim

  
